CN114452646A - Virtual object perspective processing method and device and computer equipment - Google Patents

Virtual object perspective processing method and device and computer equipment

Info

Publication number
CN114452646A
Authority
CN
China
Prior art keywords
camera
source
target
virtual object
angle
Prior art date
Legal status
Pending
Application number
CN202011239948.4A
Other languages
Chinese (zh)
Inventor
唐声福
严吉
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202011239948.4A
Publication of CN114452646A


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 Changing parameters of virtual cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation

Abstract

The application provides a perspective processing method and apparatus for a virtual object, and a computer device. While a game runs, the computer device acquires in real time the target camera field angle of a first virtual application scene, together with the source camera field angle and source mesh model of a first virtual object in that scene. It then corrects the spatial position of each vertex of the source mesh model using the projection matrices corresponding to the target camera field angle and the source camera field angle, obtaining a target mesh model in the world coordinate system. At the same time, it performs offset correction on the source camera spatial position under the source camera field angle and on the source rendering spatial position of the first virtual object according to a field-angle change constraint condition, obtaining the target camera spatial position under the target camera field angle and the target rendering spatial position of the first virtual object. The target mesh model is then rendered to produce a target rendered image meeting the perspective-effect requirement of the first virtual object. Processing efficiency and reliability are improved, and the camera field angle during game operation is no longer restricted.

Description

Virtual object perspective processing method and device and computer equipment
Technical Field
The present application relates to the field of image processing applications, and in particular, to a method and an apparatus for perspective processing of a virtual object, and a computer device.
Background
With the development of computer communication technology, electronic games have become an important part of users' daily entertainment. During game development, the two-dimensional and three-dimensional models in a game generally need to be rendered. Taking a side-scrolling game under a monocular camera as an example, during model rendering, a virtual object that requires a specific perspective effect must first have its source mesh model perspective-corrected, so that the rendered virtual object exhibits the proper perspective effect under different camera field angles.
Specifically, referring to the currently used perspective processing flow for a virtual object shown in fig. 1: an art producer manually performs lattice deformation on the constructed source mesh model of the virtual object in a three-dimensional modeling tool, then imports the deformed target mesh model into a game engine to complete the subsequent model rendering flow and achieve the required perspective display effect.
However, manually deforming the lattice of a mesh model is influenced by the subjective vision of the art producer, so its reliability and precision are uncontrollable. Moreover, to guarantee the display effect of the final virtual object, the camera field angle in every game scene must match the camera field angle the art producer set in the three-dimensional modeling tool. This imposes a severe limitation, often fails to meet application requirements, and degrades user experience.
Disclosure of Invention
In order to achieve the above purpose, the embodiments of the present application provide the following technical solutions:
in one aspect, the present application provides a method for perspective processing of a virtual object, where the method includes:
acquiring a target camera field angle of a first virtual application scene, and a source camera field angle and a source mesh model of a first virtual object in the first virtual application scene;
correcting the spatial position of each vertex of the source mesh model by using the projection matrices corresponding to the target camera field angle and the source camera field angle, to obtain a target mesh model in a world coordinate system;
performing offset correction on the source camera spatial position under the source camera field angle and on the source rendering spatial position of the first virtual object according to a field-angle change constraint condition, to obtain a target camera spatial position under the target camera field angle and a target rendering spatial position of the first virtual object;
rendering the target mesh model according to the target camera spatial position and the target rendering spatial position, to obtain a target rendered image of the first virtual object.
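As a reading aid (not part of the claims), the four steps above can be sketched as the following pipeline. This is a minimal hypothetical sketch: every name and signature is an illustrative assumption, not the patent's implementation, and the three helpers stand for the steps detailed later in the description.

    # Hypothetical sketch of the claimed four-step pipeline; all names assumed.
    def correct_vertices(mesh, source_fov, target_fov):
        ...  # step 2: per-vertex offsets via the two projection matrices

    def offset_correct(cam_pos, render_pos, source_fov, target_fov):
        ...  # step 3: camera/render position offsets under the constraint
        return cam_pos, render_pos

    def render(mesh, cam_pos, render_pos):
        ...  # step 4: hand the corrected data to the rendering engine

    def perspective_process(scene, obj):
        # Step 1: acquire the field angles and the source mesh model.
        target_fov = scene.target_camera_fov          # computed each frame
        source_fov, source_mesh = obj.source_camera_fov, obj.source_mesh
        # Steps 2-4: correct the mesh and positions, then render.
        target_mesh = correct_vertices(source_mesh, source_fov, target_fov)
        cam_pos, render_pos = offset_correct(scene.source_camera_pos,
                                             obj.source_render_pos,
                                             source_fov, target_fov)
        return render(target_mesh, cam_pos, render_pos)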
Optionally, the obtaining a target camera field angle of the first virtual application scene, and a source camera field angle and a source mesh model of the first virtual object in the first virtual application scene includes:
acquiring a target camera field angle of a first virtual application scene, and a source camera field angle and a source mesh model corresponding to each virtual object in the first virtual application scene;
comparing the source camera field angle corresponding to each virtual object with the target camera field angle;
and determining a source camera field angle and a source mesh model corresponding to a first virtual object according to the comparison result, wherein the first virtual object is a virtual object whose corresponding source camera field angle is inconsistent with the target camera field angle.
Optionally, the method for obtaining the source camera field angle and the source mesh model of the first virtual object includes:
receiving a three-dimensional model data import request sent by an electronic device, wherein the three-dimensional model data import request is generated by the electronic device in response to a three-dimensional model data import operation on the first virtual object;
and parsing the three-dimensional model data import request to obtain the source camera field angle and the source mesh model of the first virtual object.
Optionally, the performing, by using the projection matrices corresponding to the target camera field angle and the source camera field angle, spatial position offset correction on each vertex of the source mesh model to obtain the target mesh model in the world coordinate system includes:
acquiring a first projection matrix of the target camera field angle, a second projection matrix of the source camera field angle, a camera transformation matrix between the camera coordinate system and the world coordinate system, and the source vertex spatial positions corresponding to all vertices in the source mesh model;
performing offset correction on the source vertex spatial positions by using the first projection matrix, the second projection matrix and the camera transformation matrix, to obtain the position offsets of the corresponding vertices in the world coordinate system;
and performing, according to the obtained plurality of position offsets, spatial position offset correction on the corresponding vertices of the source mesh model, to obtain a target mesh model of the first virtual object in the world coordinate system.
Optionally, the field-angle change constraint condition includes that the screen-space size of the same virtual object is the same under different camera field angles.
Optionally, the performing offset correction on the source camera spatial position under the source camera field angle and on the source rendering spatial position of the first virtual object according to the field-angle change constraint condition, to obtain the target camera spatial position under the target camera field angle and the target rendering spatial position of the first virtual object, includes:
acquiring source camera state parameters under the source camera field angle in the world coordinate system;
obtaining, according to the source camera state parameters, a target camera spatial position under the target camera field angle and rendering correction parameters of the first virtual object, wherein the rendering correction parameters include a position correction parameter for the source rendering spatial position and/or a model scaling of the first virtual object in the world coordinate system;
and correcting the source rendering spatial position of the first virtual object by using the rendering correction parameters, to obtain the target rendering spatial position.
Optionally, the source camera state parameters include a source camera spatial position, a source camera pitch angle, and a first included angle of the source camera position vector, where the first included angle is the angle between the source camera position vector and a first coordinate axis of the world coordinate system; the first coordinate axis may be the Y axis or the X axis of the world coordinate system. The source camera spatial position includes a first coordinate, the projection of the source camera on the first coordinate axis, and a second coordinate, the projection on a second coordinate axis of the world coordinate system, where the second coordinate axis includes the Z axis;
the obtaining a target camera spatial position under the target camera field angle and a rendering correction parameter of the first virtual object according to the source camera state parameters includes:
obtaining a camera source distance between the source camera and the origin of the world coordinate system under the source camera field angle, by using the first coordinate and the second coordinate;
obtaining a model scaling of the first virtual object by using the source camera field angle and the target camera field angle;
obtaining the target camera spatial position under the target camera field angle according to the camera source distance, the model scaling and the first included angle;
and obtaining a rendering correction parameter of the first virtual object according to the model scaling, the first coordinate, the source camera pitch angle and the first included angle.
Optionally, the obtaining the target camera spatial position under the target camera field angle according to the camera source distance, the model scaling and the first included angle includes:
obtaining a camera distance offset by using the camera source distance and the model scaling;
obtaining a camera position offset under the target camera field angle by using the first coordinate, the source camera pitch angle, the first included angle, the camera distance offset and the camera source distance, wherein the camera position offset refers to the position offset, projected on the Z axis, of the target camera relative to the source camera;
obtaining a third coordinate under the target camera field angle by using the camera position offset, the first included angle, the camera distance offset and the camera source distance, wherein the third coordinate is the coordinate projected by the target camera on the Z axis;
obtaining a fourth coordinate under the target camera field angle by using the first included angle, the camera distance offset and the camera source distance, wherein the fourth coordinate is the coordinate projected by the target camera on the first coordinate axis;
and determining the target camera spatial position under the target camera field angle according to the third coordinate and the fourth coordinate;
the obtaining a rendering correction parameter of the first virtual object according to the model scaling, the first coordinate, the source camera pitch angle and the first included angle includes:
forming the rendering correction parameters of the first virtual object from the camera position offset and the model scaling.
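The patent specifies these sub-steps through the geometry of fig. 8 rather than closed-form equations in this text. The sketch below is one plausible reconstruction on the YOZ plane under the constant-screen-size constraint; every formula in it is an assumption for illustration, not the patent's exact computation (d is the camera source distance and pitch_prime the first included angle, obtained as in the detailed description):

    import math

    # Assumed reconstruction of the target-camera geometry of fig. 8.
    def target_camera_position(d, pitch_prime, source_fov_deg, target_fov_deg):
        # Model scaling from the two field angles (assumed dolly-zoom relation).
        scale = (math.tan(math.radians(target_fov_deg) / 2)
                 / math.tan(math.radians(source_fov_deg) / 2))
        # Camera distance offset from the source distance and the scaling
        # (assumed form that keeps the screen-space size constant).
        d_target = d / scale
        # Third coordinate: the target camera's projection on the Z axis;
        # fourth coordinate: its projection on the first (Y) axis.
        c_z_target = d_target * math.sin(pitch_prime)
        c_y_target = d_target * math.cos(pitch_prime)
        return (c_y_target, c_z_target), scale

The returned scale would then enter the rendering correction parameters together with the camera position offset, as the last sub-step above describes.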
In another aspect, the present application further provides a virtual object perspective processing apparatus, including:
the system comprises a first data acquisition module, a second data acquisition module and a third data acquisition module, wherein the first data acquisition module is used for acquiring a target camera field angle of a first virtual application scene, and a source camera field angle and a source grid model of a first virtual object in the first virtual application scene;
a first position correction module, configured to correct the spatial position of each vertex of the source mesh model by using the projection matrices corresponding to the target camera field angle and the source camera field angle, to obtain a target mesh model in a world coordinate system;
a second position correction module, configured to perform offset correction on the source camera spatial position under the source camera field angle and on the source rendering spatial position of the first virtual object according to a field-angle change constraint condition, to obtain a target camera spatial position under the target camera field angle and a target rendering spatial position of the first virtual object;
and a rendering module, configured to render the target mesh model according to the target camera spatial position and the target rendering spatial position, to obtain a target rendered image of the first virtual object.
In yet another aspect, the present application further proposes a computer device, comprising:
a communication module;
a memory, configured to store a program of the virtual object perspective processing method described above;
and a processor, configured to load and execute the program stored in the memory, to implement the steps of the virtual object perspective processing method described above.
In yet another aspect, the present application further proposes a readable storage medium storing a computer program that, when loaded and executed by a processor, implements the steps of the virtual object perspective processing method described above.
Based on the above technical solution, in the embodiments of the present application, an art producer no longer needs to perform perspective correction on the first virtual object (i.e., a virtual object requiring perspective correction) in a three-dimensional modeling tool. Instead, while an application such as a side-scrolling game runs, the computer device (e.g., one running a game engine) directly obtains the target camera field angle of the first virtual application scene (i.e., the virtual application scene at any moment of application running), together with the source camera field angle and source mesh model of the first virtual object in that scene. It then corrects the spatial position of each vertex of the source mesh model using the projection matrices corresponding to the target camera field angle and the source camera field angle, obtaining a target mesh model in the world coordinate system, and performs offset correction on the source camera spatial position under the source camera field angle and on the source rendering spatial position of the first virtual object according to the field-angle change constraint condition, obtaining the target camera spatial position under the target camera field angle and the target rendering spatial position of the first virtual object. Rendering the target mesh model then yields the target rendered image of the first virtual object. This processing mode automatically achieves perspective correction of the virtual object, places no limit on the camera field angle while the game runs, ensures that the target rendered image displayed for the virtual application scene at each running moment achieves the correspondingly required perspective effect, does not affect the application's original running state, and improves user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present application, and those skilled in the art can obtain other drawings from the provided drawings without creative effort.
FIG. 1 is a schematic view of the perspective processing flow currently employed for a virtual object;
FIG. 2 is a schematic diagram of a hardware structure of an embodiment of a computer device suitable for the method and apparatus for perspective processing of virtual objects proposed in the present application;
fig. 3 is a schematic hardware structure diagram of an embodiment of an electronic device suitable for the virtual object perspective processing method and apparatus proposed in the present application;
fig. 4 is a schematic flowchart of an alternative example of a virtual object perspective processing method proposed in the present application;
FIG. 5a is a schematic view of a camera coordinate system;
FIG. 5b is a schematic view of a world coordinate system;
fig. 6 is a schematic flowchart of yet another alternative example of the virtual object perspective processing method proposed in the present application;
fig. 7 is a schematic flowchart of yet another alternative example of the virtual object perspective processing method proposed in the present application;
fig. 8 is a schematic perspective projection processing diagram of a first virtual object on the YOZ plane of the world coordinate system in the virtual object perspective processing method proposed in the present application;
fig. 9 is a schematic flowchart of yet another alternative example of the virtual object perspective processing method proposed in the present application;
fig. 10 is a schematic structural diagram of an alternative example of the virtual object perspective processing apparatus proposed in the present application;
fig. 11 is a schematic structural diagram of yet another alternative example of the virtual object perspective processing apparatus proposed in the present application.
Detailed Description
Based on the technical problems described in the background section, this application aims to have the game engine (i.e., the computer device supporting normal operation of the game) automatically perform perspective correction for virtual objects with specific perspective requirements in applications such as side-scrolling games. That is, the vertices of the virtual object's mesh model and the camera are corrected in spatial position in real time, so that the virtual object exhibits a perspective effect different from the current camera viewing angle. Perspective correction is no longer performed manually by an art producer in a three-dimensional modeling tool (e.g., modeling software such as Maya/Max), which resolves the series of problems caused by manual lattice deformation of the mesh model.
In practical applications of the method, the mesh model of a virtual object can be constructed with Artificial Intelligence (AI) technology. Artificial intelligence is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results. The technology generally includes basic technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operating/interactive systems and mechatronics, as well as software technologies in several directions such as Computer Vision (CV), Speech Technology, Natural Language Processing (NLP), and Machine Learning (ML)/Deep Learning.
For applications such as a side-scrolling game under a monocular camera, a suitable artificial intelligence technique can be selected according to application requirements. For example, image recognition and processing, three-dimensional object reconstruction and other three-dimensional techniques in computer vision can be used to construct a mesh model (i.e., the mesh body of a virtual object model) of each game object (i.e., virtual object) in a game scene; the specific construction process is not described in detail in this application.
In the model rendering stage after perspective correction of the mesh model is completed, rendering technology can be used; the specific implementation is not detailed here. During rendering, artificial intelligence technology can also be combined in according to application requirements, to improve rendering efficiency and reliability.
In some embodiments, model rendering may be implemented with cloud rendering technology, that is, the three-dimensional program is placed on a remote server for rendering. The user terminal clicks a "cloud rendering" function button, through Web software or directly in a local three-dimensional program, accesses the resource over the internet, and sends a cloud rendering request to the server; the server executes the corresponding rendering task according to the request and feeds the resulting rendered picture back to the user terminal for display, achieving the desired animation display effect for the user to view and operate. Of course, the model rendering process may also be completed locally by an electronic device with sufficient data processing capability; the type of computer device performing the model rendering is not limited in this application and may be determined as appropriate.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be understood that "system", "apparatus", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this application and the appended claims, the terms "a", "an", "the" and/or "said" do not denote the singular, but may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may include other steps or elements. An element defined by the phrase "comprising a/an ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes the element.
In the description of the embodiments of the present application, "/" denotes an "or" relationship; for example, A/B may mean A or B. "And/or" herein merely describes an association between associated objects, and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more. The terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature.
Referring to fig. 2, a schematic diagram of the hardware structure of a computer device suitable for the virtual object perspective processing method and apparatus provided in this application is shown. As analyzed above, the computer device may be a server and/or an electronic device; the server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, a cloud server supporting cloud computing services, or the like. The server can be directly or indirectly connected, by wired or wireless communication, with electronic equipment such as a smartphone, tablet computer, notebook computer, desktop computer or netbook, to meet the data interaction requirements between the electronic equipment and the server; the specific communication connection mode can be determined as appropriate.
As shown in fig. 2, the computer device proposed in the embodiment of the present application may include, but is not limited to: a communication module 11, a memory 12, and a processor 13, wherein:
the number of the communication module 11, the memory 12 and the processor 13 may be at least one, and the communication module 11, the memory 12 and the processor 13 may all be connected to a communication bus to implement data communication therebetween, where the specific communication process may be determined as appropriate.
The communication module 11 may include a GSM module, a GPRS module, a WIFI module, and/or a communication module that implements other wireless communication networks or wired communication networks, and may further include a communication module such as a USB interface, a serial/parallel interface, and the like to implement data transmission between internal components of the computer device.
The memory 12 may be configured to store a program implementing the virtual object perspective processing method proposed in this application, and the processor 13 may be configured to load and execute the program stored in the memory 12 to implement the steps of the method proposed in the embodiments of this application; for the specific implementation, refer to, without limitation, the description of the corresponding parts of the method embodiments below, which is not repeated here.
In the present embodiment, the memory 12 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device or other volatile solid-state storage device. The processor 13 may be a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), or another programmable logic device.
It should be understood that the structure of the computer device shown in fig. 2 does not limit the computer device in the embodiments of the present application; in practical applications, the computer device may include more or fewer components than shown in fig. 2, or combine certain components. For example, if the computer device is an electronic device as listed above, the electronic device may be configured with an application engine supporting normal operation of an application, such as a game engine supporting operation of a side-scrolling game. In addition, from a hardware perspective, as shown in fig. 3, the electronic device may further include a display, input devices, output devices, an antenna, a power module, a sensor module, and the like, which are not listed here one by one.
In practical applications, in combination with the description of the relevant parts above, during the production stage of game animation, an art producer usually needs to use a three-dimensional modeling tool on the electronic device to complete the animation production of the game objects (denoted virtual objects) in each game scene, for example, constructing a three-dimensional or two-dimensional model of the corresponding virtual object from the obtained object resources.
In the embodiments of the present application, in combination with the above description of the technical concept, after completing the models of a side-scrolling game or another fixed-view-angle game in a three-dimensional modeling tool, the art producer may directly import the mesh model (also called the model mesh) of each virtual object into the computer device. During the running of the game, the target camera field angle of the game scene can then be calculated in real time and sent to the computer device, so that the game engine in the computer device (i.e., the compiled, editable core component of a computer game system or other interactive real-time image application) executes the virtual object perspective processing method provided by this application, performing perspective correction on the first virtual object in the game scene (i.e., a virtual object with unconventional perspective expression requirements) according to the target camera field angle at the current moment. The first virtual object can thus exhibit the perspective effect of any camera field angle. This improves perspective correction and rendering efficiency while the game runs, greatly reduces the workload of art producers, and overcomes the technical problems, caused by manual lattice deformation of the mesh model, of uncontrollable reliability and precision and of strictly limited in-game camera field angles.
The virtual object perspective processing method proposed in this application is described in detail below from the perspective of a computer device, but it is not limited to the processing described in the following embodiments. For the operations that the computer device performs according to the embodiments of this application, described here using flowcharts, it should be understood that the preceding or following operations are not necessarily performed precisely in order; the steps may instead be processed in reverse order or simultaneously. Meanwhile, other operations may be added to these processes, or one or several steps may be removed from them; variations not described in detail in the following embodiments still fall within the scope of the technical solution of this application.
Referring to fig. 4, a flow chart of an alternative example of the virtual object perspective processing method proposed by this application is shown. The method, suitable for the computer device described above, may include, but is not limited to, the following steps:
step S11, obtaining a target camera field angle of the first virtual application scene, and a source camera field angle and a source grid model of the first virtual object in the first virtual application scene;
in this embodiment of the application, the first virtual application scene may be any application scene to be rendered and output while an application runs, such as any game scene during the running of a side-scrolling game under a monocular camera. Correspondingly, the first virtual object may refer to a virtual object in the first virtual application scene with an unconventional perspective display requirement, that is, one for which rendering the conventional model cannot achieve a perspective display effect that meets the application requirement.
The camera here refers to the scene camera set in the virtual application scene to implement model rendering of virtual objects. It generally has a certain spatial position and direction, and a certain field angle FOV (Field of View), which determine the field range of the corresponding virtual application scene, that is, the display content of the virtual application scene. Therefore, when the field angle of the scene camera changes, the virtual objects included in the virtual application scene displayed to the user may also change, meeting the user's operational requirements on the application.
In the process of constructing the mesh model of each virtual object in a virtual application scene, an initial field angle corresponding to each virtual object is usually determined and recorded as the source camera field angle, which may not satisfy the perspective effect that different users expect the virtual object to show when output. For example, if the preset source camera field angle of virtual object 1 is a small value, such as 5° (degrees), the virtual object controlled by the game player will often show a visual effect close to orthographic projection, and the visual effect of perspective projection cannot be achieved.
In this regard, taking applications such as a side-scrolling game under a monocular camera as an example, this application proposes that, according to the target camera field angle obtained in real time while the game runs, model rendering be corrected for each first virtual object whose preset source camera field angle is inconsistent with the target camera field angle, i.e., each virtual object that cannot achieve the required perspective display effect. Information such as each vertex spatial position of the mesh model, the camera spatial position and the rendering spatial position is corrected, so that the rendered and displayed first virtual object (i.e., a three-dimensional game object in the game scene) achieves the required perspective effect.
It can be understood that the target camera field angle may be calculated in real time while the game runs and changes dynamically as the user plays. The calculation method of the target camera field angle is not limited in this application; the target camera field angles at different moments may be calculated in real time from the currently displayed scene data of the first virtual application scene, the operation data input by the user, and information such as the state parameters sensed by the electronic device on which the user plays the game. The specific implementation is not described in detail in this embodiment.
For the source mesh model of the first virtual object, the art producer may complete its construction in the three-dimensional modeling tool and import it into the computer device.
Step S12, correcting the spatial position of each vertex of the source mesh model by using the projection matrices corresponding to the target camera field angle and the source camera field angle respectively, to obtain a target mesh model in the world coordinate system;
continuing with the above description, the source mesh model of the first virtual object is constructed under the source camera view angle, but at this time, the target camera view angle that the user needs to show is not consistent with the source camera view angle, and if the source mesh model is continuously rendered, the final displayed rendered image cannot achieve the required visual effect of perspective projection. The present application therefore proposes to correct the source mesh model of the first virtual object according to the currently obtained target camera field angle.
Because the mesh model of a virtual object is usually composed of triangular faces (or faces of other shapes; triangular faces are used as the example in this application), and each triangular face is composed of three vertices, changing the spatial position of each vertex changes the shape of the corresponding triangular face and its relative position to adjacent faces, thereby changing the display structure of the mesh model and producing the required perspective animation effect. Therefore, to make the virtual object exhibit the required visual effect of perspective projection, this application proposes dynamically adjusting the spatial position of each vertex in the source mesh model of the virtual object, that is, performing offset correction on each vertex's spatial position; the specific implementation method is not limited. It can be understood that perspective projection is a single-plane projection that projects an object onto the projection plane by central projection, achieving a visual effect closer to human vision. It has a series of perspective characteristics: vanishing effects, a sense of distance, and regular changes in the apparent size of identically sized objects. It can vividly reflect the spatial form of an object and is generally used in animation, visual simulation and other applications that need to reflect reality. To satisfy the visual effect of perspective projection for the first virtual object, the perspective correction requirement can be met by adjusting the camera focal length or the zoom scale of the camera model (i.e., the scene camera), that is, by adopting a projection transformation.
The projection transformation may be a transformation of the spatial positions of the vertices in the mesh model by a projection matrix, projecting the three-dimensional scene onto the screen to form a two-dimensional image that achieves the visual effect of perspective projection. Therefore, in the embodiments of the present application, to make the first virtual object exhibit the rendering effect under the target camera field angle, the projection matrix of the source camera field angle corresponding to the first virtual object may be corrected, and the offset of the spatial position of each vertex in the source mesh model can be computed from it in reverse. The specific calculation process may refer to, but is not limited to, the description of the corresponding parts of the following embodiments; the detailed construction of projection matrices under different camera field angles is not described in this application.
In practical applications, data such as the target camera field angle and the source mesh model may be obtained in the camera coordinate system of camera space, i.e., a coordinate system with the camera as origin, as shown in fig. 5a. To facilitate perspective correction, these data may be converted to the world coordinate system, i.e., the three-dimensional spatial coordinate system of world space, as shown in fig. 5b. Specifically, the conversion of spatial position coordinates from the camera coordinate system to the world coordinate system can be realized with the camera transformation matrix, finally obtaining the target mesh model in the world coordinate system, that is, the corrected mesh model.
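As background for the matrix operations used below, here is a minimal sketch of a standard perspective projection matrix built from a field angle, and of moving a point from camera space back to world space via the inverse of the camera transformation matrix. This is generic graphics math under assumed conventions (column vectors, OpenGL-style clip space), not the patent's specific matrices:

    import numpy as np

    def perspective_matrix(fov_deg, aspect, near, far):
        # Standard perspective projection built from a vertical field angle.
        f = 1.0 / np.tan(np.radians(fov_deg) / 2.0)
        return np.array([
            [f / aspect, 0.0,  0.0,                          0.0],
            [0.0,        f,    0.0,                          0.0],
            [0.0,        0.0,  (far + near) / (near - far),  2.0 * far * near / (near - far)],
            [0.0,        0.0, -1.0,                          0.0],
        ])

    def camera_to_world(point_cam, m_view):
        # m_view maps world space to camera space; its inverse maps back.
        p = np.append(point_cam, 1.0)
        return (np.linalg.inv(m_view) @ p)[:3]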
Step S13, according to the field angle change constraint condition, offset correction is carried out on the source camera space position under the field angle of the source camera and the source rendering space position of the first virtual object, and the target camera space position under the field angle of the target camera and the target rendering space position of the first virtual object are obtained;
wherein, the field-angle change constraint condition may include: under different camera field angles, the screen-space size of the same virtual object is the same, ensuring that the screen-space size of the same virtual object is unchanged in the virtual application scene displayed by adjacent frames. For this reason, the camera spatial position and the spatial position of the first virtual object (which may be called the rendering spatial position) need to be corrected. The detailed correction process is not described in this embodiment.
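The constraint can be made concrete with the standard pinhole projection relation (an illustrative derivation, not quoted from the patent): an object of world-space size h at camera distance d under field angle θ occupies a screen-space fraction proportional to h / (d × tan(θ/2)). Keeping this fixed across the field-angle change from θ_s to θ_t therefore requires

    h_s / (d_s × tan(θ_s/2)) = h_t / (d_t × tan(θ_t/2)),

so the ratio tan(θ_t/2) / tan(θ_s/2) must be absorbed by a model scaling (h_t/h_s) and/or a camera distance change (d_s/d_t), which is what the model scaling and camera position offset of the later steps accomplish.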
Step S14, rendering the target mesh model according to the target camera spatial position and the target rendering spatial position, to obtain a target rendered image of the first virtual object.
Referring to the flow chart of the virtual object correction processing method shown in fig. 6: as described above, after finishing the construction of the first virtual object's mesh model, the art producer may import the source mesh model into the game engine of the computer device, which then carries out the vertex spatial position correction of the source mesh model, the source rendering spatial position correction (i.e., the spatial position correction of the virtual object), and the camera spatial position correction in the manners described in the above steps. In the model rendering stage, model rendering of the first virtual object can then be done with the corrected data, producing a target rendered image of the first virtual object that achieves the required perspective projection visual effect when displayed.
Specifically, the rendering process may be implemented by the rendering engine within the game engine, which calculates the target mesh model, animation, light and shadow, perspective, special effects and other elements and displays them on the screen; it may also process signals from the keyboard, mouse and other peripherals to implement communication between the game player and the electronic device.
To sum up, in the embodiments of the present application, the art producer only needs to complete the model construction of the first rendering object in the three-dimensional modeling tool, without considering the perspective effect, and import the constructed source mesh model into the computer device. While the game runs, the computer device (specifically, the game engine) directly obtains the target camera field angle of the first virtual application scene (i.e., the virtual application scene at any application running moment) at the current moment, together with the source camera field angle and source mesh model of the first virtual object in that scene. The projection matrices corresponding to the target camera field angle and the source camera field angle can then be used to correct the spatial position of each vertex of the source mesh model, obtaining the target mesh model in the world coordinate system; meanwhile, the camera spatial position and rendering spatial position are offset-corrected according to the field-angle change constraint condition. This processing method, which automatically achieves perspective correction of the virtual object, greatly improves perspective processing efficiency and reliability, no longer limits the camera field angle of each game scene while the game runs, ensures that the target rendered image displayed for the virtual application scene at each running moment achieves the correspondingly required perspective effect, does not affect the game running state, and improves user experience.
Referring to fig. 7, a flow chart of another optional example of the virtual object perspective processing method proposed in this application is shown. This embodiment may be an optional refinement of the method described in the previous embodiment, but the method is not limited to the refined implementation described here. In combination with the above analysis, this embodiment may be applied to a side-scrolling game under a monocular camera, in a scene requiring game objects to show the perspective effects of different camera field angles. Specifically, as shown in fig. 7, the method may include:
step S21, acquiring a target camera field angle of the first virtual application scene, and a source camera field angle and a source grid model corresponding to each virtual object in the first virtual application scene;
in the game animation production stage, a model mesh of each virtual object in the game scene may be constructed, and recorded as a source mesh model, and a corresponding source machine view angle may be determined, and these data may be imported into a game engine in the computer device. In this way, during the game playing process of the user, the game engine may display the currently displayed first virtual application scene in real time, that is, the target camera view angle of the first game scene, and the specific acquisition process is not described in detail.
Based on this, in some embodiments, the process of obtaining the source camera field angle and source mesh model of a virtual object may include: receiving the three-dimensional model data import request sent by the electronic device, which may be generated by the electronic device in response to a three-dimensional model data import operation on the virtual object. For example, after model construction of the virtual object is completed, an "import" function button may be clicked to generate the request, though generation is not limited to the manner described in this embodiment. The game engine may then parse the three-dimensional model data import request to obtain the source camera field angle and source mesh model of each virtual object.
It will be appreciated that the source camera field angle and source mesh model of every virtual object in the virtual application scene may be obtained as described above. In practical applications, the art producer can import each source mesh model directly into the game engine as soon as it is constructed, or construct the source mesh models of all virtual objects and then import them together; this application does not limit the choice, which may be determined as appropriate.
Moreover, in combination with the description of the computer device type in the corresponding part of the above embodiment, the game engine may be deployed in the electronic device, in which case the three-dimensional model data import request may be generated by the three-dimensional modeling tool in the electronic device. If the game engine is deployed on a server, the electronic device may establish a communication connection with the server and send the generated three-dimensional model data import request to it, so that the game engine obtains the source camera field angle, source mesh model, and so on of each virtual object.
Step S22, comparing the source camera field angle corresponding to each virtual object with the target camera field angle;
step S23, determining a source machine field angle and a source mesh model corresponding to the first virtual object according to the comparison result;
as described above, in the present application, only the perspective correction processing needs to be performed on the virtual object having the unconventional perspective display requirement, that is, the virtual object corresponding to the source camera view angle that is not consistent with the target camera view angle is marked as the first virtual object, and the virtual object corresponding to the source camera view angle that is consistent with the target camera view angle is marked as the second virtual object, so that the perspective effect that the user wants to display is the same as the perspective effect obtained by rendering and displaying the source mesh model according to the source camera view angle that the user has, and thus the perspective correction is not needed. Therefore, the embodiment of the present application mainly performs perspective correction processing on the first virtual object.
For example, during the running of a side-scrolling game, the target camera field angles of the game scene at different moments may differ, and, depending on the processing results of the three-dimensional modeling tool, the source camera field angles expressing the rendering effect of each game object in the scene may also differ; the specific determination method is described in the corresponding part of the foregoing embodiment. This embodiment takes the target camera field angles at different moments shown in Table 1, and the source camera field angles of virtual object A and virtual object B in the game scene at those moments, as the example for describing the virtual object perspective processing scheme.
TABLE 1

  Moment          Target camera field angle   Source field angle (object A)   Source field angle (object B)
  First moment    FOV1                        FOV1                            FOV1
  Second moment   FOV2                        FOV2                            FOV1
As shown in Table 1 above, at the first moment of the side-scrolling game run, the target camera field angle of the game scene is FOV1. At this moment, the source camera field angles corresponding to virtual object A and virtual object B, i.e., the camera field angles of the rendered images obtained by rendering their source mesh models, are both equal to the target camera field angle FOV1 at that moment. This means the rendering displayed to the user for virtual objects A and B already exhibits the conventional perspective effect, so their source mesh models can be rendered and displayed directly without correction.
When the game runs to the second moment, the target camera field angle of the game scene changes to FOV2. The source camera field angle corresponding to virtual object A is also FOV2, so no perspective correction of virtual object A is required. However, the source camera field angle of virtual object B is still FOV1, inconsistent with the target camera field angle FOV2 at the second moment, so virtual object B is a first virtual object and may be perspective-corrected according to the virtual object perspective processing method proposed in this application.
Therefore, in this method, the source camera field angles are compared with the target camera field angle at the same moment. After the first virtual objects, whose source camera field angles are inconsistent with the target camera field angle, are determined, only the source camera field angle and source mesh model corresponding to each first virtual object are subsequently corrected; the second virtual objects, consistent in the comparison result, can be model-rendered directly without the subsequent correction steps. The source camera field angle and source mesh model involved in the subsequent correction steps are therefore the model data of the first virtual object.
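A minimal sketch of this screening step follows; the object attributes and the tolerance are hypothetical, since the patent does not specify the engine's data structures:

    def select_first_virtual_objects(objects, target_fov, eps=1e-6):
        # Objects whose source camera field angle differs from the target
        # field angle need perspective correction ("first virtual objects");
        # the remaining ("second") virtual objects are rendered as-is.
        return [obj for obj in objects
                if abs(obj.source_camera_fov - target_fov) > eps]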
Step S24, acquiring a first projection matrix of the target camera field angle, a second projection matrix of the source camera field angle of the first virtual object, the camera transformation matrix between the camera coordinate system and the world coordinate system, and the source vertex spatial positions corresponding to all vertices in the source mesh model;
following the scene example shown in table 1 above, perspective correction needs to be performed on the virtual object B (i.e., the first virtual object), and for convenience of the subsequent description of the processing procedure, the present application symbolizes each model parameter of the acquired virtual object B and the relevant parameter at the target camera angle of view at the current time.
Specifically, the source vertex spatial position corresponding to each vertex in the source mesh model, that is, the vertex's original world-space coordinate position, may be denoted as V_world; the first projection matrix of the target camera field angle is denoted as M_projection1; the second projection matrix of the source camera field angle of the first virtual object is denoted as M_projection2; and the above-mentioned camera transformation matrix is denoted as M_view.
Step S25, performing offset correction on the source vertex spatial positions by using the first projection matrix, the second projection matrix and the camera transformation matrix, to obtain the position offset of each corresponding vertex in the world coordinate system;
Optionally, in order to implement spatial position correction of each vertex in the source mesh model of the first virtual object, in the embodiment of the present application, the position offset of each vertex, i.e., the world-space coordinate position offset V_WPO of each vertex, may be obtained according to the following formula (this embodiment describes, but is not limited to, this way of calculating the position offset):
V_WPO = V_world × (M_view × M_projection2 × (M_view × M_projection1)^(-1) - I);
in the position offset calculation formula, I represents the identity matrix and (·)^(-1) represents the matrix inverse. After the matrices are obtained in the above manner, they may be substituted into the formula to obtain the position offset, in the world coordinate system, of each vertex spatial coordinate position in the source mesh model of the first virtual object; the specific calculation process is not described in detail in this application.
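Reading the formula with row-vector conventions, a direct numpy sketch follows; the matrix layout and multiplication order are assumptions consistent with the formula as written above:

    import numpy as np

    def vertex_world_position_offsets(v_world, m_view, m_proj1, m_proj2):
        # v_world: (N, 4) homogeneous world-space vertex positions (row vectors).
        # m_view: camera transformation matrix; m_proj1 / m_proj2: projection
        # matrices of the target and source camera field angles respectively.
        correction = (m_view @ m_proj2
                      @ np.linalg.inv(m_view @ m_proj1)
                      - np.eye(4))
        # V_WPO = V_world x (M_view x M_proj2 x (M_view x M_proj1)^(-1) - I)
        return v_world @ correction

Adding the returned offsets to the original vertex positions then yields the vertices of the target mesh model, as step S26 below describes.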
Step S26, performing spatial position offset correction on the corresponding vertices of the source mesh model according to the obtained plurality of position offsets, to obtain a target mesh model of the first virtual object in the world coordinate system;
after the position offset of each vertex is determined in the above manner, and in combination with the above description of the mesh model of the virtual object, the source spatial position of the corresponding vertex in the source mesh model may be adjusted according to its position offset to obtain the target spatial position of that vertex. A change in vertex spatial positions deforms the triangular faces that constitute the mesh model, so that the overall structure of the source mesh model is deformed; the deformed mesh model is recorded as the target mesh model.
In some embodiments, for the vertex correction processing of the source mesh model of the first virtual object, an animation deformer in the game engine may obtain the target mesh model of the first virtual object according to a preset model deformation rule and the obtained position offset of each vertex; the specific implementation is not described in detail in the present application.
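Combining steps S25 and S26, the following is a minimal numpy sketch of the vertex correction. It assumes the row-vector convention implied by the left-multiplication order V_world × (…) and 4×4 homogeneous matrices, and it glosses over the perspective divide that a real engine pipeline would also handle; all names are illustrative:

```python
import numpy as np

def vertex_position_offsets(v_world: np.ndarray,
                            m_view: np.ndarray,
                            m_proj_target: np.ndarray,
                            m_proj_source: np.ndarray) -> np.ndarray:
    """Per-vertex world-space position offsets V_WPO (step S25).

    v_world:       (N, 4) homogeneous source vertex positions, row vectors
    m_view:        4x4 camera transformation matrix
    m_proj_target: 4x4 first projection matrix (target camera field angle)
    m_proj_source: 4x4 second projection matrix (source camera field angle)
    """
    correction = (m_view @ m_proj_source
                  @ np.linalg.inv(m_view @ m_proj_target)) - np.eye(4)
    return v_world @ correction

def target_mesh_vertices(v_world: np.ndarray, m_view, m_proj_target,
                         m_proj_source) -> np.ndarray:
    """Target mesh model vertices (step S26): source positions plus offsets."""
    return v_world + vertex_position_offsets(v_world, m_view,
                                             m_proj_target, m_proj_source)
```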
Step S27, acquiring source camera state parameters under the source camera field angle in the world coordinate system;
in the present application, the scene example shown in table 1 above and the camera position correction diagram of the projection on the YOZ plane of the world coordinate system shown in fig. 8 are again taken as the example. It can be understood that, because the camera does not rotate about the Z axis of the world coordinate system in the landscape game scene to which the present application applies, fig. 8 illustrates the perspective projection correction of the virtual object only for the case where the plane perpendicular to the screen plane in the landscape game is the YOZ plane, but the application is not limited thereto.
Therefore, based on the camera position correction diagram shown in fig. 8, the source camera state parameters obtained in this embodiment may include the source camera spatial position (e.g., the original camera position of the virtual object B under FOV1), the source camera pitch angle pitch, and the first included angle pitch′ of the source camera position vector. The first included angle pitch′ refers to the angle between the source camera position vector and a first coordinate axis of the world coordinate system (specifically the Y axis or the X axis; the Y axis is taken as the example here, and the projection processing with the X axis as the first coordinate axis is similar and not repeated). The source camera spatial position lies on a first plane formed by the first coordinate axis and a second coordinate axis (e.g., the Z axis) of the world coordinate system, such as the YOZ plane shown in fig. 8, and may include a first coordinate c_y projected by the source camera on the Y axis of the world coordinate system (when the projection on the XOZ plane is used instead, the first coordinate axis may be the X axis) and a second coordinate c_z projected on the Z axis of the world coordinate system. It can be seen that the source camera spatial position in this embodiment may be denoted as camera(c_y, c_z).
It should be noted that the content included in the source camera state parameters under the source camera field angle is not limited to the above, and the present application does not limit the manner of acquiring each parameter; for example, after the origin O (the target point) of the world coordinate system is determined, the parameters may be calculated from the related data of the source grid model and the running data corresponding to the first virtual application scene, and may be determined according to the content of each parameter, which is not detailed in the present application.
Step S28, obtaining a target camera space position under a target camera field angle and a rendering correction parameter of the first virtual object according to the source camera state parameter;
referring to FIG. 8, the spatial position camera (c) of the source camera is known according to the principle of mathematical operations such as trigonometric functiony,cz) In the case of (2), the first coordinate c may be usedyAnd a second coordinate czThe camera source distance do between the source camera and the world coordinate system origin at the source camera field angle is obtained, i.e.
Figure BDA0002768030360000191
Figure BDA0002768030360000192
In this embodiment, the rendering correction parameters may include a position correction parameter for the source rendering spatial position in the world coordinate system, and/or a model scaling for the first virtual object. The position correction parameter may include the first coordinate c_y, the camera source pitch angle pitch, the first included angle pitch′, and the like; for the acquisition of the model scaling dr, reference may be made to, but is not limited to, the following implementation:
using the source camera field angle FOV1 and the target camera field angle FOV2, the model scaling dr of the first virtual object is obtained; the tangent function tan() may specifically be used to construct the calculation formula of the model scaling dr, which is, but is not limited to, the following:
[Equation image in the original: the calculation formula of the model scaling dr, constructed from FOV1 and FOV2 using tan()]
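Since the exact expression survives only as an image, the sketch below uses one plausible tan()-based form, the ratio of half-angle tangents, which relates an object's projected screen size under the two field angles; this specific formula is an assumption, not taken from the patent:

```python
import math

def model_scaling(fov_source: float, fov_target: float) -> float:
    """Assumed tan()-based form of the model scaling dr (angles in radians).

    The patent states only that tan() may be used to construct the formula;
    the half-angle tangent ratio below is an illustrative guess.
    """
    return math.tan(fov_target / 2.0) / math.tan(fov_source / 2.0)
```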
Step S29, correcting the source rendering space position of the first virtual object by using the rendering correction parameters to obtain a target rendering space position;
step S210, rendering the target grid model according to the target camera space position and the target rendering space position to obtain a target rendering image of the first virtual object.
In this embodiment of the application, the rendering correction parameters yield the position offset value of the first virtual object, that is, the offset value of the source rendering spatial position, together with the model scaling of the first virtual object. In the model rendering stage, the rendering of the target mesh model may be implemented according to this offset value and the model scaling, so as to ensure that the rendered target rendering image of the first virtual object is consistent in screen-space size with the source rendering image that would be obtained by rendering the source mesh model.
It should be noted that, as shown in fig. 6, the mesh model vertex spatial position correction, the rendering spatial position correction, and the camera spatial position correction may be executed simultaneously to improve perspective correction efficiency; of course, other execution orders may also be adopted, and the execution order of the above steps in this embodiment is not limiting.
To sum up, during the running of an application such as a landscape game, the computer device may obtain the target camera field angle of the current first virtual application scene and determine the first virtual object in the first virtual application scene whose source camera field angle is inconsistent with the target camera field angle. It then performs spatial position correction on each vertex of the source mesh model of the first virtual object by using the projection matrices of the target camera field angle and the source camera field angle together with the camera transformation matrix, so as to obtain a target mesh model capable of achieving the desired perspective effect, and simultaneously corrects the rendering spatial position and the camera spatial position of the first virtual object by using the source camera state parameters. The computer device can thus render the target grid model according to the target camera spatial position and the target rendering spatial position, and after the obtained target rendering image of the first virtual object is displayed, the required perspective projection effect is achieved. Therefore, perspective correction and model rendering can be performed while a landscape game is running, which greatly improves processing efficiency and resolves the technical problems that manual lattice deformation of the source grid model with a three-dimensional modeling tool has uncontrollable reliability and accuracy, and that the target camera field angle of the game scene would otherwise be limited to be consistent with the preset source camera field angle.
Moreover, the method can meet unconventional perspective requirements in other fixed-field-angle games, which widens its application range; when situations such as unusual in-game elements need to be expressed, the virtual object perspective processing method provided by the application requires no scene switching, and the current game state can be fully retained. Compared with the manual perspective correction approach described in the background art, the scheme therefore broadens the application scope, greatly reduces the manual workload, and eases the burden on art producers.
In some embodiments provided in the present application, in combination with fig. 8, the above camera spatial position correction process and rendering spatial position correction process of the first virtual object are described in further detail; the present application is not limited to the refined processing manner of this embodiment. For the vertex spatial position correction process of the source mesh model, reference may be made to the description of the corresponding parts in the above embodiments, which is not repeated here. As shown in fig. 9, the virtual object perspective processing method proposed by the present application may include:
Step S31, obtaining a source camera space position, a source camera pitch angle and a first included angle of a source camera position vector under a source camera field angle in a world coordinate system;
Step S32, obtaining a camera source distance between a source camera and the origin of the world coordinate system under the field angle of the source camera by using the first coordinate and the second coordinate in the spatial position of the source camera;
Step S33, obtaining a model scaling of the first virtual object by using the source camera field angle and the target camera field angle;
regarding the implementation process of step S31 to step S33, reference may be made to the description of corresponding parts in the foregoing embodiments, which are not described in detail in this embodiment.
Step S34, obtaining a camera distance offset by using the camera source distance and the model scaling;
alternatively, referring to the perspective projection correction diagram on the YOZ plane shown in fig. 8, the camera distance offset dp may be calculated from the camera source distance do and the model scaling dr using a suitable formula, i.e., dp = do × (dr − 1), but the invention is not limited thereto.
Step S35, obtaining a camera position offset under a target camera field angle by using the first coordinate, the camera source pitch angle, the first included angle, the camera distance offset and the camera source distance;
as shown in fig. 8, the camera position offset may refer to the position offset, denoted offset, of the target camera relative to the source camera as projected on the Z axis.
Optionally, after the first coordinate c_y, the camera source pitch angle pitch, the first included angle pitch′, the camera distance offset dp, and the camera source distance do are obtained in the embodiments of the present application, the position offset of the target camera spatial position on the Z axis may be calculated by using the following formula, but is not limited to this calculation method:

[Equation image in the original: offset, the Z-axis position offset of the target camera, computed from c_y, pitch, pitch′, dp, and do]
Step S36, obtaining a third coordinate under the field angle of the target camera by using the camera position offset, the first included angle, the camera distance offset and the camera source distance;
as shown in FIG. 8, the third coordinate is the coordinate projected by the target camera on the Z axis and may be written as c_z′. Combining the parameter values obtained above, as shown in fig. 8, the present application can calculate the coordinate projected by the target camera on the Z axis under the target camera field angle by a trigonometric operation formula, such as c_z′ = (do + dp) × sin(pitch′) − offset, although the calculation is not limited thereto.
Step S37, obtaining a fourth coordinate under the field angle of the target camera by using the first included angle, the camera distance offset and the camera source distance;
Step S38, determining the space position of the target camera under the field angle of the target camera according to the third coordinate and the fourth coordinate;
as shown in FIG. 8, the fourth coordinate is the coordinate projected on the Y-axis by the target camera and can be denoted as cy' combining the above obtained parameter values, as shown in fig. 8, the present application can calculate the fourth coordinate c by using a trigonometric function operation formulay', i.e. c'yThe calculation method is not limited to (do + dp) × cos (pitch'). At this time, the target camera spatial position camera' (c) at the target camera angle of view in the YOZ plane in the world coordinate system can be obtainedy’,cz’)。
It should be noted that, for the landscape game scene applicable to the present application, the camera does not rotate in the Z-axis direction of the world coordinate system, and the foregoing embodiment only describes the perspective projection correction processing of the virtual object in the YOZ plane of the world coordinate system, and the perspective projection correction processing process of the virtual object in the XOZ plane is similar, and is not repeated in this application.
Step S39, obtaining a target rendering space position of the first virtual object using the camera position offset and the model scaling.
As described above, in order to keep the screen-space size of the first virtual object consistent, the position offset value (0, 0, −offset) and the scaling value (dr, dr, dr) of the first virtual object in fig. 8 may be determined according to the camera position offset offset and the model scaling dr obtained above; during the rendering of the target mesh model, the target rendering spatial position of the first virtual object may be determined according to these two parameters, so that the target rendering image obtained by model rendering satisfies the required perspective effect.
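Putting steps S31 to S39 together, the following is a minimal Python sketch of the camera and rendering spatial position correction on the YOZ plane (angles in radians). The dp, c_z′, and c_y′ expressions follow the formulas given in the text; the model scaling reuses the assumed tan-ratio form sketched earlier; and the Z-axis camera position offset, whose equation survives only as an image, is an explicitly marked stand-in assumption:

```python
import math

def correct_camera_and_rendering(c_y: float, c_z: float, pitch: float,
                                 fov_source: float, fov_target: float):
    """Sketch of steps S31-S39, projected on the YOZ plane."""
    do = math.hypot(c_y, c_z)               # camera source distance (S32)
    pitch_p = math.atan2(c_z, c_y)          # first included angle pitch' (S31)
    dr = math.tan(fov_target / 2) / math.tan(fov_source / 2)  # assumed dr (S33)
    dp = do * (dr - 1.0)                    # camera distance offset (S34)

    # Z-axis camera position offset (S35): the original formula is an image
    # placeholder, so this expression is a stand-in ASSUMPTION combining the
    # named inputs; substitute the real formula in practice.
    offset = dp * (math.sin(pitch_p) - math.cos(pitch_p) / math.tan(pitch))

    c_z_t = (do + dp) * math.sin(pitch_p) - offset  # third coordinate (S36)
    c_y_t = (do + dp) * math.cos(pitch_p)           # fourth coordinate (S37)

    target_camera_position = (c_y_t, c_z_t)         # camera'(c_y', c_z') (S38)
    render_offset = (0.0, 0.0, -offset)             # position offset value (S39)
    render_scale = (dr, dr, dr)                     # model scaling value (S39)
    return target_camera_position, render_offset, render_scale
```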
Referring to fig. 10, which shows a schematic structural diagram of an alternative example of the virtual object perspective processing apparatus proposed in the present application and which may be applied to the above computer device, the apparatus may include:
a first data obtaining module 21, configured to obtain a target camera field angle of a first virtual application scene, and a source camera field angle and a source mesh model of a first virtual object in the first virtual application scene;
in some embodiments presented herein, the first data acquisition module 21 may include:
the system comprises a first information acquisition unit, a second information acquisition unit and a third information acquisition unit, wherein the first information acquisition unit is used for acquiring a target camera field angle of a first virtual application scene, and a source camera field angle and a source grid model which correspond to each virtual object in the first virtual application scene;
optionally, the first information obtaining unit may include:
an import request receiving unit configured to receive a three-dimensional model data import request transmitted by an electronic device, the three-dimensional model data import request being generated by the electronic device in response to a three-dimensional model data import operation on a first virtual object;
and the import request analysis unit is used for analyzing the three-dimensional model data import request to obtain a source machine field angle and a source grid model of the first virtual object.
The first comparison unit is used for comparing the field angles of the source camera corresponding to the virtual objects with the field angles of the target camera respectively;
and the first determining unit is used for determining a source camera field angle and a source grid model corresponding to a first virtual object according to the comparison result, wherein the first virtual object refers to a virtual object corresponding to the source camera field angle which is inconsistent with the target camera field angle.
The first position correction module 22 is configured to perform spatial position correction on each vertex of the source mesh model by using the projection matrices corresponding to the target camera field angle and the source camera field angle, so as to obtain a target mesh model in a world coordinate system;
a second position correction module 23, configured to perform offset correction on the source camera space position under the source camera view angle and the source rendering space position of the first virtual object according to a view angle change constraint condition, to obtain a target camera space position under the target camera view angle and a target rendering space position of the first virtual object;
and a rendering module 24, configured to render the target mesh model according to the target camera spatial position and the target rendering spatial position, so as to obtain a target rendering image of the first virtual object.
In some embodiments proposed in the present application, as shown in fig. 11, the first position correction module 22 may include:
a second information obtaining unit 221, configured to obtain a first projection matrix of the field angle of the target camera, a second projection matrix of the field angle of the source camera, a camera transformation matrix between a camera coordinate system and a world coordinate system, and a source vertex space position corresponding to each vertex in the source mesh model;
a first offset correction unit 222, configured to perform offset correction on the spatial position of the source vertex by using the first projection matrix, the second projection matrix, and the camera transformation matrix, so as to obtain a position offset of a corresponding vertex under the world coordinate system;
the second offset correction unit 223 is configured to perform spatial offset correction on the corresponding vertex of the source mesh model according to the obtained multiple position offsets, so as to obtain a target mesh model of the first virtual object in the world coordinate system.
In this embodiment, the field angle change constraint condition may include that the screen-space size of the same virtual object is the same under different camera field angles; to ensure that the perspective-corrected virtual object meets this constraint condition, the second position correction module 23 may specifically include:
A third information obtaining unit 231, configured to obtain a source camera state parameter under the source camera field angle in the world coordinate system;
a fourth information obtaining unit 232, configured to obtain a spatial position of the target camera at the field angle of the target camera and a rendering correction parameter of the first virtual object according to the source camera state parameter; wherein the rendering correction parameters include a position correction parameter for the source rendering spatial position and/or a model scaling for the first virtual object in the world coordinate system;
a third offset correction unit 234, configured to correct the source rendering spatial position of the first virtual object by using the rendering correction parameter, so as to obtain a target rendering spatial position.
In a possible implementation manner, the source camera state parameter may include a source camera spatial position, a source camera pitch angle, and a first included angle of a source camera position vector, where the first included angle is an included angle between the source camera position vector and a first coordinate axis of a world coordinate system, the first coordinate axis may refer to a Y axis or an X axis of the world coordinate system, and the source camera spatial position includes a first coordinate of a source camera projected on the first coordinate axis and a second coordinate of the source camera projected on a Z axis of the world coordinate system.
Based on this, the fourth information acquiring unit 232 may include:
the camera source distance obtaining unit is used for obtaining a camera source distance between a source camera under the field angle of the source camera and the origin of a world coordinate system by using the first coordinate and the second coordinate;
the model scaling obtaining unit is used for obtaining the model scaling of the first virtual object by utilizing the field angle of the source camera and the field angle of the target camera;
the target camera space position obtaining unit is used for obtaining a target camera space position under the field angle of the target camera according to the camera source distance, the model scaling and the first included angle;
and the rendering correction parameter obtaining unit is used for obtaining the rendering correction parameter of the first virtual object according to the model scaling ratio, the first coordinate, the camera source pitch angle and the first included angle.
Further, in some embodiments, the above-mentioned target camera spatial position obtaining unit may include:
a camera distance offset obtaining unit, configured to obtain a camera distance offset by using the camera source distance and the model scaling;
a camera position offset obtaining unit, configured to obtain a camera position offset under the field angle of the target camera by using the first coordinate, the camera source pitch angle, the first included angle, the camera distance offset, and the camera source distance;
wherein the camera position offset is the position offset of the target camera relative to the source camera projected on the Z axis;
a third coordinate obtaining unit, configured to obtain a third coordinate under the field angle of the target camera by using the camera position offset, the first included angle, the camera distance offset, and the camera source distance;
wherein the third coordinate is a coordinate projected by the target camera on the Z axis;
a fourth coordinate obtaining unit, configured to obtain a fourth coordinate under the field angle of the target camera by using the first included angle, the camera distance offset, and the camera source distance;
wherein the fourth coordinate is a coordinate projected on the first coordinate axis by the target camera;
a target camera spatial position determination unit configured to determine the target camera spatial position at the target camera field angle from the third coordinate and the fourth coordinate;
accordingly, the rendering correction parameter obtaining unit may be specifically configured to form the rendering correction parameter of the first virtual object by the camera position offset and the model scaling.
It should be noted that, various modules, units, and the like in the embodiments of the foregoing apparatuses may be stored in the memory as program modules, and the processor executes the program modules stored in the memory to implement corresponding functions, and for the functions implemented by the program modules and their combinations and the achieved technical effects, reference may be made to the description of corresponding parts in the embodiments of the foregoing methods, which is not described in detail in this embodiment.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and the computer program is loaded and executed by a processor to implement the steps of the virtual object perspective processing method, where a specific implementation process may refer to descriptions of corresponding parts in the foregoing embodiment, and details are not described in this embodiment.
The present application also proposes a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the methods provided in the various optional implementations of the virtual object perspective processing method or apparatus described above.
Finally, it should be noted that, in the present specification, the embodiments are described in a progressive or parallel manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. The device and the computer equipment disclosed by the embodiment correspond to the method disclosed by the embodiment, so that the description is relatively simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both; to clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for perspective processing of a virtual object, the method comprising:
acquiring a target camera field angle of a first virtual application scene, and a source camera field angle and a source grid model of a first virtual object in the first virtual application scene;
correcting the space position of each vertex of the source mesh model by using the projection matrixes corresponding to the target camera field angle and the source camera field angle to obtain a target mesh model under a world coordinate system;
performing offset correction on the source camera space position under the field angle of the source camera and the source rendering space position of the first virtual object according to the field angle change constraint condition to obtain a target camera space position under the field angle of the target camera and a target rendering space position of the first virtual object;
rendering the target grid model according to the space position of the target camera and the space position of the target rendering to obtain a target rendering image of the first virtual object.
2. The method of claim 1, wherein obtaining the target camera field angle of the first virtual application scene and the source camera field angle and the source mesh model of the first virtual object in the first virtual application scene comprises:
acquiring a target camera field angle of a first virtual application scene, and a source camera field angle and a source grid model corresponding to each virtual object in the first virtual application scene;
comparing the source camera field angles corresponding to the virtual objects with the target camera field angles respectively;
and determining a source camera angle of view and a source grid model corresponding to a first virtual object according to the comparison result, wherein the first virtual object is a virtual object corresponding to the source camera angle of view which is inconsistent with the target camera angle of view.
3. The method according to claim 1 or 2, wherein the method for obtaining the source camera field angle and the source mesh model of the first virtual object comprises:
receiving a three-dimensional model data import request sent by an electronic device, wherein the three-dimensional model data import request is generated by the electronic device in response to a three-dimensional model data import operation on a first virtual object;
and analyzing the three-dimensional model data import request to obtain a source machine field angle and a source grid model of the first virtual object.
4. The method according to claim 1, wherein the performing spatial position offset correction on each vertex of the source mesh model by using the projection matrix corresponding to each of the target camera field angle and the source camera field angle to obtain the target mesh model in the world coordinate system comprises:
acquiring a first projection matrix of the field angle of the target camera, a second projection matrix of the field angle of the source camera, a camera transformation matrix between a camera coordinate system and a world coordinate system, and source vertex space positions corresponding to all vertexes in the source mesh model;
performing offset correction on the spatial position of the source vertex by using the first projection matrix, the second projection matrix and the camera transformation matrix to obtain the position offset of the corresponding vertex under the world coordinate system;
and according to the obtained plurality of position offsets, carrying out spatial position offset correction on the corresponding vertex of the source mesh model to obtain a target mesh model of the first virtual object under the world coordinate system.
5. The method of claim 1, wherein the field angle variation constraints include that screen space sizes of the same virtual object are the same at different camera field angles.
6. The method according to claim 5, wherein the performing offset correction on the source camera space position under the source camera view angle and the source rendering space position of the first virtual object according to the view field angle variation constraint condition to obtain the target camera space position under the target camera view angle and the target rendering space position of the first virtual object includes:
acquiring source machine state parameters under the field angle of the source machine in the world coordinate system;
according to the source camera state parameters, obtaining a target camera space position under the field angle of the target camera and rendering correction parameters of the first virtual object; wherein the rendering correction parameters include a position correction parameter for the source rendering spatial position and/or a model scaling for the first virtual object in the world coordinate system;
and correcting the source rendering space position of the first virtual object by using the rendering correction parameters to obtain a target rendering space position.
7. The method of claim 6, wherein the source camera state parameters comprise a source camera spatial position, a source camera pitch angle, and a first included angle of a source camera position vector, wherein the first included angle refers to an angle between the source camera position vector and a first coordinate axis of the world coordinate system, the first coordinate axis comprising a Y axis or an X axis, and the source camera spatial position comprises a first coordinate of the source camera projected on the first coordinate axis and a second coordinate of the source camera projected on a second coordinate axis of the world coordinate system, the second coordinate axis comprising a Z axis;
the obtaining a spatial position of the target camera under the field angle of the target camera and a rendering correction parameter of the first virtual object according to the state parameter of the source camera includes:
obtaining a camera source distance between a source camera under the field angle of the source camera and the origin of a world coordinate system by using the first coordinate and the second coordinate;
obtaining a model scaling of the first virtual object by using the source camera field angle and the target camera field angle;
obtaining a space position of the target camera under the field angle of the target camera according to the camera source distance, the model scaling and the first included angle;
and obtaining a rendering correction parameter of the first virtual object according to the model scaling ratio, the first coordinate, the camera source pitch angle and the first included angle.
8. The method of claim 7, wherein obtaining the target camera spatial position at the target camera view angle from the camera source distance, the model scaling and the first angle comprises:
obtaining a camera distance offset by using the camera source distance and the model scaling;
obtaining a camera position offset under the field angle of the target camera by using the first coordinate, the camera source pitch angle, the first included angle, the camera distance offset and the camera source distance, wherein the camera position offset refers to a position offset of a target camera projected on the Z axis relative to the source camera;
obtaining a third coordinate under the field angle of the target camera by using the camera position offset, the first included angle, the camera distance offset and the camera source distance, wherein the third coordinate is a coordinate projected on the Z axis by the target camera;
obtaining a fourth coordinate under the field angle of the target camera by using the first included angle, the camera distance offset and the camera source distance, wherein the fourth coordinate is a coordinate projected on the first coordinate axis by the target camera;
determining the space position of the target camera under the field angle of the target camera according to the third coordinate and the fourth coordinate;
the obtaining a rendering correction parameter of the first virtual object according to the model scaling ratio, the first coordinate, the camera source pitch angle and the first included angle includes:
and forming rendering correction parameters of the first virtual object by the camera position offset and the model scaling.
9. A virtual object perspective processing apparatus, the apparatus comprising:
a first data acquisition module, configured to acquire a target camera field angle of a first virtual application scene, and a source camera field angle and a source grid model of a first virtual object in the first virtual application scene;
the first position correction module is used for correcting the spatial position of each vertex of the source mesh model by using the projection matrixes corresponding to the target camera field angle and the source camera field angle to obtain a target mesh model in a world coordinate system;
a second position correction module, configured to perform offset correction on the source camera spatial position at the source camera view angle and the source rendering spatial position of the first virtual object according to a view angle change constraint condition, to obtain a target camera spatial position at the target camera view angle and a target rendering spatial position of the first virtual object;
and the rendering module is used for rendering the target grid model according to the target camera space position and the target rendering space position to obtain a target rendering image of the first virtual object.
10. A computer device, characterized in that the computer device comprises:
a communication module;
a memory for storing a program of the virtual object perspective processing method according to claim 1;
a processor for loading and executing the program stored in the memory to implement the steps of the virtual object perspective processing method as claimed in claim 1.