CN113476835B - Picture display method and device - Google Patents
- Publication number
- CN113476835B (application number CN202011137849.5A)
- Authority
- CN
- China
- Prior art keywords
- coordinate system
- user
- dimensional
- pixel
- world coordinate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 29
- 230000002452 interceptive effect Effects 0.000 claims abstract description 57
- 230000000007 visual effect Effects 0.000 claims abstract description 7
- 238000003384 imaging method Methods 0.000 claims description 25
- 238000006243 chemical reaction Methods 0.000 claims description 23
- 238000012545 processing Methods 0.000 claims description 18
- 230000003993 interaction Effects 0.000 claims description 15
- 230000006870 function Effects 0.000 claims description 9
- 238000010276 construction Methods 0.000 claims description 2
- 230000000694 effects Effects 0.000 abstract description 10
- 238000004891 communication Methods 0.000 abstract description 5
- 238000010586 diagram Methods 0.000 description 23
- 238000004590 computer program Methods 0.000 description 7
- 238000013507 mapping Methods 0.000 description 4
- 239000011159 matrix material Substances 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 230000004075 alteration Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2250/00—Miscellaneous game characteristics
- A63F2250/30—Miscellaneous game characteristics with a three-dimensional image
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/66—Methods for processing data by generating or executing the game program for rendering three dimensional images
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a picture display method and device. The method includes: constructing a camera coordinate system and a world coordinate system of an interactive application, where the interactive application is used by a plurality of users to interact in a virtual scene and the camera coordinate system is a three-dimensional coordinate system determined by the viewing angle of a first user in the interactive application; determining the three-dimensional coordinates, under the world coordinate system, of at least one object in a picture to be displayed by the interactive application; and converting, through the camera coordinate system, the three-dimensional coordinates of the at least one object under the world coordinate system into pixel coordinates under a pixel coordinate system, where the pixel coordinates are used to display the at least one object. By giving the picture displayed under the two-dimensional pixel coordinate system a three-dimensional stereoscopic visual effect, the method improves the user's sense of real presence in the game's virtual scene, enhances the virtual-reality effect of communication between users, and improves the user's game experience.
Description
Technical Field
The present invention relates to the field of image display, and in particular, to a method and apparatus for displaying a picture.
Background
Currently, online games (for example, chess and card games such as Fight the Landlord and Three Kingdoms) generally use cartoon character avatars instead of the images of real users, which makes the overall game interface unrealistic and deprives users of the pleasure of playing face to face. In the prior art, when an online game scene is combined with video calling, the user's real-time video picture replaces the cartoon character, offering the double pleasure of video chat while playing. However, this approach simply swaps the cartoon avatar for a video picture and still lacks a real sense of presence. In particular, the spatial relationship between the game scene and the users' video pictures is disordered, so the display looks unreal and the user's game experience suffers.
Therefore, a picture display method that enhances the sense of reality is needed, one that gives the user a realistic, immersive game experience.
Disclosure of Invention
Embodiments of the invention provide a picture display method and device, which are used to improve the user's sense of real presence in a game virtual scene, enhance the virtual-reality effect of communication between users, and improve the user's game experience.
In a first aspect, an embodiment of the present invention provides a method for displaying a picture, including: constructing a camera coordinate system and a world coordinate system of the interactive application; the interaction application is used for a plurality of users to interact in the virtual scene; the camera coordinate system is a three-dimensional coordinate system determined by the view angle of a first user in the interactive application;
Determining three-dimensional coordinates of at least one object in a picture to be displayed of the interactive application under the world coordinate system;
Converting three-dimensional coordinates of the at least one object in the world coordinate system into pixel coordinates in a pixel coordinate system through the camera coordinate system; the pixel coordinates are used to display the at least one object.
In the above technical solution, the camera coordinate system is determined according to the first user's viewing angle; the three-dimensional coordinates of the picture to be displayed under the world coordinate system are then converted, through the camera coordinate system, into pixel coordinates under the pixel coordinate system; and the display picture of the at least one object is determined according to those pixel coordinates and sent to the display screen. As a result, the display picture under the two-dimensional pixel coordinate system has a three-dimensional stereoscopic visual effect, that is, a virtual-reality effect. This improves the user's sense of real presence in the game's virtual scene, the virtual-reality effect of communication between users, and the user's game experience.
Optionally, converting, by the camera coordinate system, three-dimensional coordinates of the at least one object in the world coordinate system to pixel coordinates in a pixel coordinate system, including:
Determining a conversion relationship between the camera coordinate system and the world coordinate system; the conversion relation is used for determining three-dimensional coordinates of the at least one object under the camera coordinate system;
Determining imaging parameters in the camera coordinate system; the imaging parameters are used for converting the three-dimensional coordinates of the at least one object in the camera coordinate system into two-dimensional pixel coordinates in the pixel coordinate system;
And determining pixel coordinates of the at least one object according to the conversion relation and the imaging parameters.
In the above technical solution, the relationship between the world coordinate system and the pixel coordinate system is determined via the camera coordinate system, so the pixel coordinates of any point given in world coordinates can be obtained. Three-dimensional coordinates are thereby converted into two-dimensional coordinates that retain a three-dimensional stereoscopic visual effect, which improves the user's sense of real presence in the game's virtual scene, the virtual-reality effect of communication between users, and the user's game experience.
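As a concrete illustration of the two-step conversion (conversion relation, then imaging parameters), the following minimal pinhole-camera sketch projects a world-coordinate point to pixel coordinates. The function name and all numeric parameters are illustrative assumptions, not taken from the patent:

```python
import math

def world_to_pixel(pw, h, theta, f, u0, v0):
    """Project a world-coordinate point to pixel coordinates.

    pw     -- (xw, yw, zw), a point in the world coordinate system
    h      -- distance between camera and world origins along the y-axis
    theta  -- pitch angle of the first user's viewing angle (radians)
    f      -- imaging parameter (assumed to be a focal length in pixels)
    u0, v0 -- pixel coordinates of the optical center
    """
    xw, yw, zw = pw
    # Step 1: conversion relation, world coordinates -> camera coordinates
    # (rotate by the pitch angle, then offset by h along the y-axis).
    xc = xw
    yc = math.cos(theta) * yw - math.sin(theta) * zw + h
    zc = math.sin(theta) * yw + math.cos(theta) * zw
    # Step 2: imaging parameters, camera coordinates -> pixel coordinates
    # (perspective projection onto the image plane).
    u = f * xc / zc + u0
    v = f * yc / zc + v0
    return u, v
```

For instance, with h = 1.0, theta = 0, f = 800 and optical center (640, 360), the point (0, 0, 0.5) projects to pixel (640.0, 1960.0).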
Optionally, the at least one object is a scene object in the virtual scene;
determining three-dimensional coordinates of at least one object in a picture to be displayed of the interactive application in the world coordinate system comprises:
And constructing a geometric function of the scene object under the world coordinate system according to the geometric figure of the scene object, so as to determine the three-dimensional coordinate of the scene object under the world coordinate system.
According to the technical scheme, after the three-dimensional coordinates of the scene object in the world coordinate system are determined, the coordinates of the scene object in the virtual scene in the pixel coordinate system are obtained according to the scheme, so that the real presence of a user in the game virtual scene is improved.
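For a circular table top, the geometric function of the scene object and the world coordinates of points on its rim might be built as sketched below. The sampling scheme and function name are assumptions; the 0.5 m radius echoes the example given later in the description:

```python
import math

def table_rim_world_coords(radius, center_z, n_points):
    """World coordinates of n_points samples on the rim of a circular
    table top lying in the y = 0 plane, centered at (0, 0, center_z).
    The rim satisfies the geometric function x^2 + (z - center_z)^2 = radius^2."""
    points = []
    for k in range(n_points):
        a = 2.0 * math.pi * k / n_points
        points.append((radius * math.cos(a), 0.0, center_z + radius * math.sin(a)))
    return points

# Table of radius 0.5 m whose near edge touches the world origin.
rim = table_rim_world_coords(0.5, 0.5, 8)
```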
Optionally, the at least one object is a user window in the virtual scene;
determining three-dimensional coordinates of at least one object in a picture to be displayed of the interactive application in the world coordinate system comprises:
Determining a user window in the virtual scene;
and determining the position of the user window, thereby obtaining the three-dimensional coordinates of the user window under the world coordinate system.
Optionally, the at least one object is a second user within the virtual scene; the second user is any user except the first user in a plurality of users;
determining three-dimensional coordinates of at least one object in a picture to be displayed of the interactive application in the world coordinate system comprises:
Acquiring an interaction picture of the second user from the video picture of the second user;
Projecting the interaction picture of the second user into a user window of the second user;
And determining the three-dimensional coordinates of the user window of the second user in the world coordinate system, so as to obtain the three-dimensional coordinates of the second user in the world coordinate system.
According to the technical scheme, after the three-dimensional coordinates of the interactive picture and the user window in the world coordinate system are determined, the coordinates of the interactive picture and the user window in the pixel coordinate system are obtained according to the scheme, so that the virtual reality effect of communication between users is improved, and the game experience of the users is improved.
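The step of projecting the second user's interaction picture into a user window can be sketched as a simple resampling; nearest-neighbour scaling here is a stand-in, since the patent does not specify how the picture is fitted to the window:

```python
def fit_picture_to_window(frame, win_w, win_h):
    """Resample an interaction picture (a 2-D list of pixel values, e.g.
    cropped from the second user's video picture) so that it fills a
    win_w x win_h user window, using nearest-neighbour sampling."""
    src_h, src_w = len(frame), len(frame[0])
    return [
        [frame[r * src_h // win_h][c * src_w // win_w] for c in range(win_w)]
        for r in range(win_h)
    ]
```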
Optionally, the three-dimensional origin of the world coordinate system is obtained by projecting the three-dimensional origin of the camera coordinate system onto the ground along the vertical direction.
Presetting the origin of the world coordinate system as the vertical projection of the origin of the camera coordinate system onto the ground reduces the complexity of the conversion relationship between the two coordinate systems and thus the amount of calculation.
Optionally, constructing a camera coordinate system of the interactive application includes:
Acquiring, according to a view-switching instruction of the first user, the camera coordinate system corresponding to that instruction, where camera coordinate systems for a plurality of viewing angles are preset in the interactive application; or creating the camera coordinate system according to a view-creation instruction of the first user; or creating the camera coordinate system according to a camera coordinate system preset in the interactive application.
The above technical solution provides users with different display effects to choose from, satisfying different user preferences and improving the game experience.
In a second aspect, an embodiment of the present invention provides an apparatus for displaying a picture, including:
the construction module is used for constructing a camera coordinate system and a world coordinate system of the interactive application; the interaction application is used for a plurality of users to interact in the virtual scene; the camera coordinate system is a three-dimensional coordinate system determined by the view angle of a first user in the interactive application;
The processing module is used for determining three-dimensional coordinates of at least one object in a picture to be displayed of the interactive application under the world coordinate system;
Converting three-dimensional coordinates of the at least one object in the world coordinate system into pixel coordinates in a pixel coordinate system through the camera coordinate system; the pixel coordinates are used to display the at least one object.
Optionally, the processing module is specifically configured to:
Determining a conversion relationship between the camera coordinate system and the world coordinate system; the conversion relation is used for determining three-dimensional coordinates of the at least one object under the camera coordinate system;
Determining imaging parameters in the camera coordinate system; the imaging parameters are used for converting the three-dimensional coordinates of the at least one object in the camera coordinate system into two-dimensional pixel coordinates in the pixel coordinate system;
And determining pixel coordinates of the at least one object according to the conversion relation and the imaging parameters.
Optionally, the at least one object is a scene object in the virtual scene;
the processing module is specifically configured to:
And constructing a geometric function of the scene object under the world coordinate system according to the geometric figure of the scene object, so as to determine the three-dimensional coordinate of the scene object under the world coordinate system.
Optionally, the at least one object is a user window in the virtual scene;
the processing module is specifically configured to:
Determining a user window in the virtual scene;
and determining the position of the user window, thereby obtaining the three-dimensional coordinates of the user window under the world coordinate system.
Optionally, the at least one object is a second user within the virtual scene; the second user is any user except the first user in a plurality of users;
the processing module is specifically configured to:
Acquiring an interaction picture of the second user from the video picture of the second user;
Projecting the interaction picture of the second user into a user window of the second user;
And determining the three-dimensional coordinates of the user window of the second user in the world coordinate system, so as to obtain the three-dimensional coordinates of the second user in the world coordinate system.
Optionally, the three-dimensional origin of the world coordinate system is obtained by projecting the three-dimensional origin of the camera coordinate system onto the ground along the vertical direction.
Optionally, the processing module is specifically configured to:
Acquiring, according to a view-switching instruction of the first user, the camera coordinate system corresponding to that instruction, where camera coordinate systems for a plurality of viewing angles are preset in the interactive application; or creating the camera coordinate system according to a view-creation instruction of the first user; or creating the camera coordinate system according to a camera coordinate system preset in the interactive application.
In a third aspect, embodiments of the present invention also provide a computing device, comprising:
a memory for storing program instructions;
and the processor is used for calling the program instructions stored in the memory and executing the picture display method according to the obtained program.
In a fourth aspect, embodiments of the present invention further provide a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the above-described method of displaying a screen.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of a chess and card game provided by an embodiment of the present invention;
FIG. 2 is a system architecture according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for displaying a frame according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a camera coordinate system according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a world coordinate system according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a camera coordinate system according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of a scene object in a virtual scene according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a scene object in a virtual scene according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a user window according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a user window according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a display screen according to an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of a device for displaying images according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the prior art, a chess and card game played alongside video chat lacks a sense of real presence; in particular, the spatial relationship between the table top and the users in the virtual scene is disordered, so the display looks unreal. Fig. 1 exemplarily shows a scene diagram of such a chess and card game. As shown in fig. 1, the coordinate systems of the characters and the desktop in the picture are inconsistent: the characters and the desktop are drawn under different coordinate systems at the display viewing angle, and the coordinate system at the display viewing angle is inconsistent with the coordinate system at the real viewing angle, so the display looks unreal. Specifically, the desktop in the virtual scene lacks a spatial mapping to what the human eye perceives in real space, that is, a mapping between the geometric shape of the desktop in real space and its geometric shape in the displayed two-dimensional picture, so the virtual card table looks unreal; likewise, the characters lack a mapping between their spatial positions and their positions in the displayed two-dimensional picture, so the characters look unreal.
Therefore, embodiments of the invention disclose a picture display method and device that improve realism, give the user a lifelike game experience, and improve the user's sense of immersion.
Fig. 2 exemplarily shows a system architecture to which an embodiment of the present invention is applied, the system architecture including a server 201, an interactive terminal 1, an interactive terminal 2, and a display screen 202.
The interactive terminal 1 and the interactive terminal 2 are used to collect video pictures (i.e. video information) of the interacting users through cameras and then upload the video pictures to the server 201. Optionally, there may be any number of interactive terminals, which is not limited here. An interactive terminal may be a smart device such as a mobile phone, a tablet computer or a notebook computer, and the camera may be external to the terminal, built into the terminal, a stand-alone video camera, and so on.
The server 201 is configured to construct the camera coordinate system and the world coordinate system, and to determine the conversion relationship between them as well as the imaging parameters under the camera coordinate system. Using the conversion relationship and the imaging parameters, the server converts the world-coordinate positions of the interaction pictures in the video pictures and of the user windows in the virtual scene into pixel coordinates under the pixel coordinate system. It then determines the display picture (for example, a picture containing the interacting users' video pictures and the virtual game scene) from those pixel coordinates and sends the display picture to the display screen 202.
The display 202 may be a liquid crystal display, an organic light emitting display, a projection device, or the like, for displaying a display screen.
It should be noted that the structure shown in fig. 2 is merely an example, and the embodiment of the present invention is not limited thereto.
Based on the above description, fig. 3 exemplarily illustrates the flow of a picture display method according to an embodiment of the present invention, which may be performed by a picture display device.
As shown in fig. 3, the process specifically includes:
in step 301, a camera coordinate system and a world coordinate system of the interactive application are constructed.
In the embodiment of the invention, the interactive application is used by a plurality of users to interact in a virtual scene, and the camera coordinate system is a three-dimensional coordinate system determined by the viewing angle of the first user in the interactive application. Fig. 4 schematically illustrates the camera coordinate system, where fig. 4a is a top view and fig. 4b is a side view: taking the tangent point between the first user and the real game table as the origin, the x-axis is parallel to the table surface, the z-axis points forward toward the table (i.e. along the line of sight), and the y-axis is perpendicular to the xOz plane. It should be noted that the camera coordinate system and the world coordinate system may be preset to reduce the amount of calculation.
Fig. 5 schematically illustrates the world coordinate system, where fig. 5a is a top view and fig. 5b is a side view: taking the tangent point between the first user and the real game table surface as the origin, the x_w axis is parallel to the table surface, the z_w axis points forward toward the table (i.e. along the line of sight), and the y_w axis is perpendicular to the x_wOz_w plane.
Further, according to a view-switching instruction of the first user, the camera coordinate system corresponding to that instruction is acquired (camera coordinate systems for a plurality of viewing angles are preset in the interactive application); or the camera coordinate system is created according to a view-creation instruction of the first user; or the camera coordinate system is created according to a camera coordinate system preset in the interactive application. Fig. 6 schematically illustrates such a camera coordinate system, where fig. 6a is a top view and fig. 6b is a side view: taking the center of the real game table as the origin, the x-axis is parallel to the table surface, the z-axis points forward toward the table (i.e. along the viewing direction), and the y-axis is perpendicular to the xOz plane. It should be noted that the first user may create a camera coordinate system from information such as the height, pitch angle and position of the viewing angle defined in the creation instruction, so as to switch the viewing angle of the picture.
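The three ways of obtaining a camera coordinate system described above can be sketched as a small dispatch over preset and user-created viewing angles. The preset names and all numeric values below are hypothetical, not taken from the patent:

```python
# Hypothetical preset viewing angles; each fixes a camera coordinate
# system via a height, a pitch angle and a horizontal position.
PRESET_VIEWS = {
    "seated":   {"height": 1.2, "pitch_deg": 30.0, "position": (0.0, 0.0)},
    "overhead": {"height": 2.0, "pitch_deg": 75.0, "position": (0.0, 0.3)},
}

def camera_for_instruction(instruction, custom=None):
    """Resolve a switch-view or create-view instruction to camera parameters.

    instruction -- a preset name (switch-view) or "create" (create-view)
    custom      -- height/pitch/position settings supplied with a
                   create-view instruction by the first user
    """
    if instruction == "create" and custom is not None:
        return dict(custom)
    # Unknown instructions fall back to a default preset camera.
    return PRESET_VIEWS.get(instruction, PRESET_VIEWS["seated"])
```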
Step 302, determining three-dimensional coordinates of at least one object in a picture to be displayed of the interactive application in the world coordinate system.
In the embodiment of the present invention, the coordinates of each object in the picture to be displayed are determined under the constructed world coordinate system. For example, if the radius of the real game table is set to 0.5 m, then, as can be seen from fig. 5, the world coordinates of the center of the real game table are (0, 0, 0.5). It should be noted that the at least one object may include a scene object in the virtual scene and/or a user window in the virtual scene, as well as a plurality of interacting users (such as a second user and a third user) in the virtual scene.
Step 303, converting three-dimensional coordinates of the at least one object in the world coordinate system into pixel coordinates in a pixel coordinate system through the camera coordinate system.
In the embodiment of the invention, according to the relation between the world coordinate system and the pixel coordinate system, the three-dimensional coordinate of at least one object under the world coordinate system is converted into the pixel coordinate under the pixel coordinate system.
Further, the conversion relationship between the camera coordinate system and the world coordinate system is determined; the conversion relationship is used to determine the three-dimensional coordinates of the at least one object under the camera coordinate system. The imaging parameters under the camera coordinate system are determined; the imaging parameters are used to convert the three-dimensional coordinates of the at least one object under the camera coordinate system into two-dimensional pixel coordinates under the pixel coordinate system. The pixel coordinates of the at least one object are then determined according to the conversion relationship and the imaging parameters.
In the embodiment of the invention, the world coordinate system can be obtained from the camera coordinate system through rotation and translation, that is, a conversion relationship exists between the world coordinate system and the camera coordinate system. The pixel coordinate system can in turn be obtained from the camera coordinate system according to the imaging parameters, so coordinates under the pixel coordinate system can be expressed in terms of coordinates under the world coordinate system via the imaging parameters and the conversion relationship.
The three-dimensional origin of the world coordinate system may be obtained by projecting the three-dimensional origin of the camera coordinate system onto the ground along the vertical direction, and the x_w axis of the world coordinate system may likewise be obtained by projecting the x-axis of the camera coordinate system along the vertical direction, which reduces the complexity of the relationship between the world coordinate system and the pixel coordinate system. This relationship is described in the example below with reference to fig. 4 and 5.
Example 1
As shown in fig. 4 and 5, the world coordinate system in fig. 5 can be obtained from the camera coordinate system in fig. 4 by rotating it through the first user's pitch angle until it is parallel to the real game table and translating it downward along the y-axis. The relationship between a three-dimensional point in the camera coordinate system and the same point in the world coordinate system can therefore be expressed as formula (1):

(x_c, y_c, z_c)^T = R · (x_w, y_w, z_w)^T + T (1)
Wherein R is a rotation matrix, and T is an offset vector.
Since, by the preset positions of the camera coordinate system and the world coordinate system (the pitch angle of the first user and the distance along the y-axis in fig. 4), the origin of the world coordinate system is the vertical projection of the origin of the camera coordinate system onto the ground and the x_w axis is the vertical projection of the x-axis of the camera coordinate system, the R and T matrices in formula (1) can be derived as:

R = [[1, 0, 0], [0, cos θ, -sin θ], [0, sin θ, cos θ]], T = (0, h, 0)^T
Where h is the distance between the camera coordinate system and the world coordinate system in the y-axis and θ is the pitch angle of the first user.
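The rotation-plus-offset relationship described above can be sketched in code. This is an illustrative reconstruction, not the patent's own implementation: the rotation axis (x) and the sign convention are assumptions, while the symbols θ (pitch angle) and h (y-axis offset) follow the text.

```python
import math

def camera_from_world(p_w, theta, h):
    """Map a world-coordinate point to camera coordinates (formula (1): p_c = R . p_w + T)."""
    x_w, y_w, z_w = p_w
    x_c = x_w                                                # rotation about x leaves x unchanged
    y_c = y_w * math.cos(theta) - z_w * math.sin(theta) + h  # offset T = (0, h, 0) along y
    z_c = y_w * math.sin(theta) + z_w * math.cos(theta)
    return (x_c, y_c, z_c)

# With theta = 0 the rotation is the identity, so only the y-offset h remains.
p_c = camera_from_world((1.0, 2.0, 3.0), theta=0.0, h=0.5)
```

With a non-zero pitch angle the same function mixes the y and z components, which is what couples the world coordinates into the later perspective division.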
In the embodiment of the invention, the relationship between the camera coordinate system and the pixel coordinate system can be obtained from the imaging parameters. Specifically, a three-dimensional coordinate point in the camera coordinate system is perspective-projected into the pixel coordinate system according to the imaging parameters, which gives:

u = f · x_c / z_c + u_o, v = f · y_c / z_c + v_o

Wherein f is an imaging parameter.
Combining the above expressions, the two-dimensional pixel coordinates obtained by perspective-projecting a three-dimensional coordinate point in the world coordinate system into the pixel coordinate system are given by the following formula (2):

u = f · x_w / (y_w · sinθ + z_w · cosθ) + u_o
v = f · (y_w · cosθ − z_w · sinθ + h) / (y_w · sinθ + z_w · cosθ) + v_o (2)

Where u_o and v_o are the pixel coordinates of the optical center on the pixel coordinate system.
Illustratively, when the pitch angle of the first user is θ = 0, sinθ = 0 and cosθ = 1, so formula (2) may be simplified to the following formula (3):

u = f · x_w / z_w + u_o, v = f · (y_w + h) / z_w + v_o (3)
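The simplification at θ = 0 can be checked numerically. The sketch below implements the full projection chain in the role of formula (2) and the θ = 0 special case in the role of formula (3); the sign conventions and all numeric values (f, h, u_o, v_o and the sample point) are illustrative assumptions.

```python
import math

def project(p_w, theta, h, f, u_o, v_o):
    """Project a world-coordinate point to pixel coordinates (role of formula (2))."""
    x_w, y_w, z_w = p_w
    # camera coordinates: rotate by theta about x, offset by h along y
    x_c = x_w
    y_c = y_w * math.cos(theta) - z_w * math.sin(theta) + h
    z_c = y_w * math.sin(theta) + z_w * math.cos(theta)
    # perspective projection with imaging parameter f and optical centre (u_o, v_o)
    return f * x_c / z_c + u_o, f * y_c / z_c + v_o

def project_theta0(p_w, h, f, u_o, v_o):
    """The theta = 0 special case (role of formula (3)): sin = 0, cos = 1."""
    x_w, y_w, z_w = p_w
    return f * x_w / z_w + u_o, f * (y_w + h) / z_w + v_o

p = (0.4, -0.2, 2.0)
u1, v1 = project(p, 0.0, 0.5, 800.0, 640.0, 360.0)
u2, v2 = project_theta0(p, 0.5, 800.0, 640.0, 360.0)
```

At θ = 0 both routes give the same pixel coordinates, confirming that the special case is just the general formula with the trigonometric terms dropped.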
For example, when at least one object is a scene object in a virtual scene, a geometric function of the scene object in a world coordinate system is constructed according to the geometric figure of the scene object, so as to determine the three-dimensional coordinates of the scene object in the world coordinate system.
In the embodiment of the invention, after the three-dimensional coordinates of a scene object in the virtual scene in the world coordinate system are determined, they are converted into pixel coordinates in the pixel coordinate system according to the above formulas. This is described below with a specific example in connection with the real game table of fig. 5 described above.
Example 2
Referring to fig. 4 to 6, the scene object in the virtual scene is the real game table surface, and the three-dimensional coordinates of the real game table surface in the world coordinate system can be obtained as follows:
Where r is the radius of the real game table. According to the formula (2), the two-dimensional pixel coordinates of the real game table surface under the pixel coordinate system can be obtained as follows:
and they satisfy the following:
FIG. 7 schematically shows a scene object in a virtual scene. As shown in fig. 7, when the pitch angle θ of the first user is greater than 0, the two-dimensional pixel coordinates of the real game table in the pixel coordinate system form a parabola whose axis of symmetry is u = u_o and whose vertex lies on that axis.
Based on the above description of fig. 7, when the pitch angle θ=0 of the first user, then according to the three-dimensional coordinates of the real game table in the world coordinate system and the above formula (3), the two-dimensional pixel coordinates of the real game table in the pixel coordinate system may be obtained as follows:
and they satisfy the following:
Fig. 8 schematically shows a scene object in a virtual scene. As shown in fig. 8, when the pitch angle of the first user is θ = 0, the two-dimensional pixel coordinates of the real game table in the pixel coordinate system likewise form a parabola whose axis of symmetry is u = u_o and whose vertex lies on that axis.
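The symmetry about the u = u_o axis described for fig. 8 can be illustrated by sampling mirror-image points on the circular table edge and projecting them with the θ = 0 formula. The table placement and all numeric values below are illustrative assumptions.

```python
import math

def project_theta0(x_w, y_w, z_w, h, f, u_o, v_o):
    """Perspective projection when the first user's pitch angle is 0 (role of formula (3))."""
    return f * x_w / z_w + u_o, f * (y_w + h) / z_w + v_o

r, h, f, u_o, v_o = 1.0, 0.5, 800.0, 640.0, 360.0
pairs = []
for phi in (0.3, 0.7, 1.1):
    # point on the edge of a table of radius r, centred (by assumption) at z = 2 on the y = 0 plane
    x_w, z_w = r * math.sin(phi), 2.0 + r * math.cos(phi)
    pairs.append((project_theta0(x_w, 0.0, z_w, h, f, u_o, v_o),
                  project_theta0(-x_w, 0.0, z_w, h, f, u_o, v_o)))
```

Each mirrored pair (x_w, −x_w) shares its depth z_w, so the two projections have the same v coordinate and u coordinates placed symmetrically about u_o, matching the described symmetry of the projected table edge.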
For example, when the at least one object is a user window in the virtual scene, the user window in the virtual scene is first determined, and then the three-dimensional coordinates of the user window in the world coordinate system are determined.
In the embodiment of the invention, after the three-dimensional coordinates of the vertex of the user window under the world coordinate system are determined, the three-dimensional coordinates of the vertex under the world coordinate system are perspective projected into the pixel coordinate system, so that the two-dimensional pixel coordinates of the vertex under the pixel coordinate system are obtained. Specifically, the description will be given below in specific examples.
Example 3
FIG. 9 is an exemplary schematic diagram of a user window. As shown in part (a) of fig. 9, the three-dimensional coordinates of the vertices ABCD of the user window of the third user in the world coordinate system are determined according to the radius r of the real game table and the included angle σ:
According to the above formula (2), two-dimensional pixel coordinates of the vertex ABCD of the user window in the pixel coordinate system are obtained as follows:
As can be seen from the two-dimensional pixel coordinates of the vertices: the u coordinate of A' (the projection of point A) is smaller than the u coordinate of C'; the u coordinate of B' is larger than the u coordinate of D'; the v coordinate of A' is larger than the v coordinate of B'; and the v coordinate of C' is smaller than the v coordinate of D'. The projection A'B'C'D' of the rectangular window ABCD from the world coordinate system into the pixel coordinate system is therefore as shown in part (b) of fig. 9. Similarly, for the window LMNS of the second user: the u coordinate of L' is larger than the u coordinate of N'; the u coordinate of M' is smaller than the u coordinate of S'; the v coordinate of L' is larger than the v coordinate of M'; and the v coordinate of N' is smaller than the v coordinate of S'.
Based on the above description of fig. 9, when the pitch angle of the first user is θ = 0, the two-dimensional pixel coordinates of the vertices ABCD of the user window of the third user in the pixel coordinate system are determined according to the radius r of the real game table and the included angle σ as:
Since the u coordinate of the two-dimensional pixel coordinate A' of point A is the same as the u coordinate of C', and the u coordinate of B' is the same as the u coordinate of D', A'C' is parallel to B'D'. The lengths of A'C' and B'D', and of A'B' and C'D', are respectively:

That is, A'C' ∥ B'D'. FIG. 10 illustrates a schematic diagram of a user window; as can be seen from fig. 10, the position A'B'C'D' in the pixel coordinate system of the vertices ABCD of the third user's window in the world coordinate system is a quadrilateral with those two edges parallel. Similarly, the position L'M'N'S' in the pixel coordinate system of the vertices LMNS of the second user's window satisfies M'S' ∥ L'N'.
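The parallelism at θ = 0 can be illustrated with a small sketch: a vertical rectangular window whose edges AC and BD each share their x_w and z_w coordinates projects so that each such edge collapses to a single u value, keeping A'C' parallel to B'D'. The window geometry and all numeric values are illustrative assumptions, not taken from fig. 9 or 10.

```python
def project_theta0(x_w, y_w, z_w, h=0.5, f=800.0, u_o=640.0, v_o=360.0):
    """Perspective projection when the first user's pitch angle is 0 (role of formula (3))."""
    return f * x_w / z_w + u_o, f * (y_w + h) / z_w + v_o

# A above C on one vertical edge, B above D on the other; each edge shares x_w and z_w
A, C = (-0.3, 0.4, 2.0), (-0.3, 0.0, 2.0)
B, D = ( 0.3, 0.4, 2.5), ( 0.3, 0.0, 2.5)
A2, B2, C2, D2 = (project_theta0(*p) for p in (A, B, C, D))
```

Because u depends only on x_w / z_w at θ = 0, A' and C' get identical u coordinates, as do B' and D', so the projected window is a quadrilateral with A'C' ∥ B'D'.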
For example, when the at least one object is a second user in the virtual scene (the second user being any user among the plurality of users other than the first user), an interactive picture of the second user is acquired from the video picture of the second user, the interactive picture is projected into the user window of the second user, and the three-dimensional coordinates of the user window of the second user in the world coordinate system are determined, thereby obtaining the three-dimensional coordinates of the second user in the world coordinate system.
And step 304, determining the display picture of the at least one object according to the pixel coordinates of the at least one object.
In the embodiment of the invention, the pixel coordinates are used for displaying the at least one object. Specifically, the display picture is determined according to the pixel coordinates of the at least one object and the resolution of the display screen, and the determined display picture is then sent to the display screen for display. Fig. 11 is a schematic diagram of a display screen. As shown in fig. 11, a display image of the virtual desktop in the image coordinate system is obtained according to information such as the viewing angle selected by the first user; the second user's camera (for example a television camera or a mobile phone camera, which may be built in or external) then collects the user video, and the acquired video information of the second user is displayed in the corresponding user window of the second user in the world coordinate system.
For example, in order to improve the display effect, the second user in the user window in the world coordinate system is shown at the same scale as a real person, or at the same scale as a scene object (such as the virtual desktop) in the virtual scene. The user window in the world coordinate system is then projected, together with the scene object (such as the virtual game desktop), into the pixel coordinate system for display, according to information such as the size of the scene object in the virtual scene (for example the virtual desktop size), the deflection angle, the viewing-angle height and the pitch angle of the first user.
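Determining the display picture from the pixel coordinates and the screen resolution, as described above, amounts to rescaling the computed (u, v) coordinates to the target display. The reference resolution and the uniform-scaling policy below are assumptions for illustration.

```python
def to_display(points, src_res, dst_res):
    """Scale (u, v) pixel coordinates from the render resolution to the display screen."""
    sx = dst_res[0] / src_res[0]  # horizontal scale factor
    sy = dst_res[1] / src_res[1]  # vertical scale factor
    return [(u * sx, v * sy) for u, v in points]

# e.g. a point computed at 1280x720 shown on a 1920x1080 screen
scaled = to_display([(640.0, 360.0)], src_res=(1280, 720), dst_res=(1920, 1080))
```

A real implementation would also clip coordinates to the screen bounds and preserve aspect ratio when the two resolutions differ in shape; this sketch only shows the coordinate rescaling step.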
Based on the same technical concept, fig. 12 is a schematic structural diagram schematically illustrating a device for displaying a picture according to an embodiment of the present invention, where the device may perform a flow of a method for displaying a picture.
As shown in fig. 12, the apparatus specifically includes:
A building module 1201, configured to build a camera coordinate system and a world coordinate system of the interactive application; the interaction application is used for a plurality of users to interact in the virtual scene; the camera coordinate system is a three-dimensional coordinate system determined by the view angle of a first user in the interactive application;
A processing module 1202, configured to determine three-dimensional coordinates of at least one object in a frame to be displayed of the interactive application in the world coordinate system;
Converting three-dimensional coordinates of the at least one object in the world coordinate system into pixel coordinates in a pixel coordinate system through the camera coordinate system; the pixel coordinates are used to display the at least one object.
Optionally, the processing module 1202 is specifically configured to:
Determining a conversion relationship between the camera coordinate system and the world coordinate system; the conversion relation is used for determining three-dimensional coordinates of the at least one object under the camera coordinate system;
Determining imaging parameters in the camera coordinate system; the imaging parameters are used for converting the three-dimensional coordinates of the at least one object in the camera coordinate system into two-dimensional pixel coordinates in the pixel coordinate system;
And determining pixel coordinates of the at least one object according to the conversion relation and the imaging parameters.
Optionally, the at least one object is a scene object in the virtual scene;
the processing module 1202 is specifically configured to:
And constructing a geometric function of the scene object under the world coordinate system according to the geometric figure of the scene object, so as to determine the three-dimensional coordinate of the scene object under the world coordinate system.
Optionally, the at least one object is a user window in the virtual scene;
the processing module 1202 is specifically configured to:
Determining a user window in the virtual scene;
and determining the three-dimensional coordinates of the user window in the world coordinate system, thereby obtaining the three-dimensional coordinates of the user window in the world coordinate system.
Optionally, the at least one object is a second user within the virtual scene; the second user is any user except the first user in a plurality of users;
the processing module 1202 is specifically configured to:
determining three-dimensional coordinates of at least one object in a picture to be displayed of the interactive application in the world coordinate system comprises:
Acquiring an interaction picture of the second user from the video picture of the second user;
Projecting the interaction picture of the second user into a user window of the second user;
And determining the three-dimensional coordinates of the user window of the second user in the world coordinate system, so as to obtain the three-dimensional coordinates of the second user in the world coordinate system.
Optionally, the three-dimensional origin of the world coordinate system is projected in a direction perpendicular to the ground by the three-dimensional origin of the camera coordinate system.
Optionally, the processing module 1202 is specifically configured to:
Acquiring a camera coordinate system corresponding to a switching view angle instruction according to the switching view angle instruction of the first user; a camera coordinate system with a plurality of visual angles is preset in the interactive application; or creating the camera coordinate system according to the view angle creation instruction of the first user; or creating the camera coordinate system according to the camera coordinate system preset in the interactive application.
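The three ways of obtaining a camera coordinate system described above (select a preset matching the switch-view instruction, create one from a view-angle creation instruction, or fall back to an application default) can be sketched as a simple lookup. The names and the representation of a camera coordinate system (pitch angle plus height) are assumptions for illustration, not the patent's data model.

```python
# hypothetical preset camera coordinate systems of the interactive application
PRESET_CAMERAS = {
    "top":  {"pitch": 1.57, "height": 2.0},
    "side": {"pitch": 0.0,  "height": 1.2},
}

DEFAULT_CAMERA = {"pitch": 0.0, "height": 1.0}

def camera_for_instruction(instruction, presets=PRESET_CAMERAS):
    """Return the preset camera coordinate system named by the first user's
    switch-view instruction, or a newly created default when none matches."""
    if instruction in presets:
        return presets[instruction]
    return dict(DEFAULT_CAMERA)  # create a new camera coordinate system

cam = camera_for_instruction("side")
```

The point of the sketch is only the dispatch: the same entry point serves both the switch-view path and the create-new-view path.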
Based on the same technical concept, the embodiment of the invention further provides a computing device, including:
a memory for storing program instructions;
and the processor is used for calling the program instructions stored in the memory and executing the picture display method according to the obtained program.
Based on the same technical concept, the embodiment of the present invention also provides a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the above-described method of displaying a screen.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (8)
1. A method of displaying a picture, comprising:
Constructing a camera coordinate system and a world coordinate system of the interactive application; the interaction application is used for a plurality of users to interact in the virtual scene; the camera coordinate system is a three-dimensional coordinate system determined by the view angle of a first user in the interactive application; the three-dimensional origin of the world coordinate system is obtained by projecting the three-dimensional origin of the camera coordinate system in a direction perpendicular to the ground;
Determining three-dimensional coordinates of at least one object in a picture to be displayed of the interactive application under the world coordinate system;
Converting three-dimensional coordinates of the at least one object in the world coordinate system into pixel coordinates in a pixel coordinate system through the camera coordinate system, wherein the pixel coordinates are used for displaying the at least one object;
Converting, by the camera coordinate system, three-dimensional coordinates of the at least one object in the world coordinate system to pixel coordinates in a pixel coordinate system, comprising:
Determining a conversion relationship between the camera coordinate system and the world coordinate system; the conversion relation is used for determining three-dimensional coordinates of the at least one object under the camera coordinate system;
Determining imaging parameters in the camera coordinate system; the imaging parameters are used for converting the three-dimensional coordinates of the at least one object in the camera coordinate system into two-dimensional pixel coordinates in the pixel coordinate system;
And determining pixel coordinates of the at least one object according to the conversion relation and the imaging parameters.
2. The method of claim 1, wherein the at least one object is a scene object within the virtual scene;
determining three-dimensional coordinates of at least one object in a picture to be displayed of the interactive application in the world coordinate system comprises:
And constructing a geometric function of the scene object under the world coordinate system according to the geometric figure of the scene object, so as to determine the three-dimensional coordinate of the scene object under the world coordinate system.
3. The method of claim 1, wherein the at least one object is a user window within the virtual scene;
determining three-dimensional coordinates of at least one object in a picture to be displayed of the interactive application in the world coordinate system comprises:
Determining a user window in the virtual scene;
and determining the three-dimensional coordinates of the user window in the world coordinate system, thereby obtaining the three-dimensional coordinates of the user window in the world coordinate system.
4. The method of claim 1, wherein the at least one object is a second user within the virtual scene; the second user is any user except the first user in a plurality of users;
determining three-dimensional coordinates of at least one object in a picture to be displayed of the interactive application in the world coordinate system comprises:
Acquiring an interaction picture of the second user from the video picture of the second user;
Projecting the interaction picture of the second user into a user window of the second user;
And determining the three-dimensional coordinates of the user window of the second user in the world coordinate system, so as to obtain the three-dimensional coordinates of the second user in the world coordinate system.
5. The method of any of claims 1 to 4, wherein constructing a camera coordinate system of the interactive application comprises:
acquiring a camera coordinate system corresponding to a switching view angle instruction according to the switching view angle instruction of the first user; a camera coordinate system with a plurality of visual angles is preset in the interactive application; or (b)
Creating the camera coordinate system according to the view angle creation instruction of the first user; or (b)
And creating a camera coordinate system according to the camera coordinate system preset in the interactive application.
6. A device for displaying a picture, comprising:
The construction module is used for constructing a camera coordinate system and a world coordinate system of the interactive application; the interaction application is used for a plurality of users to interact in the virtual scene; the camera coordinate system is a three-dimensional coordinate system determined by the view angle of a first user in the interactive application; the three-dimensional origin of the world coordinate system is obtained by projecting the three-dimensional origin of the camera coordinate system in a direction perpendicular to the ground;
The processing module is used for determining three-dimensional coordinates of at least one object in a picture to be displayed of the interactive application under the world coordinate system;
converting three-dimensional coordinates of the at least one object in the world coordinate system into pixel coordinates in a pixel coordinate system through the camera coordinate system; the pixel coordinates are used for displaying the at least one object;
optionally, the processing module is specifically configured to:
Determining a conversion relationship between the camera coordinate system and the world coordinate system; the conversion relation is used for determining three-dimensional coordinates of the at least one object under the camera coordinate system;
Determining imaging parameters in the camera coordinate system; the imaging parameters are used for converting the three-dimensional coordinates of the at least one object in the camera coordinate system into two-dimensional pixel coordinates in the pixel coordinate system;
And determining pixel coordinates of the at least one object according to the conversion relation and the imaging parameters.
7. A computing device, comprising:
a memory for storing program instructions;
a processor for invoking program instructions stored in said memory to perform the method of any of claims 1-5 in accordance with the obtained program.
8. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011137849.5A CN113476835B (en) | 2020-10-22 | 2020-10-22 | Picture display method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011137849.5A CN113476835B (en) | 2020-10-22 | 2020-10-22 | Picture display method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113476835A CN113476835A (en) | 2021-10-08 |
CN113476835B true CN113476835B (en) | 2024-06-07 |
Family
ID=77932614
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011137849.5A Active CN113476835B (en) | 2020-10-22 | 2020-10-22 | Picture display method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113476835B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017092307A1 (en) * | 2015-12-01 | 2017-06-08 | 乐视控股(北京)有限公司 | Model rendering method and device |
CN107396037A (en) * | 2016-05-16 | 2017-11-24 | 杭州海康威视数字技术股份有限公司 | Video frequency monitoring method and device |
CN108334199A (en) * | 2018-02-12 | 2018-07-27 | 华南理工大学 | The multi-modal exchange method of movable type based on augmented reality and device |
CN108830894A (en) * | 2018-06-19 | 2018-11-16 | 亮风台(上海)信息科技有限公司 | Remote guide method, apparatus, terminal and storage medium based on augmented reality |
CN109454634A (en) * | 2018-09-20 | 2019-03-12 | 广东工业大学 | A kind of Robotic Hand-Eye Calibration method based on flat image identification |
CN110827376A (en) * | 2018-08-09 | 2020-02-21 | 北京微播视界科技有限公司 | Augmented reality multi-plane model animation interaction method, device, equipment and storage medium |
CN110989825A (en) * | 2019-09-10 | 2020-04-10 | 中兴通讯股份有限公司 | Augmented reality interaction implementation method and system, augmented reality device and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8840466B2 (en) * | 2011-04-25 | 2014-09-23 | Aquifi, Inc. | Method and system to create three-dimensional mapping in a two-dimensional game |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018188499A1 (en) | Image processing method and device, video processing method and device, virtual reality device and storage medium | |
US10109113B2 (en) | Pattern and method of virtual reality system based on mobile devices | |
US9818228B2 (en) | Mixed reality social interaction | |
US11282264B2 (en) | Virtual reality content display method and apparatus | |
US9654734B1 (en) | Virtual conference room | |
US10192363B2 (en) | Math operations in mixed or virtual reality | |
US10573060B1 (en) | Controller binding in virtual domes | |
CN108282648B (en) | VR rendering method and device, wearable device and readable storage medium | |
CN110889890A (en) | Image processing method and device, processor, electronic device and storage medium | |
CN110178370A (en) | Use the light stepping and this rendering of virtual view broadcasting equipment progress for solid rendering | |
US20170186219A1 (en) | Method for 360-degree panoramic display, display module and mobile terminal | |
US20120212491A1 (en) | Indirect lighting process for virtual environments | |
JP2009252240A (en) | System, method and program for incorporating reflection | |
CN101631257A (en) | Method and device for realizing three-dimensional playing of two-dimensional video code stream | |
CN110568923A (en) | unity 3D-based virtual reality interaction method, device, equipment and storage medium | |
US20230290043A1 (en) | Picture generation method and apparatus, device, and medium | |
US10740957B1 (en) | Dynamic split screen | |
CN116057577A (en) | Map for augmented reality | |
US10909752B2 (en) | All-around spherical light field rendering method | |
CN112891940A (en) | Image data processing method and device, storage medium and computer equipment | |
CN108986228B (en) | Method and device for displaying interface in virtual reality | |
CN110060349B (en) | Method for expanding field angle of augmented reality head-mounted display equipment | |
CN113476835B (en) | Picture display method and device | |
JP2023171298A (en) | Adaptation of space and content for augmented reality and composite reality | |
CN115908755A (en) | AR projection method, system and AR projector |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Country or region after: China Address after: 266555, No. 218, Bay Road, Qingdao economic and Technological Development Zone, Shandong Applicant after: Hisense Group Holding Co.,Ltd. Address before: 266555, No. 218, Bay Road, Qingdao economic and Technological Development Zone, Shandong Applicant before: QINGDAO HISENSE ELECTRONIC INDUSTRY HOLDING Co.,Ltd. Country or region before: China |
|
CB02 | Change of applicant information | ||
GR01 | Patent grant |