CN111757081A - Movement limiting method for virtual scene, client, server and computing equipment - Google Patents


Info

Publication number
CN111757081A
Authority
CN
China
Prior art keywords
virtual scene
virtual
reachable
rendering
virtual camera
Prior art date
Legal status
Granted
Application number
CN202010462746.XA
Other languages
Chinese (zh)
Other versions
CN111757081B (en)
Inventor
李文辉
贾清
Current Assignee
Hainan Chezhiyi Communication Information Technology Co ltd
Original Assignee
Hainan Chezhiyi Communication Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hainan Chezhiyi Communication Information Technology Co ltd
Priority to CN202010462746.XA
Publication of CN111757081A
Application granted
Publication of CN111757081B
Legal status: Active


Classifications

    • H Electricity
    • H04 Electric communication technique
    • H04N Pictorial communication, e.g. television
    • H04N13/00 Stereoscopic video systems; multi-view video systems; details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/122 Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • G Physics
    • G06 Computing; calculating or counting
    • G06T Image data processing or generation, in general
    • G06T15/00 3D [three-dimensional] image rendering
    • G06T15/005 General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention discloses a movement limiting method for a virtual scene, which includes the following steps: moving a virtual camera to a target position; acquiring the projection point of the target position on a predetermined plane; determining the corresponding pixel point of the projection point on the reachable area image of the virtual scene based on the conversion ratio corresponding to the virtual scene; determining whether the target position is reachable based on a predetermined channel value of the corresponding pixel point in the reachable area image; and, when the target position is not reachable, rendering the virtual scene based on the last position reached by the virtual camera. Embodiments of the invention also disclose a corresponding client, server, system, and computing device.

Description

Movement limiting method for virtual scene, client, server and computing equipment
Technical Field
The invention relates to the technical field of three-dimensional visualization, in particular to a movement limiting method, a client, a server and computing equipment for a virtual scene.
Background
Internet-based virtual reality (Web 3D) technology developed along with the Internet and virtual reality technologies; its aim is to build virtual 3D scenes on the Internet so that people can perceive real objects more clearly and intuitively. With the rapid development of technologies such as HTML5 and WebGL, Web 3D technology has gradually matured. Currently, Web 3D technology is widely used in fields such as e-commerce, education, and entertainment.
The basic principle of virtual reality technology is to create a set of three-dimensional models in a computing device in advance, forming a three-dimensional scene. The scene contains one or more virtual camera objects whose positions can be controlled by the user through an external device such as a mouse or keyboard. When the user operates the external device, the virtual camera object in the scene moves and rotates accordingly. The computing device renders a series of rapidly switching screen images in real time from the position of the virtual camera, so that the user feels as if roaming the virtual scene from a first-person perspective.
However, first-person movement within a 3D scene typically requires determining whether the area the user wants to move to is reachable, so that movement can be limited. A common practice is to add a bounding box to each obstacle and, when the virtual camera moves, cast a ray and detect whether it intersects a bounding box, thereby determining whether a collision with the obstacle occurs and hence whether the area is reachable. However, this approach requires creating a bounding box for every object in the 3D scene, which increases scene complexity and reduces rendering efficiency. In addition, collision detection requires a large amount of computation, so the device heats up, power consumption rises, and the computation may even become the bottleneck of the whole 3D rendering pipeline, lowering the frame rate.
Therefore, a more efficient movement limiting scheme for virtual scenes is desired.
Disclosure of Invention
To this end, embodiments of the present invention provide a movement limiting method, a client, a server and a computing device for a virtual scene, in an effort to solve or at least alleviate the above existing problems.
According to an aspect of an embodiment of the present invention, there is provided a movement restriction method for a virtual scene, including: moving a virtual camera to a target position; acquiring the projection point of the target position on a predetermined plane; determining the corresponding pixel point of the projection point on the reachable area image of the virtual scene based on the conversion ratio corresponding to the virtual scene; determining whether the target position is reachable based on a predetermined channel value of the corresponding pixel point in the reachable area image; and rendering the virtual scene based on the last position reached by the virtual camera when the target position is not reachable.
Optionally, in the method according to the embodiment of the present invention, in a case where the target location is reachable, the virtual scene is rendered based on the target location.
Optionally, in the method according to the embodiment of the present invention, determining the corresponding pixel point of the projection point on the reachable area image of the virtual scene based on the conversion ratio corresponding to the virtual scene includes: converting the coordinates of the projection point in the virtual scene based on the conversion ratio corresponding to the virtual scene to obtain the coordinates of the corresponding pixel point in the reachable area image.
Optionally, in the method according to the embodiment of the present invention, the method further includes: and acquiring the reachable area image and the conversion ratio corresponding to the virtual scene from the server.
Optionally, in the method according to the embodiment of the present invention, the reachable area image corresponding to the virtual scene is generated by the server: the server moves the virtual camera to a predetermined position, renders the virtual scene in a predetermined direction based on that position, and then generates the reachable area image from the rendered image frame.
Optionally, in the method according to the embodiment of the present invention, the conversion ratio corresponding to the virtual scene is obtained by the server based on the view angle width of the virtual camera and the pixel width of the reachable area image.
Optionally, in the method according to the embodiment of the present invention, the coordinate system of the virtual scene includes an X axis, a Y axis, and a Z axis, and the predetermined plane is a plane determined by the X axis and the Z axis.
Optionally, in a method according to an embodiment of the invention, the predetermined channel value is an Alpha channel value.
Optionally, in a method according to an embodiment of the invention, the virtual camera comprises an orthographic camera.
According to another aspect of the embodiments of the present invention, there is provided a movement restriction method for a virtual scene, including: moving the virtual camera to a predetermined position; rendering the virtual scene in a predetermined direction based on the predetermined position to obtain a top view image frame; generating a reachable area image corresponding to the virtual scene based on the top view image frame; and obtaining a conversion ratio corresponding to the virtual scene based on the view angle width of the virtual camera and the pixel width of the reachable area image.
Optionally, in the method according to the embodiment of the present invention, generating the reachable area image corresponding to the virtual scene based on the top view image frame includes: determining, for each pixel point in the top view image frame, whether its corresponding position in the virtual scene is unreachable; if so, configuring the predetermined channel value of the pixel point in the top view image frame to a first value, and if not, to a second value.
Optionally, in the method according to the embodiment of the present invention, determining whether the corresponding position of the pixel point in the virtual scene is unreachable includes: determining whether the height of the pixel point in the virtual scene exceeds a predetermined threshold.
Optionally, in the method according to the embodiment of the present invention, the method further includes: and sending the reachable area image and the conversion ratio corresponding to the virtual scene to the browser.
Optionally, in the method according to the embodiment of the present invention, the predetermined position is located above the virtual scene, and the predetermined direction is a direction pointing from above to the ground of the virtual scene.
Optionally, in a method according to an embodiment of the invention, the predetermined channel value is an Alpha channel value.
Optionally, in a method according to an embodiment of the invention, the virtual camera comprises an orthographic camera.
According to another aspect of the embodiments of the present invention, there is provided a client adapted to present a virtual scene and including: a moving module adapted to move the virtual camera to a target position; a conversion module adapted to acquire the projection point of the target position on a predetermined plane, and to determine the corresponding pixel point of the projection point on the reachable area image of the virtual scene based on the conversion ratio corresponding to the virtual scene; a judging module adapted to determine whether the target position is reachable based on the predetermined channel value of the corresponding pixel point in the reachable area image; and a rendering module adapted to render the virtual scene based on the position of the virtual camera at predetermined time intervals, and further adapted to render the virtual scene based on the last position reached by the virtual camera if the judging module determines that the target position is not reachable after the moving module moves the virtual camera to the target position.
According to another aspect of the embodiments of the present invention, there is provided a server, including: a moving module adapted to move the virtual camera to a predetermined position; a rendering module adapted to render the virtual scene based on the position of the virtual camera at predetermined time intervals, and further adapted to render the virtual scene in a predetermined direction based on the predetermined position after the moving module moves the virtual camera there, obtaining a top view image frame; and a generating module adapted to generate a reachable area image corresponding to the virtual scene based on the top view image frame, and further adapted to obtain the conversion ratio corresponding to the virtual scene based on the view angle width of the virtual camera and the pixel width of the reachable area image.
According to another aspect of an embodiment of the present invention, there is provided a rendering system including: a client according to an embodiment of the present invention; and a server according to an embodiment of the present invention.
According to still another aspect of an embodiment of the present invention, there is provided a computing device including: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing the methods according to embodiments of the invention.
The movement limiting scheme for virtual scenes according to embodiments of the invention detects whether a target position is reachable simply through the reachable area image and the conversion ratio, avoiding a large amount of real-time computation and thereby solving problems such as frame rate reduction and device heating caused by computing reachable areas in real time on a computing device (e.g., a mobile terminal).
The foregoing is only an overview of the technical solutions of the embodiments of the present invention. To make the technical means of the embodiments more clearly understood so that they can be implemented according to the content of this description, and to make the above and other objects, features, and advantages of the embodiments more comprehensible, the detailed description of the invention is provided below.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 shows a schematic diagram of a rendering system 100 according to one embodiment of the invention;
FIG. 2 shows a schematic diagram of a computing device 200, according to one embodiment of the invention;
FIG. 3 illustrates a flow diagram of a movement limiting method 300 for a virtual scene, according to one embodiment of the invention;
FIG. 4 illustrates a flow diagram of a movement limiting method 400 for a virtual scene, according to one embodiment of the invention;
FIG. 5 shows a schematic diagram of a client 120 according to one embodiment of the invention; and
fig. 6 shows a schematic diagram of a server 140 according to one embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
FIG. 1 shows a schematic diagram of a rendering system 100 according to one embodiment of the invention. As shown in fig. 1, rendering system 100 may include a client 120 and a server 140. In other embodiments, the rendering system 100 may include different and/or additional modules.
The client 120 may render a virtual two-dimensional and/or three-dimensional scene for presentation to a user for viewing. The server 140 may store data related to the virtual scene and communicate with the client 120 via the network 160, for example, to send scene resources related to the virtual scene to the client 120. Network 160 may include wired and/or wireless communication paths.
According to an embodiment of the present invention, each component (client, server, etc.) in the rendering system 100 described above may be implemented by the computing device 200 described below.
FIG. 2 shows a schematic diagram of a computing device 200, according to one embodiment of the invention. As shown in FIG. 2, in a basic configuration 202, a computing device 200 typically includes a system memory 206 and one or more processors 204. A memory bus 208 may be used for communication between the processor 204 and the system memory 206.
Depending on the desired configuration, the processor 204 may be any type of processor, including but not limited to: a microprocessor (μ P), a microcontroller (μ C), a Digital Signal Processor (DSP), or any combination thereof. The processor 204 may include one or more levels of cache, such as a level one cache 210 and a level two cache 212, a processor core 214, and registers 216. Example processor cores 214 may include Arithmetic Logic Units (ALUs), Floating Point Units (FPUs), digital signal processing cores (DSP cores), or any combination thereof. The example memory controller 218 may be used with the processor 204, or in some implementations the memory controller 218 may be an internal part of the processor 204.
Depending on the desired configuration, system memory 206 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. System memory 206 may include an operating system 220, one or more applications 222, and program data 224. In some implementations, the application 222 can be arranged to execute instructions on the operating system with the program data 224 by the one or more processors 204.
Computing device 200 may also include an interface bus 240 that facilitates communication from various interface devices (e.g., output devices 242, peripheral interfaces 244, and communication devices 246) to the basic configuration 202 via the bus/interface controller 230. The example output device 242 includes a graphics processing unit 248 and an audio processing unit 250. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 252. Example peripheral interfaces 244 can include a serial interface controller 254 and a parallel interface controller 256, which can be configured to facilitate communications with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 258. An example communication device 246 may include a network controller 260, which may be arranged to facilitate communications with one or more other computing devices 262 over a network communication link via one or more communication ports 264.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer-readable instructions, data structures, or program modules in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media. A "modulated data signal" may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or direct-wired connection, and various wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR), or other wireless media. The term computer-readable media as used herein may include both storage media and communication media.
Computing device 200 may be implemented as a server, such as a database server, an application server, a WEB server, and the like, or as a personal computer including desktop and notebook computer configurations. Of course, computing device 200 may also be implemented as at least a portion of a small-sized portable (or mobile) electronic device.
In embodiments consistent with the invention, computing device 200 may be implemented as client 120 and/or server 140 and configured to perform movement restriction methods 300 and/or 400 for virtual scenes in accordance with embodiments of the invention. The application 222 of the computing device 200 includes a plurality of instructions for executing the movement limiting method 300 and/or 400 for a virtual scene according to the embodiment of the present invention, and the program data 224 may further store the configuration data of the rendering system 100 and other contents.
FIG. 3 illustrates a flow diagram of a movement limiting method 300 for a virtual scene, according to one embodiment of the invention. The movement limiting method 300 is adapted to be executed in the client 120.
As shown in fig. 3, the movement limiting method 300 begins at step S310. In step S310, the virtual camera in the virtual scene is moved to the target position. Generally, the virtual camera may be an orthographic camera whose position in the virtual scene can be controlled by the user through an external device such as a mouse or keyboard. When the user operates the external device, the virtual camera object in the scene moves and rotates accordingly.
Then, in step S320, a projected point of the target position on a predetermined plane is acquired. For example, the target position may be projected onto a predetermined plane, resulting in a projected point. In some embodiments, the virtual scene is a three-dimensional scene, the coordinate system of which includes an X-axis, a Y-axis and a Z-axis, and the predetermined plane may be a two-dimensional plane defined by the X-axis and the Z-axis, i.e., an XZ plane. Assuming that the coordinates of the target position in the virtual scene are (x, y, z), the coordinates of its projected point on the predetermined plane are (x, z).
Then, in step S330, a corresponding pixel point of the projection point on the reachable area image of the virtual scene is determined based on the conversion ratio corresponding to the virtual scene.
In some embodiments, the coordinates of the projection point in the virtual scene may be converted based on the conversion ratio corresponding to the virtual scene; the result is the coordinates of the corresponding pixel point in the reachable area image. For example, the conversion ratio is the pixel width of the reachable area image divided by the view angle width of the virtual camera, and the coordinates of the projection point in the virtual scene may be multiplied by this ratio to obtain the coordinates of the corresponding pixel point in the reachable area image.
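As a minimal sketch of steps S320 and S330 (all function names and the centering offset are assumptions, not from the patent; the offset holds when the image center corresponds to scene coordinate (0, 0)), the projection drops the Y coordinate and the conversion ratio scales scene units into pixels:

```javascript
// Project a 3D position onto the plane defined by the X and Z axes:
// simply drop the Y (height) coordinate.
function projectToXZ(position) {
  return { x: position.x, z: position.z };
}

// Convert a projected point to pixel coordinates of the reachable-area
// image. ratio = image pixel width / camera view-angle width (provided
// by the server). We assume the image center maps to scene (0, 0).
function toPixel(point, ratio, imageWidth, imageHeight) {
  return {
    px: Math.round(imageWidth / 2 + point.x * ratio),
    py: Math.round(imageHeight / 2 + point.z * ratio),
  };
}

const target = { x: 2, y: 1.6, z: -3 };    // camera target position (x, y, z)
const proj = projectToXZ(target);          // projected point (x, z)
const pixel = toPixel(proj, 10, 200, 200); // 10 px per scene unit, 200x200 image
```

With these illustrative numbers, the target (2, 1.6, -3) projects to (2, -3) and lands on pixel (120, 70) of a 200 by 200 reachable-area image.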
For example, when the client 120 performs rendering for the first time, it needs to request the server 140 to acquire the scene resources (e.g., models, etc.), the reachable area image, and the conversion ratio corresponding to the virtual scene to be rendered. The generation process of the reachable area image and the conversion ratio will be described below.
Then, in step S340, it may be determined whether the target position is reachable based on the predetermined channel value of the corresponding pixel point in the reachable region image. The predetermined channel value of each pixel in the reachable region image may indicate whether the pixel is reachable. In some embodiments, the predetermined channel values may be Alpha channel values. Of course, other channel values are also possible, and the invention is not limited in this regard.
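A hedged sketch of the step-S340 lookup follows; the function name and the flat RGBA layout (4 bytes per pixel, as returned by e.g. a canvas `getImageData()` call) are assumptions, with Alpha 0 marking an unreachable pixel:

```javascript
// Read the Alpha byte of the pixel (px, py) in a flat RGBA buffer.
// A zero Alpha value marks the position as unreachable.
function isReachable(rgbaData, imageWidth, px, py) {
  const alphaIndex = (py * imageWidth + px) * 4 + 3; // RGBA: alpha is byte 3
  return rgbaData[alphaIndex] !== 0;
}

// A 2x1-pixel reachable-area image: pixel (0,0) blocked, pixel (1,0) open.
const data = new Uint8Array([0, 0, 0, 0, 0, 0, 0, 255]);
```

The per-move cost is a single array read, which is the source of the scheme's efficiency compared with ray-based collision detection.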
If the target location is not reachable, the virtual camera may be returned to the last location it reached and the virtual scene rendered based on that last location in step S350. If the target location is reachable, the virtual scene may be rendered based on the target location.
It should be noted that the virtual scene is typically rendered based on the position of the virtual camera at predetermined intervals, for example 20 times per second. While the virtual camera is moving, before each rendering, the movement limiting method 300 of the embodiment of the present invention may be used to determine whether the moved-to position is reachable, and rendering is performed based on the result.
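The per-interval check described above could be wired up as follows (all names are invented for this sketch): before each render tick the candidate position is validated, and an unreachable target makes the camera fall back to the last position it actually reached.

```javascript
// Create a controller that validates each candidate camera position
// before rendering, falling back to the last reached position when the
// target is blocked.
function makeCameraController(isReachableAt, render, startPosition) {
  let lastValid = { ...startPosition };
  return {
    tick(targetPosition) {
      const position = isReachableAt(targetPosition) ? targetPosition : lastValid;
      lastValid = { ...position };   // remember the last reached position
      render(position);              // render the scene from the chosen position
      return position;
    },
  };
}

// Illustrative use: positions with x >= 5 are unreachable.
const frames = [];
const controller = makeCameraController(
  (p) => p.x < 5,
  (p) => frames.push(p),
  { x: 0, z: 0 }
);
```

In a browser client, `controller.tick(...)` could be driven by something like `setInterval` at 50 ms for roughly 20 renders per second (again an illustration, not the patent's prescribed mechanism).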
The following describes a conversion ratio corresponding to a virtual scene and a process of generating an reachable area image.
FIG. 4 illustrates a flow diagram of a movement limiting method 400 for a virtual scene, according to one embodiment of the invention. The movement limiting method 400 is adapted to be executed in the server 140.
As shown in fig. 4, the movement limiting method 400 begins at step S410. In step S410, the virtual camera is moved to a predetermined position. In the coordinate system of the virtual scene, the Y axis may represent height, with positive values pointing upward. The ground of the virtual scene is typically set at y = 0. In some embodiments, the X and Z coordinates of the predetermined position may both be 0, and the Y coordinate may depend on the height of the virtual scene; for example, the coordinates of the predetermined position are (0, 5, 0).
Then, in step S420, the virtual scene is rendered in a predetermined direction based on the predetermined position, resulting in a top view image frame. In some embodiments, the predetermined direction may be straight down, i.e., from above toward the ground of the virtual scene.
Then, in step S430, a reachable area image corresponding to the rendered virtual scene may be generated based on the top view image frame. Specifically, the channel values of the top view image frame may be edited to obtain the reachable area image. In some embodiments, for each pixel point in the top view image frame, it may be determined whether the pixel point is unreachable in the virtual scene. For example, it may be determined whether the height (i.e., the Y coordinate) of the pixel point in the virtual scene exceeds a predetermined threshold: if so, the pixel point is unreachable; if not, it is reachable. Alternatively, it may be determined whether the coordinates of the pixel point in the virtual scene fall within a reachable or an unreachable area range (the ranges may be configured in advance).
It should be noted that only two specific examples of how the server 140 may determine reachability are given above; those skilled in the art can conceive of other ways to determine reachability based on these examples, and all such ways fall within the scope of the present invention.
In the event that the pixel point is determined to be unreachable, the predetermined channel value of the pixel point in the top-view image frame may be configured to be a first value (e.g., to be 0). In the case where it is determined that the pixel point is reachable, the predetermined channel value of the pixel point in the top-view image frame may be configured to be a second value (e.g., configured to be 1).
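A sketch of this channel-editing step under assumed representations: per-pixel heights sampled from the top-down render arrive as a flat array, and the Alpha byte of each output pixel is set according to the height threshold. The patent's first/second values (0 and 1) are represented here as the byte values 0 and 255, which is an assumption about the image encoding.

```javascript
// Build a flat RGBA reachable-area image from per-pixel scene heights:
// Alpha 0 (first value) where the height exceeds the threshold, meaning
// unreachable; Alpha 255 (second value) otherwise, meaning reachable.
function buildReachableImage(heights, width, height, threshold) {
  const rgba = new Uint8Array(width * height * 4); // 4 bytes per pixel
  for (let i = 0; i < width * height; i++) {
    rgba[i * 4 + 3] = heights[i] > threshold ? 0 : 255;
  }
  return rgba;
}

// Three pixels: the middle one (height 2) exceeds threshold 1, so it is blocked.
const image = buildReachableImage([0, 2, 0.5], 3, 1, 1);
```

The color channels are left at zero here; only the Alpha channel carries reachability information, matching the predetermined-channel design above.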
In step S440, a conversion ratio corresponding to the virtual scene is obtained based on the view angle width of the virtual camera and the pixel width of the reachable area image. For example, the conversion ratio is obtained by dividing the pixel width of the reachable area image by the view angle width of the virtual camera.
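Step S440 reduces to a single division; the numbers below are an illustrative worked example, not taken from the patent:

```javascript
// Conversion ratio: pixel width of the reachable-area image divided by
// the view-angle width of the orthographic camera.
function conversionRatio(imagePixelWidth, viewAngleWidth) {
  return imagePixelWidth / viewAngleWidth;
}

// A 1024-px-wide image covering a 64-unit-wide view: 16 px per scene unit.
const ratio = conversionRatio(1024, 64);
```

A scene coordinate is then multiplied by this ratio (plus any centering offset) in step S330 on the client.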
Finally, the obtained reachable area image and conversion ratio are stored in association with the virtual scene.
In addition, in some embodiments, the scene resources corresponding to the virtual scene may also be sent to the client 120, and/or the reachable area image and the conversion ratio corresponding to the virtual scene may also be sent to the client 120.
Fig. 5 shows a schematic diagram of a client 120 according to an embodiment of the invention. As shown in fig. 5, the client 120 may include a rendering module 510, a moving module 520, a converting module 530, and a determining module 540.
The rendering module 510 is adapted to render the virtual scene based on the position of the virtual camera at predetermined time intervals. The moving module 520 is adapted to move the virtual camera to the target position. The transformation module 530 is adapted to obtain a projection point of the target position on the predetermined plane, and is further adapted to determine a corresponding pixel point of the projection point on the reachable area image of the virtual scene based on a transformation ratio corresponding to the virtual scene. The determining module 540 is adapted to determine whether the target location is reachable based on the predetermined channel values of the corresponding pixel points in the reachable region image.
After the moving module 520 moves the virtual camera to the target position, if the determining module 540 determines that the target position is not reachable, the moving module 520 is further adapted to return the virtual camera to the last position it reached, and the rendering module 510 is further adapted to render the virtual scene based on the last position the virtual camera reached.
For the detailed processing logic and implementation of each module in the client 120, reference may be made to the foregoing description of the movement limiting methods 300 and 400 in conjunction with figs. 1 to 4, which is not repeated here.
Fig. 6 shows a schematic diagram of a server 140 according to one embodiment of the invention. As shown in fig. 6, the server 140 may include a rendering module 610, a moving module 620, and a generating module 630.
The rendering module 610 is adapted to render the virtual scene based on the position of the virtual camera at predetermined time intervals. The moving module 620 is adapted to move the virtual camera to a predetermined position.
The rendering module 610 is further adapted to render the virtual scene in a predetermined direction based on the predetermined position after the moving module 620 moves the virtual camera to the predetermined position, resulting in a top view image frame.
The generating module 630 is adapted to generate a reachable area image corresponding to the virtual scene based on the top view image frame, and is further adapted to obtain the conversion ratio corresponding to the virtual scene based on the view angle width of the virtual camera and the pixel width of the reachable area image.
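The server-side generation step can be sketched in the same spirit. Assuming the top view render yields a per-pixel height map, pixels whose height exceeds a threshold are marked unreachable in the predetermined (Alpha) channel, and the conversion ratio is expressed here as pixels per scene unit. The 0/255 encoding, the threshold, and all names are illustrative assumptions, not values required by the patent.

```python
# Hypothetical sketch, assuming a height map sampled from the top view render.
# 0 plays the role of the "first value" (unreachable) and 255 the "second value"
# (reachable); the exact numbers are assumed for illustration.

def build_reachable_alpha(height_map, threshold):
    """Set the Alpha value to 0 where the height exceeds the threshold, else 255."""
    return [[0 if h > threshold else 255 for h in row]
            for row in height_map]

def conversion_ratio(view_angle_width, pixel_width):
    """Pixels per scene unit: image pixel width over the camera's view width."""
    return pixel_width / view_angle_width
```

Since both the image and the ratio depend only on the scene geometry, they can be computed once offline and shipped to the client alongside the scene resources.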
For the detailed processing logic and implementation procedure of each module in the server 140, reference may be made to the foregoing description of the movement limiting methods 300 and 400 in conjunction with fig. 1 to 4, which is not described herein again.
In summary, according to the movement limiting scheme for a virtual scene of the embodiments of the present invention, whether a target position is reachable is detected simply by means of the reachable area image and the conversion ratio. This eliminates a large amount of real-time computation, and thereby avoids problems such as frame rate drops and device heating that arise when the computing device (e.g., a mobile terminal) computes the reachable area in real time.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of embodiments of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as removable hard drives, USB flash drives, floppy disks, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing embodiments of the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to perform the methods of embodiments of the present invention according to instructions in the program code stored in the memory.
By way of example, and not limitation, readable media may comprise readable storage media and communication media. Readable storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with examples of embodiments of the invention. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of embodiments of the present invention as described herein, and specific languages are described above to disclose embodiments of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
The invention also includes:
A6. The method of A4, wherein the conversion ratio corresponding to the virtual scene is derived by the server based on the view angle width of the virtual camera and the pixel width of the reachable area image.
A7. The method of any one of A1-A6, wherein the coordinate system of the virtual scene includes X, Y and Z axes, and the predetermined plane is the plane determined by the X and Z axes.
A8. The method of any one of A1-A6, wherein the predetermined channel value is an Alpha channel value.
A9. The method of any one of A1-A6, wherein the virtual camera comprises an orthographic camera.
B11. The method of B10, wherein generating the reachable area image corresponding to the virtual scene based on the top view image frame comprises: determining, for each pixel point in the top view image frame, whether its corresponding position in the virtual scene is unreachable; if so, configuring the predetermined channel value of the pixel point in the top view image frame as a first value, and otherwise configuring it as a second value.
B12. The method of B11, wherein determining whether the corresponding position of the pixel point in the virtual scene is unreachable comprises: determining whether the height of the pixel point in the virtual scene exceeds a predetermined threshold.
B13. The method of B10, further comprising: sending the reachable area image and the conversion ratio corresponding to the virtual scene to a browser.
B14. The method of any one of B10-B13, wherein the predetermined position is above the virtual scene and the predetermined direction points from above toward the ground of the virtual scene.
B15. The method of any one of B10-B14, wherein the predetermined channel value is an Alpha channel value.
B16. The method of any one of B10-B15, wherein the virtual camera comprises an orthographic camera.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of, and form, different embodiments of the invention. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the above embodiments are described herein as a method or combination of elements of a method that can be performed by a processor of a computer system or by other means for performing the functions described above. A processor having the necessary instructions for carrying out the method or method elements described above thus forms a means for carrying out the method or method elements. Furthermore, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While embodiments of the invention have been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the embodiments of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive embodiments. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present embodiments are disclosed by way of illustration and not limitation, the scope of embodiments of the invention being defined by the appended claims.

Claims (10)

1. A movement limiting method for a virtual scene, comprising:
moving a virtual camera to a target position;
acquiring a projection point of the target position on a predetermined plane;
determining a corresponding pixel point of the projection point on a reachable area image of the virtual scene based on a conversion ratio corresponding to the virtual scene;
determining whether the target position is reachable based on a predetermined channel value of the corresponding pixel point in the reachable area image; and
in the event that the target position is not reachable, rendering the virtual scene based on the last position reached by the virtual camera.
2. The method of claim 1, further comprising:
rendering the virtual scene based on the target position if the target position is reachable.
3. The method of claim 1, wherein determining the corresponding pixel point of the projection point on the reachable area image of the virtual scene based on the conversion ratio corresponding to the virtual scene comprises:
converting the coordinates of the projection point in the virtual scene based on the conversion ratio corresponding to the virtual scene to obtain the coordinates of the corresponding pixel point in the reachable area image.
4. The method of claim 1, further comprising:
acquiring the reachable area image and the conversion ratio corresponding to the virtual scene from a server.
5. The method of claim 4, wherein the reachable area image corresponding to the virtual scene is generated by the server based on an image frame obtained by moving a virtual camera to a predetermined position and rendering the virtual scene in a predetermined direction based on the predetermined position.
6. A movement limiting method for a virtual scene, comprising:
moving a virtual camera to a predetermined position;
rendering the virtual scene in a predetermined direction based on the predetermined position to obtain a top view image frame;
generating a reachable area image corresponding to the virtual scene based on the top view image frame; and
obtaining a conversion ratio corresponding to the virtual scene based on the view angle width of the virtual camera and the pixel width of the reachable area image.
7. A client adapted to present a virtual scene and comprising:
a moving module adapted to move a virtual camera to a target position;
a conversion module adapted to acquire a projection point of the target position on a predetermined plane, and to determine a corresponding pixel point of the projection point on a reachable area image of the virtual scene based on a conversion ratio corresponding to the virtual scene;
a determining module adapted to determine whether the target position is reachable based on a predetermined channel value of the corresponding pixel point in the reachable area image; and
a rendering module adapted to render the virtual scene based on the position of the virtual camera at predetermined time intervals, and, after the moving module moves the virtual camera to the target position, to render the virtual scene based on the last position reached by the virtual camera if the determining module determines that the target position is not reachable.
8. A server, comprising:
a moving module adapted to move a virtual camera to a predetermined position;
a rendering module adapted to render the virtual scene based on the position of the virtual camera at predetermined time intervals, and further adapted to render the virtual scene in a predetermined direction based on the predetermined position after the moving module moves the virtual camera to the predetermined position, to obtain a top view image frame; and
a generating module adapted to generate a reachable area image corresponding to the virtual scene based on the top view image frame, and further adapted to obtain a conversion ratio corresponding to the virtual scene based on the view angle width of the virtual camera and the pixel width of the reachable area image.
9. A rendering system, comprising:
the client of claim 7; and
the server of claim 8.
10. A computing device, comprising:
one or more processors; and
a memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any of the methods of claims 1-6.
CN202010462746.XA 2020-05-27 2020-05-27 Movement limiting method for virtual scene, client, server and computing equipment Active CN111757081B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010462746.XA CN111757081B (en) 2020-05-27 2020-05-27 Movement limiting method for virtual scene, client, server and computing equipment


Publications (2)

Publication Number Publication Date
CN111757081A true CN111757081A (en) 2020-10-09
CN111757081B CN111757081B (en) 2022-07-08

Family

ID=72674052

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010462746.XA Active CN111757081B (en) 2020-05-27 2020-05-27 Movement limiting method for virtual scene, client, server and computing equipment

Country Status (1)

Country Link
CN (1) CN111757081B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107450747A (en) * 2017-07-25 2017-12-08 腾讯科技(深圳)有限公司 The displacement control method and device of virtual role
US20180373412A1 (en) * 2017-06-26 2018-12-27 Facebook, Inc. Virtual reality safety bounding box
CN109697002A (en) * 2017-10-23 2019-04-30 腾讯科技(深圳)有限公司 A kind of method, relevant device and the system of the object editing in virtual reality


Also Published As

Publication number Publication date
CN111757081B (en) 2022-07-08

Similar Documents

Publication Publication Date Title
CN111815755B (en) Method and device for determining blocked area of virtual object and terminal equipment
CN110276829B (en) Three-dimensional representation by multi-scale voxel hash processing
CN108038897B (en) Shadow map generation method and device
US9330466B2 (en) Methods and apparatus for 3D camera positioning using a 2D vanishing point grid
JP2018106711A (en) Fast rendering of quadrics
US8854392B2 (en) Circular scratch shader
CN112766027A (en) Image processing method, device, equipment and storage medium
US9805499B2 (en) 3D-consistent 2D manipulation of images
CN111915714A (en) Rendering method for virtual scene, client, server and computing equipment
Tian et al. Real time stable haptic rendering of 3D deformable streaming surface
CN111757081B (en) Movement limiting method for virtual scene, client, server and computing equipment
CN112528707A (en) Image processing method, device, equipment and storage medium
CN113920282B (en) Image processing method and device, computer readable storage medium, and electronic device
JP2023527438A (en) Geometry Recognition Augmented Reality Effect Using Real-time Depth Map
CN113379763A (en) Image data processing method, model generating method and image segmentation processing method
CN116228949B (en) Three-dimensional model processing method, device and storage medium
CN111429581A (en) Method and device for determining outline of game model and adding special effect of game
Hauswiesner et al. Temporal coherence in image-based visual hull rendering
CN110827411A (en) Self-adaptive environment augmented reality model display method, device, equipment and storage medium
CN111905365B (en) Method and device for dragging game scene and electronic equipment
CN114820908B (en) Virtual image generation method and device, electronic equipment and storage medium
CN116612228A (en) Method, apparatus and storage medium for smoothing object edges
CN115168826A (en) Projection verification method and device, electronic equipment and computer readable storage medium
CN116824105A (en) Display adjustment method, device and equipment for building information model and storage medium
CN114419286A (en) Panoramic roaming method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant