CN111275611B - Method, device, terminal and storage medium for determining object depth in three-dimensional scene


Info

Publication number: CN111275611B (application CN202010032598.8A)
Authority: CN (China)
Prior art keywords: observation distance, height, distance, virtual camera, maximum
Legal status: Active
Application number: CN202010032598.8A
Other languages: Chinese (zh)
Other versions: CN111275611A (en)
Inventors: 张烨妮 (Zhang Yeni), 刘华 (Liu Hua), 陈继超 (Chen Jichao)
Current assignee: Shenzhen Huaorange Digital Technology Co., Ltd.
Original assignee: Shenzhen Huaorange Digital Technology Co., Ltd.
Application filed by Shenzhen Huaorange Digital Technology Co., Ltd. on 2020-01-13
Priority to CN202010032598.8A
Publication of CN111275611A, followed by grant and publication of CN111275611B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/06: Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T3/067: Reshaping or unfolding 3D tree structures onto 2D planes
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a method, an apparatus, a terminal and a storage medium for determining object depth in a three-dimensional scene. The method comprises the following steps: obtaining the distance between a target object and a virtual camera in the three-dimensional scene; detecting whether the three-dimensional scene is a close-range display scene or a distant-range display scene; when the three-dimensional scene is a close-range display scene, calculating the depth value of the target object by using the distance together with a first preset minimum observation distance and a first preset maximum observation distance of the virtual camera; and when the three-dimensional scene is a distant-range display scene, obtaining the height of the virtual camera above the ground, determining the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height, and calculating the depth value of the target object by combining the distance with the current minimum and maximum observation distances. In this way, the method and the device improve the precision of the depth buffer in areas of the picture that overlap in a three-dimensional display scene, and avoid picture flicker in those overlapping areas.

Description

Method, device, terminal and storage medium for determining object depth in three-dimensional scene
Technical Field
The present invention relates to the field of three-dimensional display technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for determining object depth in a three-dimensional scene.
Background
With the development of technology, three-dimensional scenes are applied in more and more industries. In applications with three-dimensional scenes, a computer is required to convert the data describing the three-dimensional scene into two-dimensional data that can be viewed on an electronic display screen. This process of converting a three-dimensional scene into a two-dimensional image is usually called rendering, and it is typically divided into three stages: an application stage, a geometry stage and a rasterization stage. The rasterization stage interpolates across the vertices of each primitive produced by the previous stage to generate the pixels on the screen and render the final image.
The main task of the rasterization stage is to determine which pixels of each rendering primitive should be drawn on the screen. In this process, the rendering depth value determines which object's rendering result is displayed at a given pixel. In some cases two surfaces overlap in the scene; when the precision of the two objects' rendering depth values is insufficient to decide which surface should be displayed, the renderer displays the two surfaces essentially at random, so the converted two-dimensional image flickers continuously in the area where the two surfaces overlap (the depth conflict, often called Z-fighting, discussed below).
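To make the role of the depth value concrete, the following minimal sketch (not taken from the patent; the function and buffer names are illustrative) shows the per-pixel depth test that produces this behavior: when two fragments carry depth values too close for the buffer's precision to separate, which surface wins a pixel becomes effectively arbitrary from frame to frame.

```python
import numpy as np

def depth_test(depth_buffer, color_buffer, x, y, frag_depth, frag_color):
    # Standard Z-buffer rule: keep the fragment only if it is nearer
    # (smaller depth) than what the pixel already holds.
    if frag_depth < depth_buffer[y, x]:
        depth_buffer[y, x] = frag_depth
        color_buffer[y, x] = frag_color

# Two surfaces whose depths differ by less than float32 can resolve collapse
# to the same stored value, so submission order decides what is displayed.
depth = np.full((1, 1), np.inf, dtype=np.float32)
color = np.zeros((1, 1), dtype=np.uint8)
depth_test(depth, color, 0, 0, np.float32(0.5), 1)
depth_test(depth, color, 0, 0, np.float32(0.5 + 1e-9), 2)  # indistinguishable from 0.5 in float32
```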
In the prior art, this problem is mainly addressed in two ways. One is to improve the precision of the depth buffer by increasing the number of bits of the depth value; however, current graphics cards support at most 32-bit depth values, so the precision of the depth buffer can reach only that of single-precision floating-point numbers. The other is to improve the precision of the buffer by adjusting the minimum observation distance and the maximum observation distance of the virtual camera. Both approaches have drawbacks. The first is limited by hardware, and its precision reaches only about seven significant decimal digits. With the second, objects closer to the virtual camera than the minimum observation distance, or farther than the maximum observation distance, are clipped, which shrinks the range of the scene that can be seen and degrades the user experience.
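The seven-digit figure can be checked directly. As an aside (this illustration is not part of the patent), single-precision floats keep only about seven significant decimal digits:

```python
import numpy as np

# float32 retains roughly 7 significant decimal digits; the 8th is lost.
print(np.float32(0.123456789))                                # 0.12345679
print(np.float32(1.0) + np.float32(1e-8) == np.float32(1.0))  # True: the 1e-8 vanishes
```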
Disclosure of Invention
The application provides a method, an apparatus, a terminal and a storage medium for determining object depth in a three-dimensional scene, so as to solve the problem that overlapping pictures in a three-dimensional scene flicker because existing three-dimensional-scene depth values have insufficient precision.
In order to solve the above technical problem, one technical scheme adopted by the application is as follows: a method for determining object depth in a three-dimensional scene is provided, comprising the following steps: obtaining the distance between a target object and a virtual camera in the three-dimensional scene; detecting whether the three-dimensional scene is a close-range display scene or a distant-range display scene; when the three-dimensional scene is a close-range display scene, calculating the depth value of the target object by using the distance together with a first preset minimum observation distance and a first preset maximum observation distance of the virtual camera; and when the three-dimensional scene is a distant-range display scene, obtaining the height of the virtual camera above the ground, determining the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height, and calculating the depth value of the target object by combining the distance, the current minimum observation distance and the current maximum observation distance.
As a further improvement of the present invention, the step of determining the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height comprises: judging whether the height is smaller than a preset threshold value; if yes, taking a second preset minimum observation distance and a second preset maximum observation distance as the current minimum observation distance and the current maximum observation distance of the virtual camera; if not, determining the current minimum observation distance and the current maximum observation distance of the virtual camera from a pre-configured parameter table by using the height.
As a further improvement of the present invention, the method further comprises constructing the parameter table, and the step of constructing the parameter table comprises: acquiring a plurality of preset heights; respectively calculating the corresponding minimum observation distance and maximum observation distance at each height; rendering a test scene based on the minimum and maximum observation distances corresponding to each height to obtain a display picture; detecting whether the display picture exhibits depth conflict or clips nearby objects; if depth conflict occurs or nearby objects are clipped, readjusting the minimum and maximum observation distances corresponding to that height until the picture displays normally, and then saving the height with its corresponding minimum and maximum observation distances; if no depth conflict occurs and nearby objects are not clipped, saving the height with its corresponding minimum and maximum observation distances; and constructing the parameter table from the saved heights and their corresponding minimum and maximum observation distances.
As a further improvement of the present invention, the calculation formula of the minimum observation distance is:
N=0.1*H;
wherein N is the minimum observation distance and H is the height;
the calculation formula of the maximum observation distance is as follows:
F=3569.6*sqrt(H);
wherein F is the maximum viewing distance.
As a further improvement of the invention, the method further comprises:
and rendering the three-dimensional scene to a two-dimensional screen according to the depth value of the target object.
In order to solve the above technical problem, another technical scheme adopted by the application is as follows: a method for determining object depth in a three-dimensional scene is provided, comprising the following steps: acquiring the distance between a target object and a virtual camera in the three-dimensional scene and the height of the virtual camera above the ground; calculating the current minimum observation distance and the current maximum observation distance of the virtual camera in real time based on the height; and calculating the depth value of the target object by using the distance, the current minimum observation distance and the current maximum observation distance.
In order to solve the technical problems, another technical scheme adopted by the application is as follows: there is provided an object depth determining apparatus in a three-dimensional scene, the apparatus comprising: the first acquisition module is used for acquiring the distance between a target object and the virtual camera in the three-dimensional scene; the detection module is coupled with the first acquisition module and is used for detecting whether the three-dimensional scene is a close-range display scene or a distant-range display scene; the first calculation module is coupled with the detection module and is used for calculating the depth value of the target object by utilizing the distance, the first preset minimum observation distance and the first preset maximum observation distance of the virtual camera when the three-dimensional scene is a close-range display scene; the second calculation module is coupled with the detection module and is used for acquiring the height of the virtual camera from the ground when the three-dimensional scene is a long-range display scene, determining the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height, and calculating the depth value of the target object by combining the distance, the current minimum observation distance and the current maximum observation distance.
In order to solve the technical problems, another technical scheme adopted by the application is as follows: there is provided an object depth determining apparatus in a three-dimensional scene, the apparatus comprising: the second acquisition module is used for acquiring the distance between the target object and the virtual camera in the three-dimensional scene and the height between the virtual camera and the ground; the parameter calculation module is coupled with the second acquisition module and used for calculating the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height in real time; and the depth value calculation module is coupled with the parameter calculation module and is used for calculating the depth value of the target object by using the distance, the current minimum observation distance and the current maximum observation distance.
In order to solve the technical problem, a further technical scheme adopted by the application is as follows: providing a terminal comprising a processor and a memory coupled with the processor, wherein the memory stores program instructions for realizing the object depth determination method in the three-dimensional scene; the processor is configured to execute the program instructions stored by the memory to calculate depth values for objects in the three-dimensional scene.
In order to solve the technical problem, a further technical scheme adopted by the application is as follows: there is provided a storage medium storing a program file capable of realizing the above method for determining object depth in a three-dimensional scene.
The beneficial effects of this application are as follows. The invention first confirms whether the three-dimensional scene is a close-range display scene or a distant-range display scene. When it is a close-range display scene, a fixed first preset minimum observation distance and first preset maximum observation distance are set, and the depth values of objects in the scene are calculated from these two preset parameters; because everything the virtual camera sees is very close, picture flicker in overlapping areas does not occur. When it is a distant-range display scene, the greater the height, the farther the farthest distance visible in the scene, so the current minimum observation distance and current maximum observation distance are determined from the height of the virtual camera above the ground in the three-dimensional scene; the depth values calculated from these current observation distances are therefore more precise, and the problem of picture flicker in overlapping areas is avoided.
Drawings
FIG. 1 is a flow chart of a method for determining depth of an object in a three-dimensional scene according to a first embodiment of the invention;
FIG. 2 is a schematic diagram of a virtual camera and near and far planes according to an embodiment of the present invention;
FIG. 3 is a flow chart of a method for determining depth of an object in a three-dimensional scene according to a second embodiment of the invention;
FIG. 4 is a flow chart of constructing a parameter table according to an embodiment of the present invention;
FIG. 5 is a flow chart of a method for determining depth of an object in a three-dimensional scene according to a third embodiment of the invention;
FIG. 6 is a schematic structural view of an object depth determining apparatus in a three-dimensional scene according to a first embodiment of the present invention;
FIG. 7 is a schematic structural view of an object depth determining apparatus in a three-dimensional scene according to a second embodiment of the present invention;
FIG. 8 is a schematic structural view of a terminal according to an embodiment of the present invention;
FIG. 9 is a schematic structural view of a storage medium according to an embodiment of the present invention.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The terms "first," "second," "third," and the like in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", and "a third" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise. All directional indications (such as up, down, left, right, front, back … …) in the embodiments of the present application are merely used to explain the relative positional relationship, movement, etc. between the components in a particular gesture (as shown in the drawings), and if the particular gesture changes, the directional indication changes accordingly. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Fig. 1 is a flowchart of a method for determining depth of an object in a three-dimensional scene according to a first embodiment of the present invention. It should be noted that, if there are substantially the same results, the method of the present invention is not limited to the flow sequence shown in fig. 1. As shown in fig. 1, the method comprises the steps of:
step S101: and obtaining the distance between the target object and the virtual camera in the three-dimensional scene.
In step S101, after the three-dimensional space is constructed, a three-dimensional space coordinate system with the virtual camera as the origin is established, the coordinates of the target object in this coordinate system are determined through a space transformation, and the distance between the target object and the virtual camera is then calculated.
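A minimal sketch of this step, assuming a 4x4 view matrix is available (the matrix and function names are illustrative, not from the patent): with the virtual camera at the origin of camera space, the distance is simply the norm of the object's camera-space position.

```python
import numpy as np

def distance_to_camera(object_world_pos, world_to_camera):
    # Transform the object's world-space position into the camera-centered
    # coordinate system, then take the Euclidean norm of the result.
    p = world_to_camera @ np.append(np.asarray(object_world_pos, dtype=float), 1.0)
    return float(np.linalg.norm(p[:3]))
```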
Step S102: detecting whether the three-dimensional scene is a close-range display scene or a distant-range display scene. When the three-dimensional scene is a close-up display scene, executing step S103; when the three-dimensional scene is a distant view display scene, step S104 is performed.
Note that a close-range display scene is a scene in which the farthest position the virtual camera can observe is relatively close to the camera, for example a room: in a close-range display scene the virtual camera observes only a relatively short distance because of occlusion by objects (e.g., walls). A distant-range display scene is a scene in which the farthest observable position is far away, for example a plain, the sea surface or a mountain top: in a distant-range display scene the virtual camera can see quite remote locations because nothing obstructs the view.
Step S103: and calculating the depth value of the target object by using the distance, the preset first minimum observation distance and the preset first maximum observation distance of the virtual camera.
It should be noted that the first preset minimum observation distance and the first preset maximum observation distance of the virtual camera are set in advance. In general, referring to fig. 2, when a three-dimensional scene is rendered onto a two-dimensional screen, the position and opening angle of the virtual camera define a near plane and a far plane in the three-dimensional space. Each object between the near plane and the far plane has a Z coordinate value representing the actual depth of the object; the order in which pictures are displayed is then determined by comparing the depth value Z against the depth buffer, the rendering result is displayed on the two-dimensional plane, and the two-dimensional picture finally seen by the eye is formed. The distance from the virtual camera to the near plane is the minimum observation distance, and the distance from the virtual camera to the far plane is the maximum observation distance. It will be appreciated that in a close-range display scene the virtual camera sees only very short distances, so a fixed pair of minimum and maximum observation distances may be set, for example a minimum observation distance of 0.05 meter and a maximum observation distance of 100 meters; since the objects seen by the virtual camera are all relatively close, picture flicker in overlapping areas essentially does not occur.
In step S103, when the three-dimensional scene is a close-range display scene, the first preset minimum observation distance and the first preset maximum observation distance are obtained, and the depth value of the target object is calculated by combining them with the distance between the target object and the virtual camera. In the calculation, Fd denotes the depth value, N the minimum observation distance, F the maximum observation distance, and z (N <= z <= F) the distance between the target object and the virtual camera; the mapping is such that Fd = 0.0 when z = N and Fd = 1.0 when z = F.
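A standard depth mapping that satisfies exactly these boundary conditions is the reciprocal (perspective) mapping used by conventional depth buffers; the sketch below adopts it as an assumption, not as the patent's verified formula.

```python
def depth_value(z, n, f):
    # Assumed mapping: Fd = (1/n - 1/z) / (1/n - 1/f).
    # It yields 0.0 at z == n and 1.0 at z == f, matching the stated conditions.
    assert n <= z <= f, "z must lie between the minimum and maximum observation distances"
    return (1.0 / n - 1.0 / z) / (1.0 / n - 1.0 / f)

print(depth_value(0.05, 0.05, 100.0))   # 0.0
print(depth_value(100.0, 0.05, 100.0))  # 1.0
```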
Step S104: and acquiring the height of the virtual camera from the ground, determining the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height, and calculating the depth value of the target object by combining the distance, the current minimum observation distance and the current maximum observation distance.
In step S104, in a distant-range display scene, because the earth is spherical, the farthest distance observed by the virtual camera is related to the height of the virtual camera above the ground in the three-dimensional scene and to the curvature of the earth: the greater the height, the farther the farthest observable distance. Therefore, in this embodiment, when the three-dimensional scene is a distant-range display scene, the height of the virtual camera above the ground in the three-dimensional scene is obtained, and the optimal current minimum observation distance and current maximum observation distance are determined based on that height for rendering the three-dimensional scene, avoiding picture flicker in areas where objects overlap.
According to the method for determining object depth in a three-dimensional scene of this embodiment, it is first confirmed whether the three-dimensional scene is a close-range display scene or a distant-range display scene. When it is a close-range display scene, a fixed first preset minimum observation distance and first preset maximum observation distance are set, and the depth values of objects in the scene are calculated from these two preset parameters; because the distances seen by the virtual camera are very short, flicker in overlapping picture areas does not occur. When it is a distant-range display scene, the greater the height, the farther the farthest visible distance in the scene, so the current minimum observation distance and current maximum observation distance are determined from the height of the virtual camera above the ground in the three-dimensional scene; the depth values calculated from the current minimum and maximum observation distances therefore have higher precision, and the problem of flicker in overlapping picture areas is avoided.
Further, on the basis of the first embodiment, in other embodiments, after calculating the depth value of the target object, the method further includes the following steps:
and rendering the three-dimensional scene to a two-dimensional screen according to the depth value of the target object.
In this embodiment, the three-dimensional scene is rendered onto the two-dimensional screen by using the depth value of the target object calculated in the first embodiment, so that flicker of the two-dimensional picture is avoided.
Fig. 3 is a flowchart of a method for determining depth of an object in a three-dimensional scene according to a second embodiment of the present invention. It should be noted that, if there are substantially the same results, the method of the present invention is not limited to the flow sequence shown in fig. 3. As shown in fig. 3, the method comprises the steps of:
step S201: and obtaining the distance between the target object and the virtual camera in the three-dimensional scene.
In this embodiment, step S201 in fig. 3 is similar to step S101 in fig. 1, and is not described here again for brevity.
Step S202: detecting whether the three-dimensional scene is a close-range display scene or a distant-range display scene. When the three-dimensional scene is a close-up display scene, executing step S203; when the three-dimensional scene is a distant view display scene, steps S204 to S208 are executed.
In this embodiment, step S202 in fig. 3 is similar to step S102 in fig. 1, and is not described herein for brevity.
Step S203: and calculating the depth value of the target object by using the distance, the preset first minimum observation distance and the preset first maximum observation distance of the virtual camera.
In this embodiment, step S203 in fig. 3 is similar to step S103 in fig. 1, and is not described herein for brevity.
Step S204: the height of the virtual camera from the ground is obtained.
Step S205: and judging whether the height is smaller than a preset threshold value. If yes, go to step S206; if not, step S207 is performed.
The preset threshold is set in advance.
Step S206: and taking the preset second preset minimum observing distance and the preset second maximum observing distance as the current minimum observing distance and the current maximum observing distance of the virtual camera.
In step S206, the second preset minimum observation distance and the second preset maximum observation distance are set in advance. In a distant-range display scene, because the earth is spherical, the farthest distance the virtual camera can see is related to the curvature of the earth and the height of the virtual camera, and can be calculated as follows:
F=3569.6*sqrt(H);
where F is the farthest distance and H is the height of the virtual camera. From this formula, when H = 1 meter the farthest distance seen by the virtual camera is 3569.6 meters. In an urban scene, by the nature of such scenes, a virtual camera near the ground can observe only a few hundred meters because buildings block the view; since everything observed is close, flicker essentially does not occur, so a fixed pair of second preset minimum and maximum observation distances can be set, for example a minimum observation distance of 0.1 meter and a maximum observation distance of 3569 meters.
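As an aside on the constant: 3569.6 matches the standard straight-line horizon-distance approximation for a sphere of the earth's radius, which is a plausible origin for the formula, although the patent does not state the derivation:
d = sqrt(2*R*H);
where R is the earth's radius, approximately 6371000 meters, so sqrt(2*R) = sqrt(12742000) ≈ 3569.6, giving d ≈ 3569.6*sqrt(H) meters for H in meters.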
Step S207: the current minimum observation distance and the current maximum observation distance of the virtual camera are determined from a pre-configured parameter table by using the height.
In step S207, a parameter table is constructed before any depth values are calculated. The table records a plurality of heights and, for each height, the optimal minimum observation distance and maximum observation distance in a distant-range display scene. Specifically, referring to fig. 4, constructing the parameter table comprises the following steps:
step S301: a plurality of preset heights are obtained.
Step S302: and respectively calculating the corresponding minimum observation distance and maximum observation distance at each height.
In step S302, the calculation formula of the minimum observation distance is:
N=0.1*H;
wherein N is the minimum observation distance, and H is the height value;
the calculation formula of the maximum observation distance is as follows:
F=3569.6*sqrt(H);
wherein F is the maximum viewing distance.
Step S303: and rendering the test scene based on the minimum observation distance and the maximum observation distance corresponding to each height to obtain a display picture.
In step S303, after the minimum observation distance and the maximum observation distance corresponding to each height are calculated, a test scene is rendered according to each group of the minimum observation distance and the maximum observation distance, so as to obtain a display picture, where the test scene is a distant view display scene.
Step S304: and detecting whether the display picture has depth conflict or cuts out the near object. If depth conflict occurs or the nearby object is cut, step S305 is performed; if no depth conflict occurs and the nearby object is not clipped, step S306 is performed.
In step S304, when a depth conflict occurs in the display screen or an object near the virtual camera is cut out, it is indicated that a set of minimum observation distance and maximum observation distance corresponding to the display screen is not optimal, and step S305 is executed at this time, otherwise step S306 is executed.
Step S305: and readjusting the minimum observation distance and the maximum observation distance corresponding to the height until the display picture is displayed normally, and saving the height, the corresponding minimum observation distance and the corresponding maximum observation distance.
In step S305, the minimum observation distance and the maximum observation distance at the height are readjusted until the depth conflict of the display screen no longer occurs or the object near the virtual camera is no longer cut out, and then the height and the adjusted minimum observation distance and the adjusted maximum observation distance are saved.
Step S306: the height, and corresponding minimum and maximum viewing distances, are saved.
Step S307: and constructing a parameter table according to the stored height and the corresponding minimum observation distance and maximum observation distance.
In this embodiment, after the minimum and maximum observation distances corresponding to the different heights are calculated, each height's pair of distances is verified, so that values which do not meet the standard are adjusted out; this guarantees that the minimum and maximum observation distances ultimately found through the parameter table are optimal.
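A minimal sketch of how such a table might be built and consulted, under stated assumptions: the sample heights, the nearest-height lookup rule, and the idea that hand-adjusted values simply overwrite the computed ones are all illustrative choices, since the patent leaves them open.

```python
import math

def initial_distances(h):
    # Starting values per the formulas above (step S302).
    return 0.1 * h, 3569.6 * math.sqrt(h)

# Assumed final table (step S307): height -> (min, max) observation distance.
# Any entry that showed depth conflict or clipped near objects in the test
# render (steps S303-S306) would have been adjusted by hand before saving.
PARAM_TABLE = {h: initial_distances(h) for h in (100.0, 500.0, 1000.0, 5000.0)}

def lookup_distances(height, table=PARAM_TABLE):
    # Assumed lookup rule: take the entry whose height is closest to the query.
    nearest = min(table, key=lambda h: abs(h - height))
    return table[nearest]

print(lookup_distances(800.0))  # falls back to the 1000.0 entry
```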
Step S208: and calculating the depth value of the target object by combining the distance, the current minimum observation distance and the current maximum observation distance.
According to the method for determining object depth in a three-dimensional scene of this embodiment, building on the first embodiment, in distant-range display the current minimum observation distance and the current maximum observation distance are determined from the height of the virtual camera. When the height is smaller than the preset threshold, the second preset minimum observation distance and second preset maximum observation distance are taken as the current values; when the height is greater than or equal to the preset threshold, the current minimum and maximum observation distances of the virtual camera are determined by looking the height up in the parameter table. The parameter table is prepared in advance through repeated experiments, so the minimum and maximum observation distances recorded for each height ensure that no picture flicker occurs when the three-dimensional scene is rendered into a two-dimensional picture, giving the best user experience. Moreover, because the minimum and maximum observation distances are obtained by table lookup, they need not be recomputed for every picture, which increases processing speed and further improves image-processing performance.
Fig. 5 is a flowchart of a method for determining depth of an object in a three-dimensional scene according to a third embodiment of the present invention. It should be noted that, if there are substantially the same results, the method of the present invention is not limited to the flow sequence shown in fig. 5. As shown in fig. 5, the method comprises the steps of:
step S401: and acquiring the distance between the target object and the virtual camera in the three-dimensional scene and the height between the virtual camera and the ground.
In step S401, the distance between the target object and the virtual camera in the three-dimensional scene and the height between the virtual camera and the ground are obtained in real time, wherein the distance between the target object and the virtual camera can be obtained by constructing a three-dimensional space coordinate system, determining the coordinates of the target object and the virtual camera in the three-dimensional space coordinate system, and then calculating the distance between the target object and the virtual camera.
Step S402: the current minimum and maximum viewing distances of the virtual camera are calculated in real time based on the altitude.
In step S402, the calculation of the current minimum observation distance and the current maximum observation distance refer to step S302 in the embodiment of constructing the parameter table, which is not described herein.
Step S403: and calculating the depth value of the target object by using the distance, the current minimum observation distance and the current maximum observation distance.
In step S403, the calculation of the depth value is as described in step S103 of the first embodiment and is not repeated here.
According to the method for determining object depth in a three-dimensional scene of this embodiment, the height of the virtual camera is obtained in real time and the current minimum observation distance and current maximum observation distance are calculated from that height, so the optimal minimum and maximum observation distances are computed precisely for every position of the virtual camera, ensuring the best rendering and display effect under the current minimum and maximum observation distances.
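Putting the pieces of this embodiment together, a compact end-to-end sketch follows. The threshold value, the low-height preset pair (the branch follows claim 4) and the depth mapping are all assumptions carried over from the earlier notes, not values fixed by the patent.

```python
import math

HEIGHT_THRESHOLD = 1.0         # assumed preset threshold, in meters (cf. step S205)
SECOND_PRESET = (0.1, 3569.0)  # assumed fixed pair for low heights (cf. step S206)

def frame_depth(z, height):
    # Steps S401-S403: derive the observation distances from the camera height
    # in real time, then map the camera distance z to a depth value.
    if height < HEIGHT_THRESHOLD:
        n, f = SECOND_PRESET
    else:
        n, f = 0.1 * height, 3569.6 * math.sqrt(height)
    z = min(max(z, n), f)  # clamp z into [n, f]
    return (1.0 / n - 1.0 / z) / (1.0 / n - 1.0 / f)  # assumed mapping, as above

print(frame_depth(2500.0, 900.0))
```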
Fig. 6 shows a schematic structural diagram of an object depth determining apparatus in a three-dimensional scene according to a first embodiment of the present invention. As shown in fig. 6, in the present embodiment, the object depth determining apparatus 10 in the three-dimensional scene includes a first acquisition module 11, a detection module 12, a first calculation module 13, and a second calculation module 14.
A first obtaining module 11, configured to obtain a distance between a target object and a virtual camera in a three-dimensional scene;
the detection module 12 is coupled to the first acquisition module 11, and is configured to detect whether the three-dimensional scene is a close-range display scene or a distant-range display scene;
a first calculation module 13, coupled to the detection module 12, for calculating a depth value of the target object using the distance and a first preset minimum observation distance and a first preset maximum observation distance of the virtual camera when the three-dimensional scene is a close-range display scene;
the second calculation module 14 is coupled to the detection module 12, and is configured to obtain a height of the virtual camera from the ground when the three-dimensional scene is a long-range display scene, determine a current minimum observation distance and a current maximum observation distance of the virtual camera based on the height, and calculate a depth value of the target object by combining the distance, the current minimum observation distance and the current maximum observation distance.
Optionally, the operation by which the second calculation module 14 determines the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height further includes: judging whether the height is smaller than a preset threshold value; if yes, taking a second preset minimum observation distance and a second preset maximum observation distance as the current minimum observation distance and the current maximum observation distance of the virtual camera; if not, determining the current minimum observation distance and the current maximum observation distance of the virtual camera from a pre-configured parameter table by using the height. The module also performs the operation of constructing the parameter table, which includes: acquiring a plurality of preset heights; respectively calculating the corresponding minimum observation distance and maximum observation distance at each height, where the minimum observation distance is calculated as N=0.1*H (N being the minimum observation distance and H the height) and the maximum observation distance is calculated as F=3569.6*sqrt(H) (F being the maximum observation distance); rendering the test scene based on the minimum and maximum observation distances corresponding to each height to obtain a display picture; detecting whether the display picture exhibits depth conflict or clips nearby objects; if depth conflict occurs or nearby objects are clipped, readjusting the minimum and maximum observation distances corresponding to that height until the display picture displays normally, and then saving the height with its corresponding minimum and maximum observation distances; if no depth conflict occurs and nearby objects are not clipped, saving the height with its corresponding minimum and maximum observation distances; and constructing the parameter table from the saved heights and their corresponding minimum and maximum observation distances.
Optionally, after calculating the depth value of the target object, the first calculation module 13 or the second calculation module 14 further performs: rendering the three-dimensional scene onto a two-dimensional screen according to the depth value of the target object.
Fig. 7 shows a schematic structural diagram of an object depth determining apparatus in a three-dimensional scene according to a second embodiment of the present invention. As shown in fig. 7, in the present embodiment, the object depth determining apparatus 20 in the three-dimensional scene includes a second acquisition module 21, a parameter calculation module 22, and a depth value calculation module 23.
A second obtaining module 21, configured to obtain a distance between a target object and a virtual camera in the three-dimensional scene and a height between the virtual camera and the ground;
a parameter calculation module 22, coupled to the second acquisition module 21, for calculating a current minimum observation distance and a current maximum observation distance of the virtual camera based on the altitude in real time;
the depth value calculating module 23 is coupled to the parameter calculating module 22, and is configured to calculate a depth value of the target object using the distance, the current minimum observation distance, and the current maximum observation distance.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the invention. As shown in fig. 8, the terminal 60 includes a processor 61 and a memory 62 coupled to the processor 61.
The memory 62 stores program instructions for implementing the method for determining the depth of an object in a three-dimensional scene according to any of the embodiments described above.
The processor 61 is arranged to execute program instructions stored by the memory 62 to calculate depth values of objects in the three-dimensional scene.
The processor 61 may also be referred to as a CPU (Central Processing Unit). The processor 61 may be an integrated circuit chip with signal processing capabilities. The processor 61 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a storage medium according to an embodiment of the present invention. The storage medium of this embodiment stores a program file 71 capable of implementing all of the methods described above. The program file 71 may be stored in the storage medium in the form of a software product and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, or a terminal device such as a computer, a server, a mobile phone or a tablet.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or as software functional units. The foregoing is only a description of embodiments of the present application and does not thereby limit its patent scope; any equivalent structure or equivalent process made using the contents of this specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, likewise falls within the patent protection scope of the present application.

Claims (8)

1. A method for determining depth of an object in a three-dimensional scene, the method comprising:
obtaining the distance between a target object and a virtual camera in a three-dimensional scene;
detecting whether the three-dimensional scene is a close-range display scene or a distant-range display scene;
when the three-dimensional scene is the close-range display scene, calculating the depth value of the target object by using the distance, a first preset minimum observation distance and a first preset maximum observation distance of the virtual camera;
when the three-dimensional scene is the long-range scene, acquiring the height of the virtual camera from the ground, determining the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height, and calculating the depth value of the target object by combining the distance, the current minimum observation distance and the current maximum observation distance;
wherein the step of determining a current minimum viewing distance and a current maximum viewing distance of the virtual camera based on the altitude comprises:
judging whether the height is smaller than a preset threshold value or not;
if yes, taking a second preset minimum observation distance and a second preset maximum observation distance as the current minimum observation distance and the current maximum observation distance of the virtual camera;
if not, determining the current minimum observation distance and the current maximum observation distance of the virtual camera from a pre-configured parameter table by utilizing the height;
the method further comprises the step of constructing the parameter table, wherein the step of constructing the parameter table comprises the following steps:
acquiring a plurality of preset heights;
respectively calculating a corresponding minimum observation distance and a corresponding maximum observation distance under each height;
rendering the test scene based on the minimum observation distance and the maximum observation distance corresponding to each height to obtain a display picture;
detecting whether the display picture has depth conflict or cuts out a nearby object;
if depth conflict occurs or near objects are cut off, the minimum observation distance and the maximum observation distance corresponding to the height are readjusted until the display picture is displayed normally, and then the height, the corresponding minimum observation distance and the maximum observation distance are saved;
if depth conflict does not occur and the near object is not cut, the height, the corresponding minimum observation distance and the corresponding maximum observation distance are saved;
and constructing the parameter table according to the stored height and the corresponding minimum observation distance and maximum observation distance.
2. The method for determining the depth of an object in a three-dimensional scene according to claim 1, wherein the calculation formula of the minimum observation distance is:
N=0.1*H;
wherein N is the minimum observation distance and H is the height;
the calculation formula of the maximum observation distance is as follows:
F=3569.6*sqrt(H);
wherein F is the maximum viewing distance.
3. The method of determining object depth in a three-dimensional scene according to claim 1, further comprising:
and rendering the three-dimensional scene to the two-dimensional screen according to the depth value of the target object.
4. A method for determining depth of an object in a three-dimensional scene, the method comprising:
acquiring the distance between a target object and a virtual camera in a three-dimensional scene and the height between the virtual camera and the ground;
calculating a current minimum observation distance and a current maximum observation distance of the virtual camera in real time based on the height;
calculating a depth value of the target object by using the distance, the current minimum observation distance and the current maximum observation distance;
the step of calculating the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height in real time comprises the following steps:
judging whether the height is smaller than a preset threshold value or not;
if yes, taking a second preset minimum observation distance and a second preset maximum observation distance as the current minimum observation distance and the current maximum observation distance of the virtual camera;
if not, determining the current minimum observation distance and the current maximum observation distance of the virtual camera from a pre-configured parameter table by utilizing the height;
the method further comprises the step of constructing the parameter table, wherein the step of constructing the parameter table comprises the following steps:
acquiring a plurality of preset heights;
respectively calculating a corresponding minimum observation distance and a corresponding maximum observation distance under each height;
rendering the test scene based on the minimum observation distance and the maximum observation distance corresponding to each height to obtain a display picture;
detecting whether the display picture has depth conflict or cuts out a nearby object;
if depth conflict occurs or near objects are cut off, the minimum observation distance and the maximum observation distance corresponding to the height are readjusted until the display picture is displayed normally, and then the height, the corresponding minimum observation distance and the maximum observation distance are saved;
if depth conflict does not occur and the near object is not cut, the height, the corresponding minimum observation distance and the corresponding maximum observation distance are saved;
and constructing the parameter table according to the stored height and the corresponding minimum observation distance and maximum observation distance.
5. An object depth determination apparatus in a three-dimensional scene, the apparatus comprising:
the first acquisition module is used for acquiring the distance between a target object and the virtual camera in the three-dimensional scene;
the detection module is coupled with the first acquisition module and is used for detecting whether the three-dimensional scene is a close-range display scene or a distant-range display scene;
the first calculation module is coupled with the detection module and is used for calculating the depth value of the target object by utilizing the distance, the first preset minimum observation distance and the first preset maximum observation distance of the virtual camera when the three-dimensional scene is the close-range display scene;
the second calculation module is coupled with the detection module and is used for acquiring the height of the virtual camera from the ground when the three-dimensional scene is the long-range display scene, determining the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height, and calculating the depth value of the target object by combining the distance, the current minimum observation distance and the current maximum observation distance; judging whether the height is smaller than a preset threshold value; if yes, taking a second preset minimum observation distance and a second preset maximum observation distance as the current minimum observation distance and the current maximum observation distance of the virtual camera; if not, determining the current minimum observation distance and the current maximum observation distance of the virtual camera from a pre-configured parameter table by using the height, wherein the module further performs the operation of constructing the parameter table, and the operation of constructing the parameter table comprises: acquiring a plurality of preset heights; respectively calculating a corresponding minimum observation distance and a corresponding maximum observation distance under each height; rendering the test scene based on the minimum observation distance and the maximum observation distance corresponding to each height to obtain a display picture; detecting whether a depth conflict occurs on the display picture or a nearby object is clipped; if depth conflict occurs or nearby objects are clipped, readjusting the minimum observation distance and the maximum observation distance corresponding to the height until the display picture is displayed normally, and then saving the height and the corresponding minimum observation distance and maximum observation distance; if depth conflict does not occur and nearby objects are not clipped, saving the height and the corresponding minimum observation distance and maximum observation distance; and constructing the parameter table according to the saved heights and the corresponding minimum observation distances and maximum observation distances.
6. An object depth determination apparatus in a three-dimensional scene, the apparatus comprising:
the second acquisition module is used for acquiring the distance between a target object and the virtual camera in the three-dimensional scene and the height between the virtual camera and the ground;
a parameter calculation module, coupled to the second acquisition module, for calculating a current minimum observation distance and a current maximum observation distance of the virtual camera in real time based on the altitude; judging whether the height is smaller than a preset threshold value or not; if yes, taking a preset second preset minimum observation distance and a preset second maximum observation distance as a current minimum observation distance and a current maximum observation distance of the virtual camera; if not, determining the current minimum observation distance and the current maximum observation distance of the virtual camera from a pre-configured parameter table by utilizing the height; the method further comprises the step of constructing the parameter table, wherein the step of constructing the parameter table comprises the following steps: acquiring a plurality of preset heights; respectively calculating a corresponding minimum observation distance and a corresponding maximum observation distance under each height; rendering the test scene based on the minimum observation distance and the maximum observation distance corresponding to each height to obtain a display picture; detecting whether the display picture has depth conflict or cuts out a nearby object; if depth conflict occurs or near objects are cut off, the minimum observation distance and the maximum observation distance corresponding to the height are readjusted until the display picture is displayed normally, and then the height, the corresponding minimum observation distance and the maximum observation distance are saved; if depth conflict does not occur and the near object is not cut, the height, the corresponding minimum observation distance and the corresponding maximum observation distance are saved; constructing the parameter table according to the stored height and the corresponding minimum observation distance and maximum observation distance;
and the depth value calculation module is coupled with the parameter calculation module and is used for calculating the depth value of the target object by using the distance, the current minimum observation distance and the current maximum observation distance.
7. A terminal, comprising a processor and a memory coupled to the processor, wherein:
the memory stores program instructions for implementing the method for determining object depth in a three-dimensional scene according to any one of claims 1 to 6; and
the processor is configured to execute the program instructions stored in the memory to calculate depth values for objects in the three-dimensional scene.
8. A storage medium storing a program file capable of implementing the method for determining object depth in a three-dimensional scene according to any one of claims 1 to 6.
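As a purely illustrative aside (not part of the claims or the patented implementation), the observation-distance selection and depth calculation recited in claims 5 and 6 can be sketched in a few lines of Python. The threshold value, the table contents, and the perspective depth formula below are all assumptions made for the example, not values taken from the patent.

from bisect import bisect_right

# Assumed values for illustration only.
HEIGHT_THRESHOLD = 1000.0                        # assumed preset threshold (scene units)
SECOND_MIN_DIST, SECOND_MAX_DIST = 0.5, 5000.0   # assumed second preset distance pair

# Assumed pre-configured parameter table, sorted by height:
# (height, (minimum observation distance, maximum observation distance))
PARAM_TABLE = [
    (1000.0,  (1.0,   10000.0)),
    (5000.0,  (10.0,  50000.0)),
    (20000.0, (100.0, 200000.0)),
]

def observation_distances(height):
    # Below the threshold, fall back to the second preset pair.
    if height < HEIGHT_THRESHOLD:
        return SECOND_MIN_DIST, SECOND_MAX_DIST
    # Otherwise take the table entry whose height is nearest below the camera height.
    keys = [h for h, _ in PARAM_TABLE]
    idx = max(0, bisect_right(keys, height) - 1)
    return PARAM_TABLE[idx][1]

def depth_value(distance, near, far):
    # One common normalized depth mapping for a perspective projection:
    # 0 at the near plane, 1 at the far plane; distance assumed >= near > 0.
    return (far * (distance - near)) / (distance * (far - near))

# Example: camera 8000 units above ground, target object 3000 units away.
near, far = observation_distances(8000.0)
print(depth_value(3000.0, near, far))

The point of the branch is that a low camera keeps a fixed, tight near/far pair for close-range precision, while a high camera widens the pair via the table; shrinking the near-to-far span is what concentrates depth-buffer precision and suppresses the flicker described in the abstract.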
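The parameter-table construction operation recited in the same claims (preset heights, test render, readjust until no depth conflict or near-object clipping, then save) can likewise be sketched. The callbacks render_test_scene, has_depth_conflict, clips_near_object, initial_distances, and adjust below are hypothetical stand-ins for a real rendering and inspection step; the claims do not prescribe them.

def build_parameter_table(preset_heights, initial_distances,
                          render_test_scene, has_depth_conflict,
                          clips_near_object, adjust):
    # Returns a list of (height, (min_dist, max_dist)) entries sorted by height.
    table = []
    for height in preset_heights:
        near, far = initial_distances(height)   # first estimate for this height
        frame = render_test_scene(height, near, far)
        # Readjust until the test frame shows neither a depth conflict
        # (z-fighting) nor a clipped nearby object.
        while has_depth_conflict(frame) or clips_near_object(frame):
            near, far = adjust(near, far)
            frame = render_test_scene(height, near, far)
        table.append((height, (near, far)))      # save the validated pair
    table.sort(key=lambda entry: entry[0])
    return table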
CN202010032598.8A 2020-01-13 2020-01-13 Method, device, terminal and storage medium for determining object depth in three-dimensional scene Active CN111275611B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010032598.8A CN111275611B (en) 2020-01-13 2020-01-13 Method, device, terminal and storage medium for determining object depth in three-dimensional scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010032598.8A CN111275611B (en) 2020-01-13 2020-01-13 Method, device, terminal and storage medium for determining object depth in three-dimensional scene

Publications (2)

Publication Number Publication Date
CN111275611A (en) 2020-06-12
CN111275611B (en) 2024-02-06

Family

ID=71000187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010032598.8A Active CN111275611B (en) 2020-01-13 2020-01-13 Method, device, terminal and storage medium for determining object depth in three-dimensional scene

Country Status (1)

Country Link
CN (1) CN111275611B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112686989A (en) * 2021-01-04 2021-04-20 北京高因科技有限公司 Three-dimensional space roaming implementation method
CN113941147A (en) * 2021-10-25 2022-01-18 腾讯科技(深圳)有限公司 Picture generation method, device, equipment and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9041774B2 (en) * 2011-01-07 2015-05-26 Sony Computer Entertainment America, LLC Dynamic adjustment of predetermined three-dimensional video settings based on scene content

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011029209A2 (en) * 2009-09-10 2011-03-17 Liberovision Ag Method and apparatus for generating and processing depth-enhanced images
CN105894567A (en) * 2011-01-07 2016-08-24 索尼互动娱乐美国有限责任公司 Scaling pixel depth values of user-controlled virtual object in three-dimensional scene
CN106228613A (en) * 2016-06-12 2016-12-14 深圳超多维光电子有限公司 Construction method, device and the stereoscopic display device of a kind of virtual three-dimensional scene
CN109829981A (en) * 2019-02-16 2019-05-31 深圳市未来感知科技有限公司 Three-dimensional scenic rendering method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Li Pengfei; Zhou Yang; Xu Jiwei; Hu Xiaofei; Xue Xianguang. Research on improving the display precision of cross-scale three-dimensional scenes. Journal of System Simulation. 2016, (Issue 09), full text. *
Wang Jincheng; Jin Yicheng; Cao Shilian. Simulating radar images using three-dimensional scene rendering technology. Journal of Dalian Maritime University. 2014, (Issue 04), full text. *

Also Published As

Publication number Publication date
CN111275611A (en) 2020-06-12

Similar Documents

Publication Publication Date Title
US20180324415A1 (en) Real-time automatic vehicle camera calibration
US8654151B2 (en) Apparatus and method for providing augmented reality using synthesized environment map
CN107646109B (en) Managing feature data for environment mapping on an electronic device
CN111639626A (en) Three-dimensional point cloud data processing method and device, computer equipment and storage medium
US8369578B2 (en) Method and system for position determination using image deformation
TW200912512A (en) Augmenting images for panoramic display
JP2018503066A (en) Accuracy measurement of image-based depth detection system
CN111275611B (en) Method, device, terminal and storage medium for determining object depth in three-dimensional scene
CN110084797B (en) Plane detection method, plane detection device, electronic equipment and storage medium
US9595125B2 (en) Expanding a digital representation of a physical plane
CN112529097B (en) Sample image generation method and device and electronic equipment
CN116363082A (en) Collision detection method, device, equipment and program product for map elements
CN108197531A (en) A kind of road curve detection method, device and terminal
CN110096143B (en) Method and device for determining attention area of three-dimensional model
US20130147801A1 (en) Electronic apparatus, method for producing augmented reality image, and computer-readable recording medium
CN115334247B (en) Camera module calibration method, visual positioning method and device and electronic equipment
US10339702B2 (en) Method for improving occluded edge quality in augmented reality based on depth camera
CN114723894B (en) Three-dimensional coordinate acquisition method and device and electronic equipment
KR101912241B1 (en) Augmented reality service providing apparatus for providing an augmented image relating to three-dimensional shape of real estate and method for the same
CN112529769B (en) Method and system for adapting two-dimensional image to screen, computer equipment and storage medium
US20230326147A1 (en) Helper data for anchors in augmented reality
CN112150527B (en) Measurement method and device, electronic equipment and storage medium
CN115004683A (en) Imaging apparatus, imaging method, and program
EP3023932A1 (en) Method and device for correction of depth values in a depth map
CN104410793B (en) A kind of image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant