CN111275611A - Method, device, terminal and storage medium for determining depth of object in three-dimensional scene

Method, device, terminal and storage medium for determining depth of object in three-dimensional scene

Info

Publication number
CN111275611A
Authority
CN
China
Prior art keywords
observation distance
distance
virtual camera
height
scene
Prior art date
Legal status
Granted
Application number
CN202010032598.8A
Other languages
Chinese (zh)
Other versions
CN111275611B (en)
Inventor
张烨妮
刘华
陈继超
Current Assignee
Shenzhen Huaorange Digital Technology Co ltd
Original Assignee
Shenzhen Huaorange Digital Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Huaorange Digital Technology Co ltd filed Critical Shenzhen Huaorange Digital Technology Co ltd
Priority to CN202010032598.8A
Publication of CN111275611A
Application granted
Publication of CN111275611B
Legal status: Active

Classifications

    • G06T3/067
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a method, a device, a terminal and a storage medium for determining the depth of an object in a three-dimensional scene. The method comprises the following steps: acquiring the distance between a target object and a virtual camera in a three-dimensional scene; detecting whether the three-dimensional scene is a close-range display scene or a distant-view display scene; when the three-dimensional scene is a close-range display scene, calculating the depth value of the target object by using the distance and a first preset minimum observation distance and a first preset maximum observation distance of the virtual camera; and when the three-dimensional scene is a distant-view display scene, obtaining the height of the virtual camera above the ground, determining the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height, and calculating the depth value of the target object by combining the distance, the current minimum observation distance and the current maximum observation distance. In this way, the accuracy of the depth buffer in overlapping picture areas of a three-dimensional display scene can be improved, and flicker of the picture in the overlapping areas is avoided.

Description

Method, device, terminal and storage medium for determining depth of object in three-dimensional scene
Technical Field
The present application relates to the field of three-dimensional display technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for determining a depth of an object in a three-dimensional scene.
Background
With the development of science and technology, three-dimensional scenes are increasingly used across industries. In applications with three-dimensional scenes, a computer needs to convert the data describing the three-dimensional scene into two-dimensional data for viewing on an electronic display screen. This process of generating a two-dimensional image from a three-dimensional scene is generally referred to as rendering, which is typically divided into three stages: an application stage, a geometry stage and a rasterization stage. The rasterization stage interpolates across the vertices of the primitives produced by the previous stage to generate pixels on the screen and renders the final image.
The task of the rasterization stage is mainly to determine which pixels of each rendering primitive should be drawn on the screen. In this process, which object's rendering is displayed at a given pixel is decided by the rendered depth value. In some cases two surfaces in the scene overlap, and when the rendered depth values of the two objects are not precise enough to distinguish which surface should be displayed in front, the renderer displays the two surfaces at random, so that the converted two-dimensional image flickers continuously in the overlapping area of the two surfaces.
Existing schemes mainly address this problem in two ways. One is to improve the precision of the depth buffer by increasing the number of bits of the depth value; however, current graphics cards support depth values of at most 32 bits, so the precision of the depth buffer can only reach that of a single-precision floating-point number. The other is to improve the precision of the buffer by raising the minimum observation distance of the virtual camera and lowering its maximum observation distance. Both approaches have drawbacks: the first is limited by hardware conditions and is accurate only to about seven digits after the decimal point; in the second, objects closer to the virtual camera than the minimum observation distance or farther than the maximum observation distance are clipped, which shrinks the visible range of the scene and degrades the user experience.
Disclosure of Invention
The application provides a method, a device, a terminal and a storage medium for determining the depth of an object in a three-dimensional scene, so as to solve the problem that the low accuracy of depth values in existing three-dimensional scenes causes overlapping pictures to flicker.
In order to solve the technical problem, the application adopts a technical scheme in which a method for determining the depth of an object in a three-dimensional scene is provided, the method comprising the following steps: acquiring the distance between a target object and a virtual camera in a three-dimensional scene; detecting whether the three-dimensional scene is a close-range display scene or a distant-view display scene; when the three-dimensional scene is a close-range display scene, calculating the depth value of the target object by using the distance and a first preset minimum observation distance and a first preset maximum observation distance of the virtual camera; and when the three-dimensional scene is a distant-view display scene, obtaining the height of the virtual camera above the ground, determining the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height, and calculating the depth value of the target object by combining the distance, the current minimum observation distance and the current maximum observation distance.
As a further improvement of the present invention, the step of determining the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height comprises: judging whether the height is smaller than a preset threshold; if so, taking a second preset minimum observation distance and a second preset maximum observation distance as the current minimum observation distance and the current maximum observation distance of the virtual camera; if not, determining the current minimum observation distance and the current maximum observation distance of the virtual camera from a pre-configured parameter table using the height.
As a further improvement of the present invention, the method further comprises a step of constructing the parameter table, which comprises: acquiring a plurality of preset heights; calculating the corresponding minimum observation distance and maximum observation distance at each height; rendering a test scene based on the minimum and maximum observation distances corresponding to each height to obtain a display picture; detecting whether the display picture has a depth conflict or cuts off nearby objects; if a depth conflict occurs or a nearby object is cut off, readjusting the minimum and maximum observation distances for that height until the picture displays normally, and then saving the height together with the corresponding minimum and maximum observation distances; if no depth conflict occurs and no nearby object is cut off, saving the height together with the corresponding minimum and maximum observation distances; and constructing the parameter table from the saved heights and their corresponding minimum and maximum observation distances.
As a further improvement of the present invention, the calculation formula of the minimum observation distance is:
N=0.1*H;
wherein N is the minimum observation distance and H is the height;
the maximum observation distance is calculated by the formula:
F=3569.6*sqrt(H);
wherein F is the maximum viewing distance.
As a further improvement of the present invention, the method further comprises:
and rendering the three-dimensional scene to a two-dimensional screen according to the depth value of the target object.
In order to solve the above technical problem, another technical solution adopted by the present application is: a method for determining the depth of an object in a three-dimensional scene is provided, and the method comprises the following steps: acquiring the distance between a target object and a virtual camera in a three-dimensional scene and the height between the virtual camera and the ground; calculating the current minimum observation distance and the current maximum observation distance of the virtual camera in real time based on the height; and calculating the depth value of the target object by using the distance, the current minimum observation distance and the current maximum observation distance.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided an apparatus for determining depth of an object in a three-dimensional scene, the apparatus comprising: the first acquisition module is used for acquiring the distance between a target object and the virtual camera in the three-dimensional scene; the detection module is coupled with the first acquisition module and used for detecting whether the three-dimensional scene is a close view display scene or a long view display scene; the first calculation module is coupled with the detection module and used for calculating the depth value of the target object by using the distance and the first preset minimum observation distance and the first preset maximum observation distance of the virtual camera when the three-dimensional scene is a close-range display scene; and the second calculation module is coupled with the detection module and used for acquiring the height of the virtual camera from the ground when the three-dimensional scene is a distant view display scene, determining the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height, and calculating the depth value of the target object by combining the distance, the current minimum observation distance and the current maximum observation distance.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided an apparatus for determining depth of an object in a three-dimensional scene, the apparatus comprising: the second acquisition module is used for acquiring the distance between a target object and the virtual camera in the three-dimensional scene and the height between the virtual camera and the ground; the parameter calculation module is coupled with the second acquisition module and used for calculating the current minimum observation distance and the current maximum observation distance of the virtual camera in real time based on the height; and the depth value calculation module is coupled with the parameter calculation module and used for calculating the depth value of the target object by utilizing the distance, the current minimum observation distance and the current maximum observation distance.
In order to solve the above technical problem, the present application adopts another technical solution: a terminal is provided, comprising a processor and a memory coupled to the processor, wherein the memory stores program instructions for implementing the above method for determining the depth of an object in a three-dimensional scene, and the processor is configured to execute the program instructions stored in the memory to calculate depth values of objects in the three-dimensional scene.
In order to solve the above technical problem, the present application adopts another technical solution that: there is provided a storage medium storing a program file capable of implementing the method for determining the depth of an object in a three-dimensional scene.
The beneficial effects of this application are as follows. The invention determines whether the three-dimensional scene is a close-range display scene or a distant-view display scene. When it is a close-range display scene, a fixed first preset minimum observation distance and first preset maximum observation distance are set, and the depth values of objects in the scene are calculated from these two preset parameters; because the distances seen by the virtual camera are very short, no flicker occurs in overlapping areas of the picture. When the three-dimensional scene is a distant-view display scene, the greater the height, the farther the farthest visible distance in the scene; the current minimum observation distance and the current maximum observation distance are therefore determined from the height of the virtual camera above the ground in the three-dimensional scene. The depth values calculated from the current minimum and maximum observation distances have higher precision, which avoids flicker in overlapping areas of the picture.
Drawings
FIG. 1 is a flowchart illustrating a method for determining depth of an object in a three-dimensional scene according to a first embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a virtual camera and a near plane and a far plane according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a method for determining depth of an object in a three-dimensional scene according to a second embodiment of the present invention;
FIG. 4 is a flowchart illustrating the construction of a parameter table according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a method for determining depth of an object in a three-dimensional scene according to a third embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an apparatus for determining depth of an object in a three-dimensional scene according to a first embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an apparatus for determining depth of an object in a three-dimensional scene according to a second embodiment of the present invention;
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a storage medium according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and "third" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any indication of the number of technical features indicated. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one of the feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise. All directional indications (such as up, down, left, right, front, and rear … …) in the embodiments of the present application are only used to explain the relative positional relationship between the components, the movement, and the like in a specific posture (as shown in the drawings), and if the specific posture is changed, the directional indication is changed accordingly. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Fig. 1 is a flowchart illustrating a method for determining depth of an object in a three-dimensional scene according to a first embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 1 if the results are substantially the same. As shown in fig. 1, the method comprises the steps of:
step S101: and acquiring the distance between the target object and the virtual camera in the three-dimensional scene.
In step S101, after the three-dimensional space is constructed, a three-dimensional coordinate system with the virtual camera as the origin is established, the coordinates of the target object in this coordinate system are determined through spatial transformation, and the distance between the target object and the virtual camera is then calculated.
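As an illustration of this step, the following is a minimal sketch assuming the target object's coordinates have already been transformed into the camera-origin coordinate system; the function name and inputs are illustrative, not taken from the patent:

    import math

    def distance_to_camera(x, y, z):
        # With the virtual camera at the origin of the three-dimensional
        # coordinate system, the distance between the target object and the
        # camera is simply the Euclidean norm of the object's coordinates.
        return math.sqrt(x * x + y * y + z * z)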
Step S102: and detecting whether the three-dimensional scene is a close view display scene or a distant view display scene. When the three-dimensional scene is a close-range display scene, executing step S103; when the three-dimensional scene is a distant view display scene, step S104 is performed.
Note that a close-range display scene is one in which the farthest position observable by the virtual camera is close to the camera, for example indoors: in such a scene, occluding objects (such as walls) keep the distance observed by the virtual camera short. A distant-view display scene is one in which the farthest observable position is far from the camera, for example outdoors: with no occluding objects, the virtual camera can see to a considerable distance.
Step S103: and calculating the depth value of the target object by using the distance and a preset first minimum observation distance and a preset first maximum observation distance of the virtual camera.
It should be noted that the first preset minimum observation distance and the first preset maximum observation distance of the virtual camera are set in advance. Generally, referring to fig. 2, when a three-dimensional scene is rendered on a two-dimensional screen, a near plane and a far plane are formed in the three-dimensional space by the position and the opening angle of the virtual camera. Each object between the near plane and the far plane has a Z coordinate value representing the actual depth of the object; this depth value Z is compared with the depth buffer to determine the display order of the picture, and the rendering result is displayed on the two-dimensional plane to finally form the two-dimensional picture seen by the human eye. The distance from the virtual camera to the near plane is the minimum observation distance, and the distance from the virtual camera to the far plane is the maximum observation distance. It should be understood that in a close-range display scene the distance observed by the virtual camera is very short, so a fixed pair of minimum and maximum observation distances may be set, for example a minimum observation distance of 0.05 meter and a maximum observation distance of 100 meters; because the objects seen by the virtual camera are all relatively close, flickering of the picture in overlapping areas essentially does not occur.
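As a brief illustration of the comparison against the depth buffer described above, the following sketch shows a per-pixel depth test in which the fragment nearer to the camera (smaller depth value) wins; the buffer layout and names are illustrative assumptions:

    def depth_test(depth_buffer, color_buffer, x, y, fd, color):
        # depth_buffer is initialized to 1.0 (the far plane) everywhere;
        # a fragment is drawn only if it is nearer than what is already stored.
        if fd < depth_buffer[y][x]:
            depth_buffer[y][x] = fd      # remember the new nearest depth
            color_buffer[y][x] = color   # display this object's fragment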
In step S103, when the three-dimensional scene is a close-range display scene, the first preset minimum observation distance and the first preset maximum observation distance are obtained, and the depth value of the target object is calculated in combination with the distance between the target object and the virtual camera. The specific calculation formula is as follows:
Fd=(1/N-1/z)/(1/N-1/F);
where Fd is the depth value, N is the minimum observation distance, F is the maximum observation distance, and z (N<=z<=F) is the distance from the target object to the virtual camera; when z equals N, Fd equals 0.0, and when z equals F, Fd equals 1.0.
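Assuming the perspective depth mapping written above (the original formula appears only as an image in the patent; the expression used here is the standard depth-buffer mapping and satisfies the stated boundary conditions Fd=0.0 at z=N and Fd=1.0 at z=F), the close-range branch of step S103 can be sketched as follows; names are illustrative:

    def depth_value(z, n, f):
        # z: distance from the target object to the virtual camera (n <= z <= f)
        # n: minimum observation distance (distance to the near plane)
        # f: maximum observation distance (distance to the far plane)
        return (1.0 / n - 1.0 / z) / (1.0 / n - 1.0 / f)

    # With the fixed close-range parameters suggested above (n = 0.05 m,
    # f = 100 m), an object 50 m away maps to about 0.9995, showing how the
    # non-linear mapping concentrates depth precision near the camera.
    print(depth_value(50.0, 0.05, 100.0))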
Step S104: and acquiring the height of the virtual camera from the ground, determining the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height, and calculating the depth value of the target object by combining the distance, the current minimum observation distance and the current maximum observation distance.
In step S104, in a distant-view display scene, because the earth is spherical, the farthest distance observed by the virtual camera is related to the camera's height above the ground in the three-dimensional scene and to the curvature of the earth: the greater the height, the farther the virtual camera can see. In this embodiment, when the three-dimensional scene is a distant-view display scene, the height of the virtual camera above the ground is obtained, and the optimal current minimum observation distance and current maximum observation distance are determined from the height to render the three-dimensional scene, thereby avoiding flicker in areas where objects overlap.
The method for determining the depth of an object in a three-dimensional scene according to the first embodiment of the present invention determines whether the three-dimensional scene is a close-range display scene or a distant-view display scene. When it is a close-range display scene, a fixed first preset minimum observation distance and first preset maximum observation distance are set and the depth values of objects in the scene are calculated from these two preset parameters; because the distances viewed by the virtual camera are very short, no flicker occurs in overlapping areas of the picture. When it is a distant-view display scene, the greater the height, the farther the farthest visible distance, so the current minimum observation distance and current maximum observation distance are determined from the height of the virtual camera above the ground in the three-dimensional scene. The depth values calculated from these distances are more accurate, which avoids flicker in overlapping areas of the picture.
Further, on the basis of the first embodiment, in other embodiments, after the depth value of the target object is calculated, the method further includes the following steps:
and rendering the three-dimensional scene to a two-dimensional screen according to the depth value of the target object.
In this embodiment, the three-dimensional scene is rendered onto the two-dimensional screen by using the depth value of the target object calculated in the first embodiment, so that the problem of screen flicker in a two-dimensional picture displayed on the two-dimensional screen is avoided.
Fig. 3 is a flowchart illustrating a method for determining depth of an object in a three-dimensional scene according to a second embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 3 if the results are substantially the same. As shown in fig. 3, the method comprises the steps of:
step S201: and acquiring the distance between the target object and the virtual camera in the three-dimensional scene.
In this embodiment, step S201 in fig. 3 is similar to step S101 in fig. 1, and for brevity, is not described herein again.
Step S202: and detecting whether the three-dimensional scene is a close view display scene or a distant view display scene. When the three-dimensional scene is a close-range display scene, executing step S203; when the three-dimensional scene is a distant view display scene, step S204 to step S208 are executed.
In this embodiment, step S202 in fig. 3 is similar to step S102 in fig. 1, and for brevity, is not described herein again.
Step S203: and calculating the depth value of the target object by using the distance and a preset first minimum observation distance and a preset first maximum observation distance of the virtual camera.
In this embodiment, step S203 in fig. 3 is similar to step S103 in fig. 1, and for brevity, is not described herein again.
Step S204: and acquiring the height of the virtual camera from the ground.
Step S205: and judging whether the height is smaller than a preset threshold value. If yes, go to step S206; if not, go to step S207.
It should be noted that the preset threshold is set in advance.
Step S206: and taking the preset second preset minimum observation distance and the preset second maximum observation distance as the current minimum observation distance and the current maximum observation distance of the virtual camera.
In step S206, the second preset minimum observation distance and the second preset maximum observation distance are set in advance. In a distant-view display scene, because the earth is spherical, the farthest distance seen by the virtual camera is related to the curvature of the earth and to the height of the virtual camera, and can be calculated as follows:
F=3569.6*sqrt(H);
where F is the farthest distance and H is the height of the virtual camera. From the above formula, when H is 1 meter the farthest distance seen by the virtual camera is 3569.6 meters. In an urban scene, if the virtual camera is close to the ground, buildings occlude the view and the observable distance is short, on the order of a few hundred meters; because all observed objects are close, essentially no flicker occurs. In this case a fixed second preset minimum observation distance and second preset maximum observation distance may be set, for example: a minimum observation distance of 0.1 meter and a maximum observation distance of 3569 meters.
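Note that the coefficient 3569.6 is the standard horizon-distance approximation sqrt(2*R) for an earth radius R of about 6,371,000 meters. The following sketch combines this formula with the threshold branch of steps S205 to S206 and the table lookup of step S207 described next; the threshold value and the nearest-height lookup strategy are illustrative assumptions, not values from the patent:

    import math

    PRESET_HEIGHT_THRESHOLD = 50.0  # meters; hypothetical threshold value
    SECOND_PRESET_MIN = 0.1         # meters, from the example above
    SECOND_PRESET_MAX = 3569.0      # meters, from the example above

    def farthest_visible_distance(h):
        # Horizon distance for a virtual camera h meters above a spherical earth.
        return 3569.6 * math.sqrt(h)

    def current_observation_distances(height, parameter_table):
        # parameter_table: list of (height, min_distance, max_distance) rows.
        if height < PRESET_HEIGHT_THRESHOLD:
            return SECOND_PRESET_MIN, SECOND_PRESET_MAX
        # Otherwise use the row whose height is closest to the camera height
        # (one possible lookup strategy).
        _, n, f = min(parameter_table, key=lambda row: abs(row[0] - height))
        return n, f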
Step S207: the current minimum and maximum viewing distances of the virtual camera are determined from a pre-configured parameter table using the height.
In step S207, it should be noted that the parameter table is constructed before depth values are calculated; the table records a plurality of heights and, for each height, the optimal minimum observation distance and optimal maximum observation distance in the distant-view display scene. Specifically, referring to fig. 4, constructing the parameter table includes the following steps:
step S301: a plurality of preset heights are obtained.
Step S302: and respectively calculating the corresponding minimum observation distance and the maximum observation distance at each height.
In step S302, the minimum observation distance is calculated by the formula:
N=0.1*H;
wherein N is the minimum observation distance, and H is the height value;
the maximum observation distance is calculated by the formula:
F=3569.6*sqrt(H);
wherein F is the maximum viewing distance.
Step S303: rendering the test scene based on the minimum observation distance and the maximum observation distance corresponding to each height to obtain a display picture.
In step S303, after the minimum observation distance and the maximum observation distance corresponding to each height have been calculated, the test scene is rendered with each pair of minimum and maximum observation distances to obtain a display picture, where the test scene is a distant-view display scene.
Step S304: and detecting whether the display picture has depth conflict or cuts off nearby objects. If the depth conflict occurs or the nearby object is cut out, executing step S305; if no depth conflict occurs and no nearby object is cut, step S306 is executed.
In step S304, if a depth conflict occurs in the display picture or an object near the virtual camera is cut off, the pair of minimum and maximum observation distances corresponding to that picture is not optimal and step S305 is executed; otherwise step S306 is executed.
Step S305: and readjusting the minimum observation distance and the maximum observation distance corresponding to the height until the display picture is displayed normally, and then storing the height and the corresponding minimum observation distance and maximum observation distance.
In step S305, the minimum observation distance and the maximum observation distance at that height are readjusted until the display picture no longer shows a depth conflict and no object near the virtual camera is cut off, and the height and the adjusted minimum and maximum observation distances are then saved.
Step S306: the height, and the corresponding minimum and maximum viewing distances are saved.
Step S307: and constructing a parameter table according to the stored height and the corresponding minimum observation distance and maximum observation distance.
In this embodiment, after the minimum observation distance and the maximum observation distance corresponding to the different heights are calculated, each height's pair of distances is verified and any sub-standard values are corrected, ensuring that the minimum and maximum observation distances ultimately found in the parameter table are optimal.
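As a sketch of the construction procedure just described: because the patent verifies each candidate pair by rendering a test scene and checking the picture, the rendering check and the re-adjustment are abstracted here into callbacks supplied by a test harness; the function signatures are assumptions made for illustration:

    import math

    def build_parameter_table(preset_heights, render_and_check, adjust):
        # render_and_check(h, n, f) -> True when the test scene rendered at
        # height h with near plane n and far plane f shows no depth conflict
        # and does not cut off nearby objects (steps S303 and S304).
        # adjust(h, n, f) -> a re-adjusted (n, f) pair to try next (step S305).
        table = []
        for h in preset_heights:
            n = 0.1 * h                # initial minimum observation distance
            f = 3569.6 * math.sqrt(h)  # initial maximum observation distance
            while not render_and_check(h, n, f):
                n, f = adjust(h, n, f)
            table.append((h, n, f))    # steps S305/S306: save the final pair
        return table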
Step S208: the depth value of the target object is calculated in combination with the distance, the current minimum observation distance, and the current maximum observation distance.
The method for determining the depth of an object in a three-dimensional scene according to the second embodiment of the present invention builds on the first embodiment. In a distant-view display scene, the current minimum observation distance and current maximum observation distance are determined from the height of the virtual camera: when the height is smaller than a preset threshold, a second preset minimum observation distance and a second preset maximum observation distance are used as the current distances; when the height is greater than or equal to the threshold, the current minimum and maximum observation distances are found by looking up the height in a parameter table. The parameter table is configured in advance through repeated tests, so the minimum and maximum observation distances recorded for each height ensure that no flicker occurs when the three-dimensional scene is rendered into a two-dimensional picture, giving the best user experience. In addition, because the minimum and maximum observation distances are obtained by table lookup, they need not be recomputed during image rendering, which speeds up processing and further improves image-processing performance.
Fig. 5 is a flowchart illustrating a method for determining depth of an object in a three-dimensional scene according to a third embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 5 if the results are substantially the same. As shown in fig. 5, the method includes the steps of:
step S401: and acquiring the distance between the target object and the virtual camera in the three-dimensional scene and the height between the virtual camera and the ground.
In step S401, the distance between the target object and the virtual camera in the three-dimensional scene and the height of the virtual camera above the ground are obtained in real time. The distance between the target object and the virtual camera may be obtained by constructing a three-dimensional coordinate system, determining the coordinates of the target object and the virtual camera in that coordinate system, and then calculating the distance between them.
Step S402: the current minimum and maximum viewing distances of the virtual camera are calculated in real time based on the altitude.
In step S402, please refer to step S302 in the parameter table constructing embodiment for calculating the current minimum observation distance and the current maximum observation distance, which is not described herein again.
Step S403: and calculating the depth value of the target object by using the distance, the current minimum observation distance and the current maximum observation distance.
In step S403, please refer to step S103 in the first embodiment for the calculation of the depth value, which is not repeated here.
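A minimal sketch of this embodiment, reusing the formulas of step S302 and the depth mapping assumed for step S103 (the same reconstruction caveat applies):

    import math

    def depth_value_realtime(z, camera_height):
        # Recompute the observation distances from the camera height in real
        # time, then evaluate the depth value of the target object directly.
        n = 0.1 * camera_height                # current minimum observation distance
        f = 3569.6 * math.sqrt(camera_height)  # current maximum observation distance
        return (1.0 / n - 1.0 / z) / (1.0 / n - 1.0 / f)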
The method for determining the depth of an object in a three-dimensional scene according to the third embodiment of the present invention obtains the height of the virtual camera in real time and calculates the current minimum observation distance and current maximum observation distance from that height, so that the optimal minimum and maximum observation distances are computed precisely for each virtual camera position, ensuring optimal rendering and display under the current observation distances.
Fig. 6 shows a schematic structural diagram of an apparatus for determining depth of an object in a three-dimensional scene according to a first embodiment of the present invention. As shown in fig. 6, in the present embodiment, the apparatus 10 for determining the depth of an object in a three-dimensional scene includes a first obtaining module 11, a detecting module 12, a first calculating module 13, and a second calculating module 14.
The first acquisition module 11 is configured to acquire a distance between a target object in a three-dimensional scene and a virtual camera;
the detection module 12 is coupled to the first obtaining module 11 and configured to detect whether the three-dimensional scene is a close-range display scene or a long-range display scene;
a first calculating module 13, coupled to the detecting module 12, for calculating a depth value of the target object by using the distance and a first preset minimum observing distance and a first preset maximum observing distance of the virtual camera when the three-dimensional scene is a close-up display scene;
and a second calculating module 14, coupled to the detecting module 12, for obtaining a height of the virtual camera from the ground when the three-dimensional scene is a distant view display scene, determining a current minimum observation distance and a current maximum observation distance of the virtual camera based on the height, and calculating a depth value of the target object by combining the distance, the current minimum observation distance and the current maximum observation distance.
Optionally, the operation in which the second calculation module 14 determines the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height further includes: judging whether the height is smaller than a preset threshold; if so, taking a second preset minimum observation distance and a second preset maximum observation distance as the current minimum observation distance and the current maximum observation distance of the virtual camera; if not, determining the current minimum observation distance and the current maximum observation distance of the virtual camera from a pre-configured parameter table using the height. The apparatus further performs an operation of constructing the parameter table, which includes: acquiring a plurality of preset heights; calculating the corresponding minimum observation distance and maximum observation distance at each height, the minimum observation distance being calculated as N=0.1*H, where N is the minimum observation distance and H is the height, and the maximum observation distance as F=3569.6*sqrt(H), where F is the maximum observation distance; rendering the test scene based on the minimum and maximum observation distances corresponding to each height to obtain a display picture; detecting whether the display picture has a depth conflict or cuts off nearby objects; if a depth conflict occurs or a nearby object is cut off, readjusting the minimum and maximum observation distances corresponding to that height until the display picture is displayed normally, and then saving the height and the corresponding minimum and maximum observation distances; if no depth conflict occurs and no nearby object is cut off, saving the height and the corresponding minimum and maximum observation distances; and constructing the parameter table from the saved heights and the corresponding minimum and maximum observation distances.
Optionally, the first calculation module 13 or the second calculation module 14 further includes, after the operation of calculating the depth value of the target object: and rendering the three-dimensional scene to a two-dimensional screen according to the depth value of the target object.
Fig. 7 shows a schematic structural diagram of an apparatus for determining depth of an object in a three-dimensional scene according to a second embodiment of the present invention. As shown in fig. 7, in the present embodiment, the apparatus 20 for determining the depth of an object in a three-dimensional scene includes a second obtaining module 21, a parameter calculating module 22 and a depth value calculating module 23.
The second obtaining module 21 is configured to obtain a distance between the target object and the virtual camera in the three-dimensional scene and a height of the virtual camera from the ground;
a parameter calculating module 22, coupled to the second obtaining module 21, for calculating a current minimum observation distance and a current maximum observation distance of the virtual camera in real time based on the height;
and a depth value calculating module 23, coupled to the parameter calculating module 22, for calculating a depth value of the target object by using the distance, the current minimum observation distance, and the current maximum observation distance.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present invention. As shown in fig. 8, the terminal 60 includes a processor 61 and a memory 62 coupled to the processor 61.
The memory 62 stores program instructions for implementing the method for determining depth of an object in a three-dimensional scene according to any of the embodiments described above.
The processor 61 is operative to execute program instructions stored in the memory 62 to calculate depth values for objects in the three-dimensional scene.
The processor 61 may also be referred to as a CPU (Central Processing Unit). The processor 61 may be an integrated circuit chip having signal processing capabilities. The processor 61 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a storage medium according to an embodiment of the invention. The storage medium of the embodiment of the present invention stores a program file 71 capable of implementing all of the methods described above. The program file 71 may be stored in the storage medium in the form of a software product, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, or terminal devices such as a computer, a server, a mobile phone or a tablet.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. The above embodiments are merely examples and are not intended to limit the scope of the present disclosure, and all modifications, equivalents, and flow charts using the contents of the specification and drawings of the present disclosure or those directly or indirectly applied to other related technical fields are intended to be included in the scope of the present disclosure.

Claims (10)

1. A method for determining depth of an object in a three-dimensional scene, the method comprising:
acquiring the distance between a target object and a virtual camera in a three-dimensional scene;
detecting whether the three-dimensional scene is a close view display scene or a distant view display scene;
when the three-dimensional scene is the close-range display scene, calculating the depth value of the target object by using the distance and a first preset minimum observation distance and a first preset maximum observation distance of the virtual camera;
and when the three-dimensional scene is the distant view display scene, acquiring the height of the virtual camera from the ground, determining the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height, and calculating the depth value of the target object by combining the distance, the current minimum observation distance and the current maximum observation distance.
2. The method of claim 1, wherein the step of determining a current minimum viewing distance and a current maximum viewing distance of the virtual camera based on the height comprises:
judging whether the height is smaller than a preset threshold value or not;
if so, taking a second preset minimum observation distance and a second preset maximum observation distance as the current minimum observation distance and the current maximum observation distance of the virtual camera;
if not, determining the current minimum observation distance and the current maximum observation distance of the virtual camera from a pre-configured parameter table by using the height.
3. The method of claim 2, further comprising constructing the parameter table, wherein the step of constructing the parameter table comprises:
acquiring a plurality of preset heights;
respectively calculating the corresponding minimum observation distance and the maximum observation distance under each height;
rendering the test scene based on the minimum observation distance and the maximum observation distance corresponding to each height to obtain a display picture;
detecting whether the display picture has depth conflict or cuts off nearby objects;
if the depth conflict occurs or a nearby object is cut off, readjusting the minimum observation distance and the maximum observation distance corresponding to the height until the display picture is displayed normally, and then storing the height and the corresponding minimum observation distance and the maximum observation distance;
if the depth conflict does not occur and the nearby object is not cut off, storing the height, and the corresponding minimum observation distance and the maximum observation distance;
and constructing the parameter table according to the stored height and the corresponding minimum observation distance and maximum observation distance.
4. The method of claim 3, wherein the minimum viewing distance is calculated by the formula:
N=0.1*H;
wherein N is the minimum observation distance and H is the height;
the maximum observation distance is calculated according to the formula:
F=3569.6*sqrt(H);
wherein F is the maximum viewing distance.
5. The method of claim 1, further comprising:
rendering the three-dimensional scene to a two-dimensional screen according to the depth value of the target object.
6. A method for determining depth of an object in a three-dimensional scene, the method comprising:
acquiring the distance between a target object and a virtual camera in a three-dimensional scene and the height of the virtual camera from the ground;
calculating a current minimum observation distance and a current maximum observation distance of the virtual camera in real time based on the height;
and calculating the depth value of the target object by using the distance, the current minimum observation distance and the current maximum observation distance.
7. An apparatus for determining depth of an object in a three-dimensional scene, the apparatus comprising:
the first acquisition module is used for acquiring the distance between a target object and the virtual camera in the three-dimensional scene;
the detection module is coupled with the first acquisition module and used for detecting whether the three-dimensional scene is a close view display scene or a long view display scene;
a first calculation module, coupled to the detection module, for calculating a depth value of the target object using the distance and a first preset minimum observation distance and a first preset maximum observation distance of a virtual camera when the three-dimensional scene is the close-up display scene;
and the second calculation module is coupled with the detection module and used for acquiring the height of the virtual camera from the ground when the three-dimensional scene is the distant view display scene, determining the current minimum observation distance and the current maximum observation distance of the virtual camera based on the height, and calculating the depth value of the target object by combining the distance, the current minimum observation distance and the current maximum observation distance.
8. An apparatus for determining depth of an object in a three-dimensional scene, the apparatus comprising:
the second acquisition module is used for acquiring the distance between a target object and the virtual camera in the three-dimensional scene and the height of the virtual camera from the ground;
a parameter calculation module, coupled to the second acquisition module, for calculating a current minimum observation distance and a current maximum observation distance of the virtual camera in real time based on the height;
and the depth value calculation module is coupled with the parameter calculation module and used for calculating the depth value of the target object by utilizing the distance, the current minimum observation distance and the current maximum observation distance.
9. A terminal, comprising a processor, a memory coupled to the processor, wherein,
the memory stores program instructions for implementing a method for object depth determination in a three-dimensional scene as claimed in any one of claims 1-6;
the processor is to execute the program instructions stored by the memory to calculate depth values for objects in a three-dimensional scene.
10. A storage medium, characterized in that a program file enabling the method for determining the depth of an object in a three-dimensional scene according to any one of claims 1-6 is stored.
CN202010032598.8A 2020-01-13 2020-01-13 Method, device, terminal and storage medium for determining object depth in three-dimensional scene Active CN111275611B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010032598.8A CN111275611B (en) 2020-01-13 2020-01-13 Method, device, terminal and storage medium for determining object depth in three-dimensional scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010032598.8A CN111275611B (en) 2020-01-13 2020-01-13 Method, device, terminal and storage medium for determining object depth in three-dimensional scene

Publications (2)

Publication Number Publication Date
CN111275611A (en) 2020-06-12
CN111275611B CN111275611B (en) 2024-02-06

Family

ID=71000187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010032598.8A Active CN111275611B (en) 2020-01-13 2020-01-13 Method, device, terminal and storage medium for determining object depth in three-dimensional scene

Country Status (1)

Country Link
CN (1) CN111275611B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011029209A2 (en) * 2009-09-10 2011-03-17 Liberovision Ag Method and apparatus for generating and processing depth-enhanced images
US20120176473A1 (en) * 2011-01-07 2012-07-12 Sony Computer Entertainment America Llc Dynamic adjustment of predetermined three-dimensional video settings based on scene content
CN105894567A (en) * 2011-01-07 2016-08-24 索尼互动娱乐美国有限责任公司 Scaling pixel depth values of user-controlled virtual object in three-dimensional scene
CN106228613A (en) * 2016-06-12 2016-12-14 深圳超多维光电子有限公司 Construction method, device and the stereoscopic display device of a kind of virtual three-dimensional scene
CN109829981A (en) * 2019-02-16 2019-05-31 深圳市未来感知科技有限公司 Three-dimensional scenic rendering method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李鹏飞; 周杨; 许继伟; 胡校飞; 薛现光: "Research on improving the display accuracy of cross-scale three-dimensional scenes" *
王进成; 金一丞; 曹士连: "Simulating radar images using three-dimensional scene rendering technology" *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112686989A (en) * 2021-01-04 2021-04-20 北京高因科技有限公司 Three-dimensional space roaming implementation method
WO2023071586A1 (en) * 2021-10-25 2023-05-04 腾讯科技(深圳)有限公司 Picture generation method and apparatus, device, and medium

Also Published As

Publication number Publication date
CN111275611B (en) 2024-02-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant