WO2020228592A1 - Rendering method and device - Google Patents
- Publication number
- WO2020228592A1 (PCT/CN2020/089114)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- rendering
- rendered
- spatial distance
- dimensional
- accuracy
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
Definitions
- This application relates to the field of computer technology, in particular to a rendering method and device.
- to ensure the quality of the picture displayed on a 3D digital large screen, the large screen often renders the elements of an entire city map at high resolution. However, this rendering method is computationally expensive and consumes substantial system resources.
- the embodiments of the present application show a rendering method and device.
- an embodiment of the present application shows a rendering method, and the method includes:
- the determining the rendering accuracy for rendering the three-dimensional element to be rendered according to the first spatial distance includes:
- in the correspondence between the spatial distance and the rendering accuracy, the rendering accuracy corresponding to the first spatial distance is searched for and used as the first rendering accuracy.
- the determining the rendering accuracy for rendering the three-dimensional element to be rendered according to the first spatial distance includes:
- the spatial distance interval in which the first spatial distance is located is determined, and, in the correspondence between the spatial distance interval and the rendering accuracy, the rendering accuracy corresponding to that spatial distance interval is searched for and used as the first rendering accuracy.
- the acquiring three-dimensional elements to be rendered includes: determining the three-dimensional space to be rendered in the preset three-dimensional space, and acquiring three-dimensional elements located in the three-dimensional space to be rendered as the three-dimensional elements to be rendered.
- the acquiring three-dimensional elements located in the three-dimensional space to be rendered includes:
- in the correspondence between the spatial identifier of the three-dimensional space and the three-dimensional elements located in the three-dimensional space, search for the three-dimensional element corresponding to the spatial identifier.
- the method further includes: determining whether the position of the base point of the rendering perspective has changed; if the position has changed, acquiring a second spatial distance between the changed base point of the rendering perspective and the three-dimensional element to be rendered; determining, according to the second spatial distance, a second rendering accuracy for rendering the three-dimensional element to be rendered; and re-rendering the three-dimensional element to be rendered according to the second rendering accuracy.
- the method further includes: determining whether the second spatial distance is less than the first spatial distance; if it is, performing the step of determining the second rendering accuracy for rendering the three-dimensional element to be rendered according to the second spatial distance.
- an embodiment of the present application shows a rendering method, and the method includes:
- an embodiment of the present application shows a rendering device, and the device includes:
- the first determining module is used to determine the base point of the rendering perspective
- the first obtaining module is used to obtain three-dimensional elements to be rendered
- the second acquiring module is configured to acquire the first spatial distance between the base point of the rendering perspective and the three-dimensional element to be rendered;
- a second determining module configured to determine, according to the first spatial distance, a first rendering accuracy for rendering the three-dimensional element to be rendered
- the first rendering module is configured to render the 3D elements to be rendered according to the first rendering accuracy.
- the second determining module includes:
- the first searching unit is configured to search for the rendering accuracy corresponding to the first spatial distance in the correspondence between the spatial distance and the rendering accuracy, and use it as the first rendering accuracy.
- the second determining module includes:
- the first determining unit is configured to determine the spatial distance interval in which the first spatial distance is located in the correspondence between the spatial distance interval and the rendering accuracy;
- the second searching unit is configured to search for the rendering accuracy corresponding to the spatial distance interval in the corresponding relationship between the spatial distance interval and the rendering accuracy, and use it as the first rendering accuracy.
- the first obtaining module includes:
- the second determining unit is configured to determine the three-dimensional space to be rendered in the preset three-dimensional space
- the first acquiring unit is configured to acquire a three-dimensional element located in the three-dimensional space to be rendered, and use it as the three-dimensional element to be rendered.
- the first acquiring unit includes:
- the search subunit is used to search for the three-dimensional element corresponding to the spatial identifier in the correspondence between the spatial identifier of the three-dimensional space and the three-dimensional element located in the three-dimensional space.
- the device further includes:
- the third determining module is used to determine whether the position of the base point of the rendering perspective has changed
- the third acquiring module is configured to acquire the second spatial distance between the changed rendering perspective base point and the three-dimensional element to be rendered if the position of the rendering perspective base point changes;
- a fourth determining module configured to determine, according to the second spatial distance, a second rendering accuracy for rendering the three-dimensional element to be rendered
- the second rendering module is configured to re-render the 3D elements to be rendered according to the second rendering accuracy.
- the device further includes:
- a fifth determining module configured to determine whether the second spatial distance is less than the first spatial distance
- the fourth determining module is further configured to determine a second rendering accuracy for rendering the three-dimensional element to be rendered according to the second spatial distance if the second spatial distance is less than the first spatial distance.
- an embodiment of the present application shows a rendering device, which includes:
- the sixth determining module is used to determine the base point of the rendering perspective
- the fourth obtaining module is used to obtain the elements to be rendered
- a fifth acquiring module configured to acquire the first spatial distance between the base point of the rendering perspective and the element to be rendered
- a seventh determining module configured to determine the first rendering accuracy for rendering the element to be rendered according to the first spatial distance
- the second rendering module is configured to render the element to be rendered according to the first rendering accuracy.
- an embodiment of the present application shows an electronic device, and the electronic device includes:
- a processor, and a memory having executable code stored thereon; when the executable code is executed, the processor is caused to execute the rendering method as described in the first aspect.
- the embodiments of the present application show one or more machine-readable media on which executable code is stored, and when the executable code is executed, the processor is caused to execute the rendering method as described in the first aspect.
- an embodiment of the present application shows an electronic device, and the electronic device includes:
- a processor, and a memory having executable code stored thereon; when the executable code is executed, the processor is caused to execute the rendering method as described in the second aspect.
- the embodiments of the present application show one or more machine-readable media having executable code stored thereon, and when the executable code is executed, the processor is caused to execute the rendering method as described in the second aspect.
- the embodiments of the present application include the following advantages:
- the first spatial distance between the base point of the rendering perspective and the three-dimensional elements to be rendered can be used to determine the first rendering accuracy for rendering the three-dimensional elements to be rendered, and the three-dimensional elements to be rendered can then be rendered according to that first rendering accuracy.
- the rendering accuracy can be flexibly configured according to the spatial distance between the base point of the rendering perspective and the three-dimensional element to be rendered, so that the user can see the details of the real appearance of the three-dimensional element to be rendered, or reduce the amount of calculation to save system resources.
- when the first spatial distance is small, the three-dimensional element to be rendered can be rendered with higher precision, so that the user can clearly see the details of the real appearance of the three-dimensional element to be rendered.
- when the first spatial distance is large, the rendered three-dimensional element that the user sees is smaller, and even if the three-dimensional element to be rendered is rendered with higher precision, it is not easy for the user to see the details of its true appearance clearly.
- that is, whether the three-dimensional elements to be rendered are rendered according to the higher rendering accuracy or the lower rendering accuracy, there is no substantial difference for the user. Therefore, there is no need to render all the details of the real appearance of the three-dimensional elements during rendering, and the three-dimensional elements to be rendered can be rendered according to the lower accuracy, thereby saving system resources.
- Fig. 1 is a flow chart showing a rendering method according to an exemplary embodiment
- Fig. 2 is a flow chart showing a rendering method according to an exemplary embodiment
- Fig. 3 is a flow chart showing a method for acquiring three-dimensional elements according to an exemplary embodiment
- Fig. 4 is a flowchart showing a rendering method according to an exemplary embodiment
- Fig. 5 is a block diagram showing a rendering device according to an exemplary embodiment
- Fig. 6 is a block diagram showing a rendering device according to an exemplary embodiment
- Fig. 7 is a block diagram showing a rendering device according to an exemplary embodiment.
- Fig. 1 is a flow chart showing a rendering method according to an exemplary embodiment. As shown in Fig. 1, the method is used in an electronic device.
- the electronic device includes a large screen or a virtual reality device.
- the method includes the following steps.
- step S101 the base point of the rendering perspective is determined
- step S102 the elements to be rendered are acquired
- step S103 the first spatial distance between the base point of the rendering perspective and the element to be rendered is acquired
- step S104 the first rendering accuracy for rendering the elements to be rendered is determined according to the first spatial distance
- different spatial distances correspond to different rendering precisions, or different spatial distance sets correspond to different rendering precisions, or different spatial distance intervals correspond to different rendering precisions.
- the spatial distance set may include multiple spatial distances, for example, may include multiple discrete spatial distances.
- the spatial distance interval may include multiple spatial distances, for example, may include multiple consecutive spatial distances.
- step S105 the elements to be rendered are rendered according to the first rendering accuracy.
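As a minimal illustration (not the patent's implementation), steps S101 to S105 can be sketched as follows; the element structure, accuracy levels, and distance thresholds are all hypothetical assumptions:

```python
import math

# Hypothetical elements to be rendered, each with a position in the preset space.
elements = [
    {"name": "building_a", "pos": (0.0, 0.0, 0.0)},
    {"name": "building_b", "pos": (3000.0, 0.0, 0.0)},
]

def pick_accuracy(distance_m):
    """Smaller spatial distance -> higher rendering accuracy (assumed thresholds)."""
    if distance_m < 1000:
        return "high"
    if distance_m < 2000:
        return "medium"
    return "low"

def render(element, accuracy):
    # Placeholder for the actual rendering call.
    return f"{element['name']} rendered at {accuracy} accuracy"

base_point = (0.0, 0.0, 10.0)                # S101: base point of the rendering perspective
for elem in elements:                        # S102: elements to be rendered
    d = math.dist(base_point, elem["pos"])   # S103: first spatial distance
    accuracy = pick_accuracy(d)              # S104: first rendering accuracy
    print(render(elem, accuracy))            # S105: render at that accuracy
```

The nearby element ends up with high accuracy and the distant one with low accuracy, which is the trade-off the method describes.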
- according to the first spatial distance between the base point of the rendering perspective and the element to be rendered, the first rendering accuracy for rendering the element to be rendered can be determined, and the element to be rendered can then be rendered according to the first rendering accuracy.
- the rendering accuracy can be flexibly configured according to the spatial distance between the base point of the rendering perspective and the element to be rendered, so that the user can see the details of the true appearance of the element to be rendered, or reduce the amount of calculation to save system resources.
- the element to be rendered can be rendered with higher accuracy, so that the user can see the details of the real appearance of the element to be rendered clearly.
- when the first spatial distance is larger, the element to be rendered that the user sees is smaller, and even if the element to be rendered is rendered with higher precision, it is not easy for the user to see its true appearance clearly.
- that is, there is no substantial difference for the user whether the elements to be rendered are rendered according to a higher rendering accuracy or a lower rendering accuracy. Therefore, there is no need to render the full real appearance of the elements during rendering, and the elements to be rendered can be rendered according to a lower precision, thereby saving system resources.
- the elements to be rendered in the embodiment shown in FIG. 1 include elements of multiple dimensions, for example, one-dimensional elements, two-dimensional elements, and three-dimensional elements. That is, the method of the embodiment shown in FIG. 1 can be applied to elements of various dimensions.
- this application takes a three-dimensional element to be rendered as an example for illustration, but it is not used as a limitation on the protection scope of this application, for example, not as a limitation on the number of dimensions of the elements in this application.
- Fig. 2 is a flow chart showing a rendering method according to an exemplary embodiment. As shown in Fig. 2, the method is used in an electronic device.
- the electronic device includes a virtual reality device or a large screen.
- the method includes the following steps.
- step S201 the base point of the rendering perspective is determined
- the user can view the three-dimensional elements in the preset three-dimensional space through electronic devices such as VR (Virtual Reality) devices or three-dimensional screens.
- the preset three-dimensional space includes many rendering elements.
- the three-dimensional space is a three-dimensional map of a city
- the three-dimensional map includes buildings, rivers, trees, roads, and the like in the counties/districts of each city.
- the rendering perspective base point may be located in the preset three-dimensional space.
- for example, when viewing a three-dimensional map of Beijing, if the user wants to observe the appearance of Zhongguancun from the location of the "National Library", then the location of the "National Library" is the base point of the rendering perspective, and each three-dimensional element located in the Zhongguancun area is a three-dimensional element to be rendered in this application.
- step S202 obtain three-dimensional elements to be rendered
- all three-dimensional elements in the preset three-dimensional space can be rendered. In this way, all three-dimensional elements in the preset three-dimensional space can be acquired and used as the three-dimensional elements to be rendered.
- alternatively, a part of the three-dimensional elements that the user needs to view may be obtained according to the actual needs of the user and used as the three-dimensional elements to be rendered. For the specific acquisition process, refer to the embodiment shown in FIG. 3 later; it is not described in detail here.
- step S203 the first spatial distance between the base point of the rendering perspective and the three-dimensional element to be rendered is acquired
- the base point of the rendering perspective has a position in the preset three-dimensional space, such as latitude and longitude coordinates, and each three-dimensional element in the preset three-dimensional space likewise has its own fixed position, such as latitude and longitude coordinates.
- the position of the base point of the rendering perspective in the preset three-dimensional space and the position of the three-dimensional element to be rendered in the preset three-dimensional space can be obtained, and the first spatial distance between the base point of the rendering perspective and the three-dimensional element to be rendered can then be calculated from these two positions.
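A minimal sketch of this distance calculation, assuming the two positions are already expressed in a common metric coordinate system (positions given as longitude/latitude would first need projecting; the coordinates below are made up for illustration):

```python
import math

def spatial_distance(base_point, element_pos):
    """Euclidean distance between two positions in the preset 3D space."""
    return math.dist(base_point, element_pos)

# Base point at the "National Library", element somewhere in Zhongguancun
# (hypothetical scene coordinates in metres).
first_distance = spatial_distance((0.0, 0.0, 1.5), (900.0, 1200.0, 1.5))
print(first_distance)  # 1500.0
```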
- step S204 the first rendering accuracy for rendering the three-dimensional elements to be rendered is determined according to the first spatial distance
- different spatial distances correspond to different rendering precisions, or different spatial distance sets correspond to different rendering precisions, or different spatial distance intervals correspond to different rendering precisions.
- the spatial distance set may include multiple spatial distances, for example, may include multiple discrete spatial distances.
- the spatial distance interval may include multiple spatial distances, for example, may include multiple consecutive spatial distances.
- the staff can manually evaluate in advance the rendering accuracy suitable for a spatial distance; the spatial distance and the rendering accuracy suitable for it can then be combined into a corresponding table item and stored in the correspondence between spatial distance and rendering accuracy.
- when the spatial distance is large, the base point of the perspective is far from the three-dimensional element. The three-dimensional element the user sees occupies a smaller part of the screen, so even if the true appearance of the three-dimensional element is fully rendered, the user cannot see all of its details. That is, whether the three-dimensional element is rendered according to a higher or a lower rendering accuracy, there is no substantial difference for the user. Therefore, there is no need to render all the details of the true appearance during rendering, and in order to save system resources, the lower rendering accuracy can be used as the rendering accuracy suitable for that spatial distance.
- when the spatial distance is small, the base point of the perspective is relatively close to the three-dimensional element. The three-dimensional element the user sees occupies a larger part of the screen, and the user often needs to see all the details of its true appearance clearly. Therefore, all of the true appearance of the three-dimensional element needs to be rendered during rendering, and the higher rendering accuracy can be used as the rendering accuracy suitable for that spatial distance.
- the spatial distance is inversely proportional to the rendering accuracy. That is, a larger spatial distance corresponds to a lower rendering accuracy, and a smaller spatial distance corresponds to a higher rendering accuracy.
- in this way, in the correspondence between the spatial distance and the rendering accuracy, the rendering accuracy corresponding to the first spatial distance can be searched for and used as the first rendering accuracy.
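The correspondence table described above can be sketched as a direct lookup; the distances and accuracy levels are illustrative assumptions:

```python
# Hypothetical correspondence between spatial distance and rendering accuracy,
# prepared in advance by staff: larger distance -> lower accuracy.
DISTANCE_TO_ACCURACY = {
    500.0: "high",
    1500.0: "medium",
    2500.0: "low",
}

def first_rendering_accuracy(first_spatial_distance):
    # Search the correspondence for the entry matching the first spatial distance.
    return DISTANCE_TO_ACCURACY[first_spatial_distance]

print(first_rendering_accuracy(1500.0))  # medium
```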
- the staff can divide a plurality of different spatial distance intervals in advance, for example, (0, 1km) as a spatial distance interval, (1km, 2km) as a spatial distance interval, (2km, 3km) as a spatial distance interval, and so on.
- the staff can manually evaluate the rendering accuracy applicable to a spatial distance interval, compose the spatial distance interval and the rendering accuracy applicable to it into a corresponding table item, and store it in the correspondence between the spatial distance interval and the rendering accuracy.
- when the spatial distance interval covers large distances, the base point of the perspective is relatively far from the three-dimensional element. The three-dimensional element the user sees occupies a smaller part of the screen, so even if its true appearance is fully rendered, the user cannot see all of its details. That is, whether the three-dimensional element is rendered according to a higher or a lower rendering accuracy, there is no substantial difference for the user. Therefore, there is no need to render all of the true appearance during rendering, and in order to save system resources, a lower rendering accuracy can be used as the rendering accuracy suitable for that spatial distance interval.
- when the spatial distance interval covers small distances, the base point of the perspective is relatively close to the three-dimensional element. The three-dimensional element the user sees occupies a larger part of the screen, and the user often needs to see all the details of its true appearance clearly. Therefore, all of the true appearance of the three-dimensional element needs to be rendered during rendering, and a higher rendering accuracy can be used as the rendering accuracy suitable for that spatial distance interval.
- the spatial distance interval is inversely proportional to the rendering accuracy. That is, a larger spatial distance interval corresponds to a lower rendering accuracy, and a smaller spatial distance interval corresponds to a higher rendering accuracy.
- in this way, the spatial distance interval in which the first spatial distance is located can be determined; then, in the correspondence between the spatial distance interval and the rendering accuracy, the rendering accuracy corresponding to that spatial distance interval can be searched for and used as the first rendering accuracy.
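The interval-based correspondence can be sketched with a sorted list of interval boundaries; the boundaries and accuracy levels below are assumptions, not values from the application:

```python
import bisect

# Interval upper bounds (metres) and the accuracy for each interval:
# (0, 1km) -> high, (1km, 2km) -> medium, beyond 2km -> low.
BOUNDS = [1000.0, 2000.0]
ACCURACIES = ["high", "medium", "low"]

def accuracy_for_distance(distance_m):
    """Determine the interval containing the distance, then look up its accuracy."""
    return ACCURACIES[bisect.bisect_left(BOUNDS, distance_m)]

print(accuracy_for_distance(250.0))   # high
print(accuracy_for_distance(1500.0))  # medium
print(accuracy_for_distance(5000.0))  # low
```

Using intervals rather than exact distances means any continuous distance value maps to some accuracy, which matches the "multiple consecutive spatial distances" description above.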
- step S205 the three-dimensional elements to be rendered are rendered according to the first rendering accuracy.
- when rendering the three-dimensional element, the rendering materials of the three-dimensional element can be used. If the three-dimensional element needs to be rendered according to a higher rendering accuracy, it can be rendered according to all of its objectively existing rendering materials; if it needs to be rendered according to a lower rendering accuracy, part of the rendering materials can be selected from the objectively existing rendering materials of the three-dimensional element, and the three-dimensional element can then be rendered according to the selected partial rendering materials.
- for example, the rendering materials of a building include its windows, the doors for entering and exiting the building, and the floor edge lines on its outer surface; the objectively existing numbers of windows, doors, and floor edge lines of the building are fixed.
- when rendering the building, if it needs to be rendered with a higher rendering accuracy, the objectively existing rendering materials of the building can be used for rendering; or, if it needs to be rendered with a lower rendering accuracy, partial rendering materials such as some of the windows, some of the doors, and some of the floor edge lines can be selected from the objectively existing rendering materials of the building, and the three-dimensional element can then be rendered according to the selected partial rendering materials.
- for different rendering precisions, the rendering materials suitable for the three-dimensional element are different.
- therefore, each rendering precision and the rendering materials of the three-dimensional element suitable for that precision can be composed into a corresponding table item and stored in the correspondence between the rendering accuracy and the rendering materials of the three-dimensional element. The same is true for every other rendering accuracy.
- the rendering material corresponding to the first rendering accuracy can be found, and then the three-dimensional element to be rendered can be rendered according to the rendering material.
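A sketch of selecting partial rendering materials by accuracy, using the building example above; the material counts and the halving rule for the lower accuracy are assumptions for illustration:

```python
# Objectively existing rendering materials of a hypothetical building.
ALL_MATERIALS = {
    "windows": 120,
    "doors": 4,
    "floor_edge_lines": 20,
}

# Correspondence between rendering accuracy and the materials actually rendered:
# higher accuracy keeps all materials, lower accuracy keeps only a part (halved here).
ACCURACY_TO_MATERIALS = {
    "high": dict(ALL_MATERIALS),
    "low": {name: count // 2 for name, count in ALL_MATERIALS.items()},
}

def materials_for(accuracy):
    # Look up the rendering materials corresponding to the given accuracy.
    return ACCURACY_TO_MATERIALS[accuracy]

print(materials_for("low"))  # {'windows': 60, 'doors': 2, 'floor_edge_lines': 10}
```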
- the first spatial distance between the base point of the rendering perspective and the three-dimensional elements to be rendered can be used to determine the first rendering accuracy for rendering the three-dimensional elements to be rendered, and the three-dimensional elements to be rendered can then be rendered according to that first rendering accuracy.
- the rendering accuracy can be flexibly configured according to the spatial distance between the base point of the rendering perspective and the three-dimensional element to be rendered, so that the user can see the details of the real appearance of the three-dimensional element to be rendered, or reduce the amount of calculation to save system resources.
- when the first spatial distance is small, the three-dimensional element to be rendered can be rendered with higher precision, so that the user can clearly see the details of the real appearance of the three-dimensional element to be rendered.
- when the first spatial distance is large, the rendered three-dimensional element that the user sees is smaller, and even if the three-dimensional element to be rendered is rendered with higher precision, it is not easy for the user to see the details of its true appearance clearly.
- that is, whether the three-dimensional elements to be rendered are rendered according to the higher rendering accuracy or the lower rendering accuracy, there is no substantial difference for the user. Therefore, there is no need to render all the details of the real appearance of the three-dimensional elements during rendering, and the three-dimensional elements to be rendered can be rendered according to the lower accuracy, which can save system resources.
- part of the 3D elements that the user needs to view can be acquired according to the actual needs of the user and used as the 3D elements to be rendered. See Figure 3 for the specific process.
- step S301 determine the three-dimensional space to be rendered in the preset three-dimensional space
- the preset three-dimensional space can be divided into multiple three-dimensional spaces in advance.
- the preset three-dimensional space is a three-dimensional map of Beijing
- the three-dimensional map of Beijing includes the three-dimensional map of Haidian District and the three-dimensional map of Chaoyang District.
- the user can specify, in the electronic device, at least one of the divided three-dimensional spaces as the three-dimensional space to be rendered, and the electronic device then obtains the three-dimensional space to be rendered specified by the user. For example, if the user only needs to view the three-dimensional map of Haidian District in Beijing, the three-dimensional map of Haidian District can be specified in the electronic device; the electronic device obtains the three-dimensional map of Haidian District specified by the user and uses it as the three-dimensional space to be rendered.
- alternatively, the electronic device may determine, based on the position of the base point of the rendering perspective in the preset three-dimensional space and the viewing angle and direction of the electronic device, the area that can be viewed from the base point of the rendering perspective, and use it as the three-dimensional space to be rendered.
- step S302 a three-dimensional element located in the three-dimensional space to be rendered is acquired and used as the three-dimensional element to be rendered.
- the preset three-dimensional space can be divided into multiple three-dimensional spaces in advance.
- the space of a cell can be used as a three-dimensional space, or the area surrounded by several adjacent roads can be used as a three-dimensional space.
- the spatial identifier of the three-dimensional space to be rendered can be obtained; then, in the correspondence between the spatial identifier of the three-dimensional space and the three-dimensional elements located in the three-dimensional space, the three-dimensional elements corresponding to the spatial identifier are searched for and used as the three-dimensional elements to be rendered. In this way, there is no need to obtain the location of each three-dimensional element, or to search for the three-dimensional elements within the range of a location according to the location of each element, thereby saving system resources and improving rendering efficiency.
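This identifier-based lookup can be sketched as a prebuilt mapping from space identifier to element list; the identifiers and element names are hypothetical:

```python
# Hypothetical correspondence between a spatial identifier and the 3D elements
# located in that space, built in advance when the preset space is divided.
SPACE_TO_ELEMENTS = {
    "haidian": ["building_a", "river_x", "road_1"],
    "chaoyang": ["building_b", "tree_y"],
}

def elements_to_render(space_id):
    """Look up the elements for a space identifier instead of scanning every
    element's location, which saves work when the scene is large."""
    return SPACE_TO_ELEMENTS.get(space_id, [])

print(elements_to_render("haidian"))  # ['building_a', 'river_x', 'road_1']
```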
- the user may adjust the position of the base point of the rendering perspective in real time as required. After adjusting the position of the base point of the rendering perspective, the scene viewed by the user will change, for example, increasing or decreasing the distance between the base point of the rendering perspective and certain rendering elements.
- for example, the position of the base point of the rendering perspective can be adjusted so that the adjusted base point is closer to a rendering element.
- since the angle of view of the electronic device is fixed, the user then sees the rendering element at a larger size and can examine its details carefully.
- alternatively, the position of the base point of the rendering perspective can be adjusted so that the adjusted base point is farther from the rendering element; because the angle of view of the electronic device is fixed, the user can then view a larger range of the three-dimensional space.
- after the position of the base point of the rendering perspective is adjusted, the distance between the three-dimensional elements to be rendered and the adjusted base point may change. If it does, in order to make the rendering accuracy of the three-dimensional elements adapt to the changed distance, the rendering accuracy can be adjusted, for example, by re-rendering the three-dimensional elements to be rendered according to the rendering accuracy corresponding to the changed distance.
- the method further includes:
- step S401 it is determined whether the position of the base point of the rendering perspective changes
- step S402 the second spatial distance between the changed rendering perspective base point and the three-dimensional element to be rendered is acquired
- the position of the changed base point of the rendering perspective in the preset three-dimensional space and the position of the three-dimensional element to be rendered in the preset three-dimensional space can be obtained, and the second spatial distance between the changed base point of the rendering perspective and the three-dimensional element to be rendered can then be calculated from these two positions.
- step S403 determine the second rendering accuracy for rendering the three-dimensional element to be rendered according to the second spatial distance
- in the correspondence between the spatial distance and the rendering accuracy, the rendering accuracy corresponding to the second spatial distance may be searched for and used as the second rendering accuracy.
- alternatively, the spatial distance interval in which the second spatial distance is located may be determined; then, in the correspondence between the spatial distance interval and the rendering accuracy, the rendering accuracy corresponding to that interval is searched for and used as the second rendering accuracy.
- step S404 the three-dimensional elements to be rendered are re-rendered according to the second rendering accuracy.
- the rendering material corresponding to the second rendering accuracy can be found, and then the rendering material can be rendered on the screen according to the rendering material.
- the user may adjust the position of the base point of the rendering perspective in real time when viewing the rendered three-dimensional element to be rendered.
- the scene viewed by the user will change, for example, increasing or decreasing the distance between the base point of the rendering perspective and certain rendering elements.
- if the adjusted rendering perspective base point is closer to the rendered element, the adjustment is usually made to take a closer look at the three-dimensional element to be rendered.
- to save system resources in this case, the method further includes determining whether the second spatial distance is less than the first spatial distance, and re-determining the rendering accuracy only when it is.
- Fig. 5 is a block diagram showing a rendering device according to an exemplary embodiment. As shown in Fig. 5, the device includes:
- the first determining module 11 is used to determine the base point of the rendering perspective
- the first obtaining module 12 is used to obtain three-dimensional elements to be rendered
- the second acquiring module 13 is configured to acquire the first spatial distance between the base point of the rendering perspective and the three-dimensional element to be rendered;
- the second determining module 14 is configured to determine the first rendering accuracy for rendering the three-dimensional element to be rendered according to the first spatial distance;
- the first rendering module 15 is configured to render the three-dimensional element to be rendered according to the first rendering accuracy.
- the second determining module 14 includes:
- the first searching unit is configured to search for the rendering accuracy corresponding to the first spatial distance in the correspondence between the spatial distance and the rendering accuracy, and use it as the first rendering accuracy.
- the second determining module 14 includes:
- the first determining unit is configured to determine the spatial distance interval in which the first spatial distance is located in the correspondence between the spatial distance interval and the rendering accuracy;
- the second search unit is configured to search for the rendering accuracy corresponding to the spatial distance interval in the correspondence between the spatial distance interval and the rendering accuracy, and use it as the first rendering accuracy.
- the first obtaining module 12 includes:
- the second determining unit is configured to determine the three-dimensional space to be rendered in the preset three-dimensional space
- the first acquiring unit is configured to acquire a three-dimensional element located in the three-dimensional space to be rendered, and use it as the three-dimensional element to be rendered.
- the first acquiring unit includes:
- the acquiring subunit, configured to acquire the space identifier of the three-dimensional space to be rendered;
- the searching subunit, configured to look up, in the correspondence between space identifiers of three-dimensional spaces and the three-dimensional elements located in those spaces, the three-dimensional element corresponding to the space identifier.
- the device further includes:
- the third determining module is used to determine whether the position of the base point of the rendering perspective has changed
- the third acquiring module is configured to acquire the second spatial distance between the changed rendering perspective base point and the three-dimensional element to be rendered if the position of the rendering perspective base point changes;
- a fourth determining module configured to determine, according to the second spatial distance, a second rendering accuracy for rendering the three-dimensional element to be rendered
- the second rendering module is configured to re-render the 3D elements to be rendered according to the second rendering accuracy.
- the device further includes:
- a fifth determining module configured to determine whether the second spatial distance is less than the first spatial distance
- the fourth determining module is further configured to determine a second rendering accuracy for rendering the three-dimensional element to be rendered according to the second spatial distance if the second spatial distance is less than the first spatial distance.
- the first spatial distance between the rendering perspective base point and the three-dimensional element to be rendered can be used to determine the first rendering accuracy for rendering the element, which is then rendered according to that accuracy.
- the rendering accuracy can be flexibly configured according to the spatial distance between the base point of the rendering perspective and the three-dimensional element to be rendered, so that the user can see the details of the real appearance of the three-dimensional element to be rendered, or reduce the amount of calculation to save system resources.
- when the first spatial distance is smaller, the three-dimensional element to be rendered can be rendered at higher precision, so that the user can clearly see the details of its true appearance.
- conversely, when the first spatial distance is larger, the volume of the rendered three-dimensional element that the user sees is smaller;
- even if the three-dimensional element to be rendered is rendered at higher precision, it is not easy for the user to make out the details of its true appearance.
- that is, whether the three-dimensional element is rendered at the higher or the lower rendering accuracy makes no substantial difference to the user, so there is no need to render every detail of the element's true appearance during rendering; the element can therefore be rendered at the lower accuracy, saving system resources.
- Fig. 6 is a block diagram showing a rendering device according to an exemplary embodiment. As shown in Fig. 6, the device includes:
- the sixth determining module 21 is used to determine the base point of the rendering perspective
- the fourth obtaining module 22 is used to obtain the elements to be rendered
- the fifth acquiring module 23 is configured to acquire the first spatial distance between the base point of the rendering perspective and the element to be rendered;
- the seventh determining module 24 is configured to determine the first rendering accuracy for rendering the element to be rendered according to the first spatial distance
- the second rendering module 25 is configured to render the element to be rendered according to the first rendering accuracy.
- the first rendering accuracy for rendering the element to be rendered can be determined according to the first spatial distance between the rendering perspective base point and the element, and the element can then be rendered according to the first rendering accuracy.
- the rendering accuracy can be flexibly configured according to the spatial distance between the base point of the rendering perspective and the element to be rendered, so that the user can see the details of the true appearance of the element to be rendered, or reduce the amount of calculation to save system resources.
- the element to be rendered can be rendered with higher accuracy, so that the user can see the details of the real appearance of the element to be rendered clearly.
- when the first spatial distance is larger, the volume of the rendered element that the user sees is smaller;
- even if the element to be rendered is rendered at higher precision, it is not easy for the user to see the details of its true appearance;
- in that case there is no substantial difference between rendering the element at a higher rendering accuracy and at a lower one, so there is no need to render every detail of the element's true appearance during rendering; the element can therefore be rendered at a lower precision, saving system resources.
- an embodiment of the present application also provides a non-volatile readable storage medium that stores one or more modules (programs); when applied to a device, the one or more modules cause the device to execute instructions for the method steps in the embodiments of this application.
- the embodiments of the present application provide one or more machine-readable media on which instructions are stored; when executed by one or more processors, the instructions cause an electronic device to execute the rendering method described in one or more of the above embodiments.
- the electronic device includes a server, a gateway, a sub-device, etc., and the sub-device is a device such as an IoT device.
- the embodiments disclosed in this specification can be implemented as a device that uses any appropriate hardware, firmware, software, or any combination thereof to perform a desired configuration.
- the device can include electronic equipment such as a server (cluster) or terminal equipment such as IoT devices.
- Figure 7 schematically illustrates an exemplary apparatus 1300 that can be used to implement the various embodiments described in this application.
- FIG. 7 shows an exemplary apparatus 1300 having one or more processors 1302, a control module (chipset) 1304 coupled to at least one of the processor(s) 1302, memory 1306 coupled to the control module 1304, non-volatile memory (NVM)/storage 1308 coupled to the control module 1304, one or more input/output devices 1310 coupled to the control module 1304, and a network interface 1312 coupled to the control module 1304.
- the processor 1302 may include one or more single-core or multi-core processors, and the processor 1302 may include any combination of general-purpose processors or special-purpose processors (such as graphics processors, application processors, baseband processors, etc.).
- the apparatus 1300 can be used as a server device such as a gateway or a controller in the embodiments of the present application.
- the apparatus 1300 may include one or more computer-readable media (for example, the memory 1306 or the NVM/storage 1308) having instructions 1314, and one or more processors 1302 coupled to the one or more computer-readable media and configured to execute the instructions 1314 to implement modules that perform the actions described in the embodiments of this specification.
- the control module 1304 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 1302 and/or to any suitable device or component in communication with the control module 1304.
- the control module 1304 may include a memory controller module to provide an interface to the memory 1306.
- the memory controller module may be a hardware module, a software module, and/or a firmware module.
- the memory 1306 may be used to load and store data and/or instructions 1314 for the device 1300, for example.
- the memory 1306 may include any suitable volatile memory, such as a suitable DRAM.
- the memory 1306 may include a double data rate type quad synchronous dynamic random access memory (DDR4 SDRAM).
- control module 1304 may include one or more input/output controllers to provide interfaces to the NVM/storage device 1308 and the input/output device(s) 1310.
- NVM/storage device 1308 may be used to store data and/or instructions 1314.
- NVM/storage device 1308 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more hard disk drives (HDD), one or more compact disc (CD) drives and/or one or more digital versatile disc (DVD) drives).
- the NVM/storage device 1308 may include storage resources that are physically part of the device on which the apparatus 1300 is installed, or it may be accessible by the device without necessarily being a part of it. For example, the NVM/storage device 1308 may be accessed over a network via the input/output device(s) 1310.
- the input/output device(s) 1310 may provide an interface for the apparatus 1300 to communicate with any other suitable devices.
- the input/output device 1310 may include communication components, audio components, sensor components, and the like.
- the network interface 1312 can provide an interface for the device 1300 to communicate through one or more networks.
- the device 1300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, for example accessing a wireless network based on a communication standard such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof.
- At least one of the processor(s) 1302 may be packaged with the logic of one or more controllers (eg, memory controller modules) of the control module 1304.
- at least one of the processor(s) 1302 may be packaged with the logic of one or more controllers of the control module 1304 to form a system in package (SiP).
- at least one of the processor(s) 1302 may be integrated on the same die with the logic of one or more controllers of the control module 1304.
- at least one of the processor(s) 1302 may be integrated on the same die with the logic of one or more controllers of the control module 1304 to form a system on chip (SoC).
- the apparatus 1300 may be, but is not limited to, a terminal device such as a server, a desktop computing device, or a mobile computing device (for example, a laptop computing device, a handheld computing device, a tablet computer, a netbook, etc.).
- the device 1300 may have more or fewer components and/or different architectures.
- the device 1300 includes one or more cameras, a keyboard, a liquid crystal display (LCD) screen (including a touchscreen display), a non-volatile memory port, multiple antennas, a graphics chip, an application-specific integrated circuit (ASIC), and a speaker.
- An embodiment of the present application provides an electronic device, including: one or more processors; and one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the processors to execute the rendering method described in one or more of the embodiments of the present application.
- since the device embodiments are basically similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
- These computer program instructions can also be stored in a computer-readable memory that can guide a computer or other programmable data processing terminal equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including the instruction device.
- the instruction device implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
Abstract
A rendering method and device. When rendering an element to be rendered, a first rendering accuracy for rendering the element can be determined according to the first spatial distance between the rendering perspective base point and the element to be rendered, and the element is then rendered at the first rendering accuracy. The rendering accuracy is thus configured flexibly according to the spatial distance between the rendering perspective base point and the element to be rendered, so that the user can see the details of the element's true appearance clearly, or the amount of computation is reduced to save system resources.
Description
This application claims priority to Chinese Patent Application No. 201910390011.8, entitled "Rendering Method and Device" and filed on May 10, 2019, the entire contents of which are incorporated herein by reference.
This application relates to the field of computer technology, and in particular to a rendering method and device.
With the development of large three-dimensional digital screens, more and more users view city maps and the like through such screens. On a large 3D digital screen, elements of a city map such as rivers, roads, and buildings are rendered in three dimensions, so that the user sees a stereoscopic city map and obtains a more realistic viewing experience.
To guarantee the quality of the displayed picture, a large 3D digital screen often renders the elements of the entire city map at high resolution. This rendering approach, however, is computationally expensive and consumes considerable system resources.
Summary of the Invention
To solve the above technical problem, the embodiments of this application provide a rendering method and device.
In a first aspect, an embodiment of this application provides a rendering method, the method comprising:
determining a rendering perspective base point;
acquiring a three-dimensional element to be rendered;
acquiring a first spatial distance between the rendering perspective base point and the three-dimensional element to be rendered;
determining, according to the first spatial distance, a first rendering accuracy for rendering the three-dimensional element to be rendered;
rendering the three-dimensional element to be rendered according to the first rendering accuracy.
In an optional implementation, determining the rendering accuracy for rendering the three-dimensional element to be rendered according to the first spatial distance comprises:
in a correspondence between spatial distances and rendering accuracies, looking up the rendering accuracy corresponding to the first spatial distance and using it as the first rendering accuracy.
In an optional implementation, determining the rendering accuracy for rendering the three-dimensional element to be rendered according to the first spatial distance comprises:
in a correspondence between spatial distance intervals and rendering accuracies, determining the spatial distance interval in which the first spatial distance lies;
in the correspondence between spatial distance intervals and rendering accuracies, looking up the rendering accuracy corresponding to that spatial distance interval and using it as the first rendering accuracy.
In an optional implementation, acquiring the three-dimensional element to be rendered comprises:
determining a three-dimensional space to be rendered within a preset three-dimensional space;
acquiring the three-dimensional elements located in the three-dimensional space to be rendered and using them as the three-dimensional elements to be rendered.
In an optional implementation, acquiring the three-dimensional elements located in the three-dimensional space to be rendered comprises:
acquiring a space identifier of the three-dimensional space to be rendered;
in a correspondence between space identifiers of three-dimensional spaces and the three-dimensional elements located in those spaces, looking up the three-dimensional elements corresponding to the space identifier.
In an optional implementation, the method further comprises:
determining whether the position of the rendering perspective base point has changed;
if the position of the rendering perspective base point has changed, acquiring a second spatial distance between the changed rendering perspective base point and the three-dimensional element to be rendered;
determining, according to the second spatial distance, a second rendering accuracy for rendering the three-dimensional element to be rendered;
re-rendering the three-dimensional element to be rendered according to the second rendering accuracy.
In an optional implementation, the method further comprises:
determining whether the second spatial distance is less than the first spatial distance;
if the second spatial distance is less than the first spatial distance, performing the step of determining, according to the second spatial distance, the second rendering accuracy for rendering the three-dimensional element to be rendered.
In a second aspect, an embodiment of this application provides a rendering method, the method comprising:
determining a rendering perspective base point;
acquiring an element to be rendered;
acquiring a first spatial distance between the rendering perspective base point and the element to be rendered;
determining, according to the first spatial distance, a first rendering accuracy for rendering the element to be rendered;
rendering the element to be rendered according to the first rendering accuracy.
In a third aspect, an embodiment of this application provides a rendering device, the device comprising:
a first determining module, configured to determine a rendering perspective base point;
a first acquiring module, configured to acquire a three-dimensional element to be rendered;
a second acquiring module, configured to acquire a first spatial distance between the rendering perspective base point and the three-dimensional element to be rendered;
a second determining module, configured to determine, according to the first spatial distance, a first rendering accuracy for rendering the three-dimensional element to be rendered;
a first rendering module, configured to render the three-dimensional element to be rendered according to the first rendering accuracy.
In an optional implementation, the second determining module comprises:
a first searching unit, configured to look up, in a correspondence between spatial distances and rendering accuracies, the rendering accuracy corresponding to the first spatial distance and use it as the first rendering accuracy.
In an optional implementation, the second determining module comprises:
a first determining unit, configured to determine, in a correspondence between spatial distance intervals and rendering accuracies, the spatial distance interval in which the first spatial distance lies;
a second searching unit, configured to look up, in the correspondence between spatial distance intervals and rendering accuracies, the rendering accuracy corresponding to that spatial distance interval and use it as the first rendering accuracy.
In an optional implementation, the first acquiring module comprises:
a second determining unit, configured to determine a three-dimensional space to be rendered within a preset three-dimensional space;
a first acquiring unit, configured to acquire the three-dimensional elements located in the three-dimensional space to be rendered and use them as the three-dimensional elements to be rendered.
In an optional implementation, the first acquiring unit comprises:
an acquiring subunit, configured to acquire a space identifier of the three-dimensional space to be rendered;
a searching subunit, configured to look up, in a correspondence between space identifiers of three-dimensional spaces and the three-dimensional elements located in those spaces, the three-dimensional elements corresponding to the space identifier.
In an optional implementation, the device further comprises:
a third determining module, configured to determine whether the position of the rendering perspective base point has changed;
a third acquiring module, configured to acquire, if the position of the rendering perspective base point has changed, a second spatial distance between the changed rendering perspective base point and the three-dimensional element to be rendered;
a fourth determining module, configured to determine, according to the second spatial distance, a second rendering accuracy for rendering the three-dimensional element to be rendered;
a second rendering module, configured to re-render the three-dimensional element to be rendered according to the second rendering accuracy.
In an optional implementation, the device further comprises:
a fifth determining module, configured to determine whether the second spatial distance is less than the first spatial distance;
the fourth determining module is further configured to determine, if the second spatial distance is less than the first spatial distance, the second rendering accuracy for rendering the three-dimensional element to be rendered according to the second spatial distance.
In a fourth aspect, an embodiment of this application provides a rendering device, the device comprising:
a sixth determining module, configured to determine a rendering perspective base point;
a fourth acquiring module, configured to acquire an element to be rendered;
a fifth acquiring module, configured to acquire a first spatial distance between the rendering perspective base point and the element to be rendered;
a seventh determining module, configured to determine, according to the first spatial distance, a first rendering accuracy for rendering the element to be rendered;
a second rendering module, configured to render the element to be rendered according to the first rendering accuracy.
In a fifth aspect, an embodiment of this application provides an electronic device, the electronic device comprising:
a processor; and
a memory having executable code stored thereon which, when executed, causes the processor to execute the rendering method according to the first aspect.
In a sixth aspect, an embodiment of this application provides one or more machine-readable media having executable code stored thereon which, when executed, causes a processor to execute the rendering method according to the first aspect.
In a seventh aspect, an embodiment of this application provides an electronic device, the electronic device comprising:
a processor; and
a memory having executable code stored thereon which, when executed, causes the processor to execute the rendering method according to the second aspect.
In an eighth aspect, an embodiment of this application provides one or more machine-readable media having executable code stored thereon which, when executed, causes a processor to execute the rendering method according to the second aspect.
Compared with the prior art, the embodiments of this application have the following advantages:
With this application, when rendering a three-dimensional element to be rendered, a first rendering accuracy for rendering the element can be determined according to the first spatial distance between the rendering perspective base point and the element, and the element is then rendered at the first rendering accuracy. The rendering accuracy is thus configured flexibly according to the spatial distance between the rendering perspective base point and the element to be rendered, so that the user can see the details of the element's true appearance clearly, or the amount of computation is reduced to save system resources.
For example, the smaller the first spatial distance, the higher the accuracy at which the three-dimensional element can be rendered, so that the user can clearly see the details of its true appearance. Conversely, the larger the first spatial distance, the smaller the rendered element appears to the user; even if it were rendered at higher accuracy, the user could not easily make out the details of its true appearance. In that case there is no substantial difference to the user between rendering at a higher accuracy and rendering at a lower one, so there is no need to render every detail of the element's true appearance, and the element can be rendered at a lower accuracy, saving system resources.
Fig. 1 is a flowchart of a rendering method according to an exemplary embodiment;
Fig. 2 is a flowchart of a rendering method according to an exemplary embodiment;
Fig. 3 is a flowchart of a method for acquiring three-dimensional elements according to an exemplary embodiment;
Fig. 4 is a flowchart of a rendering method according to an exemplary embodiment;
Fig. 5 is a block diagram of a rendering device according to an exemplary embodiment;
Fig. 6 is a block diagram of a rendering device according to an exemplary embodiment;
Fig. 7 is a block diagram of a rendering device according to an exemplary embodiment.
To make the above objects, features, and advantages of this application clearer and easier to understand, the application is described in further detail below with reference to the drawings and specific implementations.
Fig. 1 is a flowchart of a rendering method according to an exemplary embodiment. As shown in Fig. 1, the method is used in an electronic device such as a large screen or a virtual reality device, and comprises the following steps.
In step S101, a rendering perspective base point is determined;
In step S102, an element to be rendered is acquired;
In step S103, a first spatial distance between the rendering perspective base point and the element to be rendered is acquired;
In step S104, a first rendering accuracy for rendering the element to be rendered is determined according to the first spatial distance;
In this application, different spatial distances correspond to different rendering accuracies; or different sets of spatial distances correspond to different rendering accuracies; or different spatial distance intervals correspond to different rendering accuracies; and so on.
A set of spatial distances may include multiple spatial distances, for example multiple discontinuous spatial distances. A spatial distance interval may include multiple spatial distances, for example multiple continuous spatial distances.
In step S105, the element to be rendered is rendered according to the first rendering accuracy.
With this application, when rendering an element to be rendered, a first rendering accuracy for rendering the element can be determined according to the first spatial distance between the rendering perspective base point and the element, and the element is then rendered at the first rendering accuracy. The rendering accuracy is thus configured flexibly according to the spatial distance between the rendering perspective base point and the element to be rendered, so that the user can see the details of the element's true appearance clearly, or the amount of computation is reduced to save system resources.
For example, the smaller the first spatial distance, the higher the accuracy at which the element can be rendered, so that the user can clearly see the details of its true appearance. Conversely, the larger the first spatial distance, the smaller the rendered element appears to the user; even at higher accuracy the user could not easily make out the details of its true appearance, so higher and lower rendering accuracies make no substantial difference to the user. The element's true appearance therefore need not be fully rendered, and the element can be rendered at a lower accuracy, saving system resources.
The element to be rendered in the embodiment shown in Fig. 1 includes elements of various dimensionalities, for example one-dimensional, two-dimensional, and three-dimensional elements; that is, the method of the embodiment shown in Fig. 1 can be applied to elements of any dimensionality.
In the following embodiments, this application takes a three-dimensional element as an example of the element to be rendered; this is illustrative only and does not limit the scope of protection of this application, for example it does not limit the number of dimensions of the elements in this application.
Fig. 2 is a flowchart of a rendering method according to an exemplary embodiment. As shown in Fig. 2, the method is used in an electronic device such as a virtual reality device or a large screen, and comprises the following steps.
In step S201, a rendering perspective base point is determined;
In this application, a user can view three-dimensional elements in a preset three-dimensional space through an electronic device such as a VR (Virtual Reality) device or a three-dimensional screen.
In this application, the preset three-dimensional space contains many rendered elements. For example, if the three-dimensional space is a three-dimensional map of a city, the map contains the buildings, rivers, trees, roads, and so on of the city's counties/districts.
In this application, the rendering perspective base point can be located in the preset three-dimensional space. When viewing the three-dimensional elements in the preset three-dimensional space, a position in that space can be chosen from which to view them; that position is the position of the rendering perspective base point.
For example, when viewing a three-dimensional map of Beijing, if the user wants to observe Zhongguancun from the location of the National Library, then the location of the National Library is the rendering perspective base point, and the three-dimensional elements located in Zhongguancun are the three-dimensional elements to be rendered in this application.
In step S202, the three-dimensional elements to be rendered are acquired;
In one embodiment of this application, all three-dimensional elements in the preset three-dimensional space may be rendered; in that case, all three-dimensional elements in the preset three-dimensional space can be acquired and used as the three-dimensional elements to be rendered.
However, sometimes the user does not need to view all the three-dimensional elements in the preset three-dimensional space but only some of them. To save system resources, only the elements the user needs to view may be rendered, and the other elements in the preset three-dimensional space need not be. Therefore, in another embodiment of this application, the elements the user needs to view can be acquired according to the user's actual needs and used as the three-dimensional elements to be rendered. The specific acquisition flow is described in the embodiment shown in Fig. 3 and is not detailed here.
In step S203, a first spatial distance between the rendering perspective base point and the three-dimensional element to be rendered is acquired;
In this application, the rendering perspective base point has a position in the preset three-dimensional space, for example longitude-latitude coordinates, and every three-dimensional element in the preset three-dimensional space also has its own fixed position, for example longitude-latitude coordinates.
Therefore, in this step, the position of the rendering perspective base point in the preset three-dimensional space and the position of the three-dimensional element to be rendered in that space can be obtained, and the first spatial distance between the base point and the element can be calculated from the two positions.
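As a minimal sketch of the distance calculation in step S203 (in Python; the patent does not fix a coordinate system or metric, so Cartesian coordinates and Euclidean distance are assumed here):

```python
import math

def spatial_distance(base_point, element_position):
    """Euclidean distance between the rendering perspective base point
    and a 3D element, both given as (x, y, z) coordinates."""
    return math.sqrt(sum((b - e) ** 2
                         for b, e in zip(base_point, element_position)))

# Illustrative positions, not taken from the patent.
first_distance = spatial_distance((0.0, 0.0, 0.0), (3.0, 4.0, 0.0))  # 5.0
```

In a deployment using longitude-latitude coordinates, as the text suggests, a geodesic distance would be substituted for the Euclidean one.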
In step S204, a first rendering accuracy for rendering the three-dimensional element to be rendered is determined according to the first spatial distance;
In this application, different spatial distances correspond to different rendering accuracies; or different sets of spatial distances correspond to different rendering accuracies; or different spatial distance intervals correspond to different rendering accuracies; and so on.
A set of spatial distances may include multiple spatial distances, for example multiple discontinuous spatial distances. A spatial distance interval may include multiple spatial distances, for example multiple continuous spatial distances.
In one embodiment of this application, for any spatial distance, staff can manually assess in advance the rendering accuracy suited to that distance, form a table entry from the distance and its suited rendering accuracy, and store it in the correspondence between spatial distances and rendering accuracies.
For example, if the spatial distance is large, the perspective base point is far from the three-dimensional element, and the element occupies a small volume in the picture the user sees. Even if the element's true appearance were fully rendered, the user could not see all its details clearly; that is, whether the element is rendered at a higher or a lower rendering accuracy makes no substantial difference to the user. There is therefore no need to fully render the element's true appearance, so to save system resources a lower rendering accuracy can be used as the accuracy suited to that distance.
Conversely, if the spatial distance is small, the perspective base point is close to the element, the element occupies a large volume in the picture, and the user often needs to see all the details of its true appearance clearly. For the user to do so, the element's true appearance must be fully rendered, so a higher rendering accuracy can be used as the accuracy suited to that distance.
The same operation is performed for every other spatial distance.
In the correspondence between spatial distances and rendering accuracies, the spatial distance is inversely related to the rendering accuracy: the larger the distance, the lower the corresponding accuracy, and the smaller the distance, the higher the corresponding accuracy.
Thus, in this step, the rendering accuracy corresponding to the first spatial distance can be looked up in the correspondence between spatial distances and rendering accuracies and used as the first rendering accuracy.
In practice, however, there are many possible spatial distances, for example 0.5 km, 0.8 km, 1 km, 1.5 km, 2 km, and so on. Even if at least two different distances share the same rendering accuracy, the correspondence between spatial distances and rendering accuracies contains one table entry per distance, which occupies considerable storage space.
Therefore, to save storage space, in another embodiment of this application staff can divide out multiple spatial distance intervals in advance, for example (0, 1 km) as one interval, (1 km, 2 km) as another, (2 km, 3 km) as another, and so on.
For any spatial distance interval, staff can manually assess the rendering accuracy suited to that interval, form a table entry from the interval and its suited accuracy, and store it in the correspondence between spatial distance intervals and rendering accuracies.
For example, if the spatial distance interval covers large distances, the perspective base point is far from the element, the element appears small, and higher accuracy brings the user no substantial benefit, so to save system resources a lower rendering accuracy can be used as the accuracy suited to that interval.
Conversely, if the interval covers small distances, the base point is close to the element, the element appears large, and the user often needs to see all the details of its true appearance clearly, so a higher rendering accuracy can be used as the accuracy suited to that interval.
The same applies to every other spatial distance interval.
In the correspondence between spatial distance intervals and rendering accuracies, the interval is inversely related to the accuracy: intervals of larger distances correspond to lower accuracies, and intervals of smaller distances to higher accuracies.
Thus, in this step, the spatial distance interval in which the first spatial distance lies can be determined in the correspondence between spatial distance intervals and rendering accuracies; then the rendering accuracy corresponding to that interval is looked up in the same correspondence and used as the first rendering accuracy.
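The interval-based lookup above can be sketched in Python as follows; the interval bounds and accuracy labels are illustrative, not values from the patent:

```python
# Spatial distance intervals (in km) mapped to rendering accuracies;
# per the inverse relation, larger distances map to lower accuracy.
ACCURACY_BY_INTERVAL = [
    ((0.0, 1.0), "high"),
    ((1.0, 2.0), "medium"),
    ((2.0, float("inf")), "low"),
]

def rendering_accuracy(distance_km):
    """Return the rendering accuracy of the interval containing distance_km."""
    for (low, high), accuracy in ACCURACY_BY_INTERVAL:
        if low <= distance_km < high:
            return accuracy
    raise ValueError("no interval covers this distance")
```

Storing one entry per interval rather than one entry per distance is exactly the storage saving the text describes.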
In step S205, the three-dimensional element to be rendered is rendered according to the first rendering accuracy.
In this application, any three-dimensional element in the preset three-dimensional space can be rendered using its rendering material. If the element needs to be rendered at a higher rendering accuracy, it can be rendered from the objectively existing, true rendering material of the element; if it needs to be rendered at a lower accuracy, part of that true rendering material can be selected, and the element rendered from the selected part.
For example, suppose the element is a building whose rendering material includes windows, entrance doors, and the floor edge lines on the building's outer surface; the true numbers of windows, doors, and floor edge lines of the building are fixed. When rendering the building at a higher rendering accuracy, its objectively existing rendering material can be used; when rendering it at a lower accuracy, some of the windows, doors, and floor edge lines can be selected from the true rendering material, and the element rendered from the selected part.
Therefore, the rendering material suited to an element differs with the rendering accuracy. For any rendering accuracy, that accuracy and the element's rendering material suited to it can form a table entry stored in the element's correspondence between rendering accuracies and rendering materials. The same applies to every other rendering accuracy, and to every other three-dimensional element in the preset three-dimensional space.
Thus, the rendering material corresponding to the first rendering accuracy can be looked up in the element's correspondence between rendering accuracies and rendering materials, and the three-dimensional element to be rendered is then rendered from that material.
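The per-element accuracy-to-material correspondence can be sketched as a simple table; the element (a building) and the material counts are illustrative, not values from the patent:

```python
# Per-element correspondence between rendering accuracy and rendering
# material, sketched for a single building: at "low" accuracy only part
# of the true material (windows, doors, floor edge lines) is kept.
BUILDING_MATERIALS = {
    "high": {"windows": 120, "doors": 4, "floor_edges": 30},
    "low":  {"windows": 24,  "doors": 1, "floor_edges": 6},
}

def material_for(accuracy, materials=BUILDING_MATERIALS):
    """Look up the rendering material corresponding to a rendering accuracy."""
    return materials[accuracy]
```

In a real engine the table values would be mesh/texture assets rather than counts, but the lookup step is the same.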
With this application, when rendering a three-dimensional element to be rendered, a first rendering accuracy for rendering the element can be determined according to the first spatial distance between the rendering perspective base point and the element, and the element is then rendered at the first rendering accuracy. The rendering accuracy is thus configured flexibly according to the spatial distance between the rendering perspective base point and the element, so that the user can see the details of the element's true appearance clearly, or the amount of computation is reduced to save system resources.
For example, the smaller the first spatial distance, the higher the accuracy at which the three-dimensional element can be rendered, so that the user can clearly see the details of its true appearance. Conversely, the larger the first spatial distance, the smaller the rendered element appears to the user; even at higher accuracy the user could not easily make out the details of its true appearance, so higher and lower rendering accuracies make no substantial difference to the user. The element's true appearance therefore need not be fully rendered, and the element can be rendered at a lower accuracy, saving system resources.
In another embodiment of this application, when acquiring the three-dimensional elements to be rendered, to save system resources the elements the user needs to view can be acquired according to the user's actual needs and used as the elements to be rendered. Referring to Fig. 3, the specific flow includes:
In step S301, a three-dimensional space to be rendered is determined within the preset three-dimensional space;
In this application, the preset three-dimensional space can be divided into multiple three-dimensional spaces in advance. For example, if the preset space is a three-dimensional map of Beijing, it includes three-dimensional maps of Haidian District, Chaoyang District, Fengtai District, Xicheng District, and so on. The user can designate at least one of these spaces in the electronic device as the space to be rendered, and the electronic device acquires the designated space. For example, if the user only needs to view the map of Haidian District, the user can designate it in the electronic device, and the device acquires the designated map of Haidian District as the map to be rendered.
Alternatively, based on the position of the rendering perspective base point in the preset three-dimensional space and the electronic device's field of view and viewing direction, the electronic device can determine the region visible from the base point and use it as the three-dimensional space to be rendered.
For the three-dimensional space outside that region, since the user does not need to view the elements there, those elements need not be rendered, and that space need not be used as the space to be rendered.
In step S302, the three-dimensional elements located in the space to be rendered are acquired and used as the elements to be rendered.
In this application, in one embodiment, the position range covered by the space to be rendered is determined, the position of each three-dimensional element is acquired, and the elements within that range are found from their positions and used as the elements to be rendered located in the space to be rendered.
However, a preset three-dimensional space often contains very many elements; for example, a city's three-dimensional map contains tens of thousands of buildings, trees, and so on, each with its own position in the map. Acquiring every element's position and then searching by position for the elements within the range would take a great deal of time and reduce rendering efficiency.
Therefore, to save system resources and improve rendering efficiency, the preset space can be divided in advance into multiple spaces, for example taking a residential community, an area enclosed by several adjacent roads, or an administrative district/county or township/town as one space. For any of these spaces, the elements located in it can be determined, a space identifier generated for it, and the identifier and the elements in that space formed into a table entry stored in the correspondence between space identifiers of three-dimensional spaces and the three-dimensional elements located in those spaces. The same applies to every other space in the preset three-dimensional space.
Different three-dimensional spaces have different space identifiers.
Thus, in this step, the space identifier of the space to be rendered can be acquired, and the elements corresponding to that identifier can be looked up in the correspondence between space identifiers and elements and used as the elements to be rendered. The position of each element then need not be acquired, nor need elements be searched by position, saving system resources and improving rendering efficiency.
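The identifier-based lookup described above can be sketched as a dictionary lookup; the space identifiers and element names are illustrative, not from the patent:

```python
# Correspondence between space identifiers and the 3D elements located
# in each pre-divided space (e.g. one entry per district or community).
ELEMENTS_BY_SPACE_ID = {
    "haidian":  ["building_a", "river_b", "road_c"],
    "chaoyang": ["building_d", "tree_e"],
}

def elements_to_render(space_id):
    """Return the elements of the space to be rendered by a single
    dictionary lookup, instead of scanning every element's position."""
    return ELEMENTS_BY_SPACE_ID.get(space_id, [])
```

The design choice is the one the text motivates: the per-space index is built once offline, so at render time a space identifier replaces a position scan over tens of thousands of elements.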
In this application, after the three-dimensional element to be rendered has been rendered at the first rendering accuracy, the user viewing it may adjust the position of the rendering perspective base point in real time as needed. After the adjustment, the scene the user sees changes; for example, the distance between the base point and certain rendered elements increases or decreases.
For example, if the user needs to look closely at the details of a distant rendered element, the base point position can be adjusted so that the adjusted base point is closer to that element. Since the electronic device's field of view is fixed, the user then sees the element at a larger size and can examine its details closely.
Alternatively, to see a wider range of the three-dimensional space around a rendered element, the base point position can be adjusted so that the adjusted base point is farther from that element; since the electronic device's field of view is fixed, the user can then view a larger range of the three-dimensional space.
Therefore, after the base point position is adjusted, the distance between the element to be rendered and the adjusted base point may change. If it does, then to adapt the element's rendering accuracy to the changed distance, the accuracy can be adjusted, for example by re-rendering the element at the rendering accuracy corresponding to the changed distance.
Specifically, referring to Fig. 4, the method further includes:
In step S401, it is determined whether the position of the rendering perspective base point has changed;
If the position of the rendering perspective base point has changed, in step S402, a second spatial distance between the changed base point and the element to be rendered is acquired;
In this step, the position of the changed rendering perspective base point in the preset three-dimensional space and the position of the element to be rendered in that space can be obtained, and the second spatial distance between the changed base point and the element calculated from the two positions.
In step S403, a second rendering accuracy for rendering the element to be rendered is determined according to the second spatial distance;
In one embodiment of this application, the rendering accuracy corresponding to the second spatial distance can be looked up in the correspondence between spatial distances and rendering accuracies and used as the second rendering accuracy.
Alternatively, in another embodiment, the spatial distance interval in which the second spatial distance lies can be determined in the correspondence between spatial distance intervals and rendering accuracies; the accuracy corresponding to that interval is then looked up in the same correspondence and used as the second rendering accuracy.
In step S404, the element to be rendered is re-rendered according to the second rendering accuracy.
In this step, the rendering material corresponding to the second rendering accuracy can be looked up in the element's correspondence between rendering accuracies and rendering materials, and the element is then rendered on the screen from that material.
However, after rendering at the first rendering accuracy, the user viewing the rendered element may adjust the base point position in real time as needed. After the adjustment, the scene the user sees changes; for example, the distance between the base point and certain rendered elements increases or decreases.
If the adjusted base point position is closer to the rendered element, the adjustment is usually made to examine the element to be rendered closely.
If, however, the adjusted base point position is farther from the element, it is usually so that the user can view a larger range of the three-dimensional space. In that case the rendered element appears smaller, and whether or not its rendering accuracy is updated, the user cannot easily make out its details; repeated rendering is then meaningless to the user and consumes considerable system resources.
Therefore, to save system resources, on the basis of the embodiment shown in Fig. 3, in another embodiment of this application the method further includes:
determining whether the second spatial distance is less than the first spatial distance; only if the second spatial distance is less than the first is the second rendering accuracy for rendering the element determined according to the second spatial distance. If the second spatial distance is greater than or equal to the first, then to save system resources the second rendering accuracy need not be determined, and the flow can end.
It should be noted that, for simplicity of description, the method embodiments are expressed as series of action combinations; however, those skilled in the art should know that the embodiments of this application are not limited by the described order of actions, because according to the embodiments of this application some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of this application.
Fig. 5 is a block diagram of a rendering device according to an exemplary embodiment. As shown in Fig. 5, the device includes:
a first determining module 11, configured to determine a rendering perspective base point;
a first acquiring module 12, configured to acquire a three-dimensional element to be rendered;
a second acquiring module 13, configured to acquire a first spatial distance between the rendering perspective base point and the three-dimensional element to be rendered;
a second determining module 14, configured to determine, according to the first spatial distance, a first rendering accuracy for rendering the three-dimensional element to be rendered;
a first rendering module 15, configured to render the three-dimensional element to be rendered according to the first rendering accuracy.
In an optional implementation, the second determining module 14 includes:
a first searching unit, configured to look up, in a correspondence between spatial distances and rendering accuracies, the rendering accuracy corresponding to the first spatial distance and use it as the first rendering accuracy.
In an optional implementation, the second determining module 14 includes:
a first determining unit, configured to determine, in a correspondence between spatial distance intervals and rendering accuracies, the spatial distance interval in which the first spatial distance lies;
a second searching unit, configured to look up, in the correspondence between spatial distance intervals and rendering accuracies, the rendering accuracy corresponding to that interval and use it as the first rendering accuracy.
In an optional implementation, the first acquiring module 12 includes:
a second determining unit, configured to determine a three-dimensional space to be rendered within a preset three-dimensional space;
a first acquiring unit, configured to acquire the three-dimensional elements located in the space to be rendered and use them as the elements to be rendered.
In an optional implementation, the first acquiring unit includes:
an acquiring subunit, configured to acquire a space identifier of the space to be rendered;
a searching subunit, configured to look up, in a correspondence between space identifiers of three-dimensional spaces and the three-dimensional elements located in those spaces, the elements corresponding to the space identifier.
In an optional implementation, the device further includes:
a third determining module, configured to determine whether the position of the rendering perspective base point has changed;
a third acquiring module, configured to acquire, if the position of the rendering perspective base point has changed, a second spatial distance between the changed base point and the element to be rendered;
a fourth determining module, configured to determine, according to the second spatial distance, a second rendering accuracy for rendering the element to be rendered;
a second rendering module, configured to re-render the element to be rendered according to the second rendering accuracy.
In an optional implementation, the device further includes:
a fifth determining module, configured to determine whether the second spatial distance is less than the first spatial distance;
the fourth determining module is further configured to determine, if the second spatial distance is less than the first spatial distance, the second rendering accuracy for rendering the element to be rendered according to the second spatial distance.
With this application, when rendering a three-dimensional element to be rendered, a first rendering accuracy for rendering the element can be determined according to the first spatial distance between the rendering perspective base point and the element, and the element is then rendered at that accuracy. The rendering accuracy is thus configured flexibly according to the spatial distance between the rendering perspective base point and the element, so that the user can see the details of the element's true appearance clearly, or the amount of computation is reduced to save system resources.
For example, the smaller the first spatial distance, the higher the accuracy at which the three-dimensional element can be rendered, so that the user can clearly see the details of its true appearance. Conversely, the larger the first spatial distance, the smaller the rendered element appears to the user; even at higher accuracy the user could not easily make out the details of its true appearance, so higher and lower rendering accuracies make no substantial difference to the user. The element's true appearance therefore need not be fully rendered, and the element can be rendered at a lower accuracy, saving system resources.
Fig. 6 is a block diagram of a rendering device according to an exemplary embodiment. As shown in Fig. 6, the device includes:
a sixth determining module 21, configured to determine a rendering perspective base point;
a fourth acquiring module 22, configured to acquire an element to be rendered;
a fifth acquiring module 23, configured to acquire a first spatial distance between the rendering perspective base point and the element to be rendered;
a seventh determining module 24, configured to determine, according to the first spatial distance, a first rendering accuracy for rendering the element to be rendered;
a second rendering module 25, configured to render the element to be rendered according to the first rendering accuracy.
With this application, when rendering an element to be rendered, a first rendering accuracy for rendering the element can be determined according to the first spatial distance between the rendering perspective base point and the element, and the element is then rendered at that accuracy. The rendering accuracy is thus configured flexibly according to the spatial distance between the rendering perspective base point and the element, so that the user can see the details of the element's true appearance clearly, or the amount of computation is reduced to save system resources.
For example, the smaller the first spatial distance, the higher the accuracy at which the element can be rendered, so that the user can clearly see the details of its true appearance. Conversely, the larger the first spatial distance, the smaller the rendered element appears to the user; even at higher accuracy the user could not easily make out the details of its true appearance, so higher and lower rendering accuracies make no substantial difference to the user. The element's true appearance therefore need not be fully rendered, and the element can be rendered at a lower accuracy, saving system resources.
An embodiment of this application also provides a non-volatile readable storage medium storing one or more modules (programs); when applied to a device, the one or more modules cause the device to execute instructions for the method steps in the embodiments of this application.
An embodiment of this application provides one or more machine-readable media having instructions stored thereon which, when executed by one or more processors, cause an electronic device to execute the rendering method described in one or more of the above embodiments. In the embodiments of this application, the electronic device includes a server, a gateway, a sub-device, and the like; a sub-device is, for example, an IoT device.
The embodiments disclosed in this specification can be implemented as a device that uses any suitable hardware, firmware, software, or any combination thereof to achieve a desired configuration; the device may include electronic equipment such as a server (cluster) or a terminal device such as an IoT device.
Figure 7 schematically illustrates an exemplary apparatus 1300 that can be used to implement the various embodiments described in this application.
For one embodiment, FIG. 7 shows an exemplary apparatus 1300 having one or more processors 1302, a control module (chipset) 1304 coupled to at least one of the processor(s) 1302, memory 1306 coupled to the control module 1304, non-volatile memory (NVM)/storage 1308 coupled to the control module 1304, one or more input/output devices 1310 coupled to the control module 1304, and a network interface 1312 coupled to the control module 1304.
The processor 1302 may include one or more single-core or multi-core processors, and may include any combination of general-purpose and special-purpose processors (for example graphics processors, application processors, baseband processors, and so on). In some embodiments, the apparatus 1300 can serve as a server device such as the gateway or controller described in the embodiments of this application.
In some embodiments, the apparatus 1300 may include one or more computer-readable media (for example, the memory 1306 or the NVM/storage 1308) having instructions 1314, and one or more processors 1302 coupled to the one or more computer-readable media and configured to execute the instructions 1314 to implement modules that perform the actions described in the embodiments of this specification.
For one embodiment, the control module 1304 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 1302 and/or to any suitable device or component in communication with the control module 1304.
The control module 1304 may include a memory controller module to provide an interface to the memory 1306. The memory controller module may be a hardware module, a software module, and/or a firmware module.
The memory 1306 may be used, for example, to load and store data and/or instructions 1314 for the apparatus 1300. For one embodiment, the memory 1306 may include any suitable volatile memory, such as a suitable DRAM. In some embodiments, the memory 1306 may include double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the control module 1304 may include one or more input/output controllers to provide interfaces to the NVM/storage 1308 and the input/output device(s) 1310.
For example, the NVM/storage 1308 may be used to store data and/or instructions 1314. The NVM/storage 1308 may include any suitable non-volatile memory (for example, flash memory) and/or any suitable non-volatile storage device(s) (for example, one or more hard disk drives (HDDs), one or more compact disc (CD) drives, and/or one or more digital versatile disc (DVD) drives).
The NVM/storage 1308 may include storage resources that are physically part of the device on which the apparatus 1300 is installed, or it may be accessible by the device without necessarily being a part of it. For example, the NVM/storage 1308 may be accessed over a network via the input/output device(s) 1310.
The input/output device(s) 1310 may provide an interface for the apparatus 1300 to communicate with any other suitable device; the input/output devices 1310 may include communication components, audio components, sensor components, and so on. The network interface 1312 may provide an interface for the apparatus 1300 to communicate over one or more networks; the apparatus 1300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, for example accessing a wireless network based on a communication standard such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof.
For one embodiment, at least one of the processor(s) 1302 may be packaged together with the logic of one or more controllers (for example, the memory controller module) of the control module 1304. For one embodiment, at least one of the processor(s) 1302 may be packaged together with the logic of one or more controllers of the control module 1304 to form a system in package (SiP). For one embodiment, at least one of the processor(s) 1302 may be integrated on the same die with the logic of one or more controllers of the control module 1304. For one embodiment, at least one of the processor(s) 1302 may be integrated on the same die with the logic of one or more controllers of the control module 1304 to form a system on chip (SoC).
In various embodiments, the apparatus 1300 may be, but is not limited to, a terminal device such as a server, a desktop computing device, or a mobile computing device (for example, a laptop computing device, a handheld computing device, a tablet computer, a netbook, and so on). In various embodiments, the apparatus 1300 may have more or fewer components and/or a different architecture. For example, in some embodiments the apparatus 1300 includes one or more cameras, a keyboard, a liquid crystal display (LCD) screen (including a touchscreen display), a non-volatile memory port, multiple antennas, a graphics chip, an application-specific integrated circuit (ASIC), and a speaker.
An embodiment of this application provides an electronic device, including: one or more processors; and one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the processors to execute the rendering method described in one or more of the embodiments of this application.
Since the device embodiments are basically similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the same or similar parts the embodiments can be referred to one another.
The embodiments of this application are described with reference to flowcharts and/or block diagrams of the methods, terminal devices (systems), and computer program products according to the embodiments of this application. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing terminal equipment to produce a machine, so that the instructions executed by the processor produce a device for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions can also be loaded onto a computer or other programmable data processing terminal equipment, so that a series of operation steps are executed on the computer or other programmable terminal equipment to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable terminal equipment provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
Although preferred embodiments of the embodiments of this application have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn the basic inventive concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of this application.
Finally, it should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of additional identical elements in the process, method, article, or terminal device that includes the element.
The rendering method and device provided by this application have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of this application, and the descriptions of the above embodiments are only intended to help understand the method of this application and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and application scope based on the idea of this application. In summary, the content of this specification should not be construed as a limitation of this application.
Claims (20)
- A rendering method, wherein the method comprises: determining a rendering viewpoint base point; obtaining a three-dimensional element to be rendered; obtaining a first spatial distance between the rendering viewpoint base point and the three-dimensional element to be rendered; determining, according to the first spatial distance, a first rendering accuracy for rendering the three-dimensional element to be rendered; and rendering the three-dimensional element to be rendered according to the first rendering accuracy.
- The method according to claim 1, wherein the determining, according to the first spatial distance, the rendering accuracy for rendering the three-dimensional element to be rendered comprises: in a correspondence between spatial distances and rendering accuracies, searching for the rendering accuracy corresponding to the first spatial distance and using it as the first rendering accuracy.
- The method according to claim 1, wherein the determining, according to the first spatial distance, the rendering accuracy for rendering the three-dimensional element to be rendered comprises: in a correspondence between spatial distance intervals and rendering accuracies, determining the spatial distance interval in which the first spatial distance falls; and in the correspondence between spatial distance intervals and rendering accuracies, searching for the rendering accuracy corresponding to that spatial distance interval and using it as the first rendering accuracy.
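Claims 2 and 3 describe two lookup forms: a direct distance-to-accuracy correspondence and an interval-based one. As a non-authoritative sketch of the interval form of claim 3 — the interval boundaries and accuracy values below are invented for illustration and are not specified anywhere in the application:

```python
# Hypothetical correspondence between spatial distance intervals and
# rendering accuracies (here expressed as a fraction of full resolution).
# Each entry is (lower_bound, upper_bound, accuracy); the last interval
# is open-ended.
DISTANCE_INTERVALS = [
    (0.0, 100.0, 1.0),            # near the viewpoint base point: full accuracy
    (100.0, 500.0, 0.5),          # mid range: half accuracy
    (500.0, float("inf"), 0.25),  # far away: quarter accuracy
]

def first_rendering_accuracy(first_spatial_distance: float) -> float:
    """Determine which spatial distance interval the first spatial distance
    falls in, then return the rendering accuracy for that interval."""
    for lower, upper, accuracy in DISTANCE_INTERVALS:
        if lower <= first_spatial_distance < upper:
            return accuracy
    raise ValueError("spatial distance must be non-negative")
```

Under these illustrative values, an element 50 units from the viewpoint base point would be rendered at full accuracy, while one 10,000 units away would drop to a quarter — which is the resource saving the method targets.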
- The method according to claim 1, wherein the obtaining the three-dimensional element to be rendered comprises: determining a three-dimensional space to be rendered within a preset three-dimensional space; and obtaining a three-dimensional element located in the three-dimensional space to be rendered as the three-dimensional element to be rendered.
- The method according to claim 4, wherein the obtaining the three-dimensional element located in the three-dimensional space to be rendered comprises: obtaining a space identifier of the three-dimensional space to be rendered; and in a correspondence between space identifiers of three-dimensional spaces and the three-dimensional elements located in those spaces, searching for the three-dimensional element corresponding to the space identifier.
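The space-identifier lookup of claim 5 amounts to an index from sub-spaces to the elements they contain. A minimal sketch; the tile-style identifier scheme and the element names are hypothetical examples, not taken from the application:

```python
# Hypothetical correspondence between space identifiers and the
# three-dimensional elements located in each space (e.g. city-map tiles).
SPACE_INDEX = {
    "tile_12_34": ["building_001", "road_017"],
    "tile_12_35": ["building_002"],
}

def elements_to_render(space_id: str) -> list[str]:
    """Search for the three-dimensional elements corresponding to the
    space identifier of the three-dimensional space to be rendered."""
    return SPACE_INDEX.get(space_id, [])
```

A space with no registered elements simply yields an empty list, so the caller can skip rendering for it entirely.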
- The method according to claim 1, wherein the method further comprises: determining whether the position of the rendering viewpoint base point has changed; if the position of the rendering viewpoint base point has changed, obtaining a second spatial distance between the changed rendering viewpoint base point and the three-dimensional element to be rendered; determining, according to the second spatial distance, a second rendering accuracy for rendering the three-dimensional element to be rendered; and re-rendering the three-dimensional element to be rendered according to the second rendering accuracy.
- The method according to claim 6, wherein the method further comprises: determining whether the second spatial distance is smaller than the first spatial distance; and if the second spatial distance is smaller than the first spatial distance, performing the step of determining, according to the second spatial distance, the second rendering accuracy for rendering the three-dimensional element to be rendered.
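Claims 6 and 7 together handle viewpoint movement: when the base point's position changes, a second spatial distance is obtained, and — per claim 7 — the second rendering accuracy is only determined when the base point has moved closer. The sketch below assumes Euclidean distance and a stand-in `accuracy_for` lookup in the style of claim 2; all names and threshold values are illustrative, not the application's own:

```python
import math

# Hypothetical distance -> accuracy correspondence (upper bound -> accuracy).
ACCURACY_TABLE = {100.0: 1.0, 500.0: 0.5, math.inf: 0.25}

def accuracy_for(distance: float) -> float:
    """Return the rendering accuracy for a given spatial distance."""
    for upper_bound in sorted(ACCURACY_TABLE):
        if distance < upper_bound:
            return ACCURACY_TABLE[upper_bound]
    raise ValueError("unreachable for finite non-negative distances")

def on_viewpoint_change(old_base, new_base, element_pos, first_accuracy):
    """Claims 6-7: if the viewpoint base point moved closer to the element,
    determine a second rendering accuracy from the second spatial distance;
    otherwise keep the current accuracy."""
    first_distance = math.dist(old_base, element_pos)   # first spatial distance
    second_distance = math.dist(new_base, element_pos)  # second spatial distance
    if second_distance < first_distance:                # claim 7 condition
        return accuracy_for(second_distance)            # second rendering accuracy
    return first_accuracy
```

Moving from 600 units away to 50 units away would raise the accuracy from 0.25 to 1.0, while moving further away leaves the previously determined accuracy in place — consistent with claim 7 gating the re-determination on the distance shrinking.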
- A rendering method, wherein the method comprises: determining a rendering viewpoint base point; obtaining an element to be rendered; obtaining a first spatial distance between the rendering viewpoint base point and the element to be rendered; determining, according to the first spatial distance, a first rendering accuracy for rendering the element to be rendered; and rendering the element to be rendered according to the first rendering accuracy.
- A rendering device, wherein the device comprises: a first determining module configured to determine a rendering viewpoint base point; a first obtaining module configured to obtain a three-dimensional element to be rendered; a second obtaining module configured to obtain a first spatial distance between the rendering viewpoint base point and the three-dimensional element to be rendered; a second determining module configured to determine, according to the first spatial distance, a first rendering accuracy for rendering the three-dimensional element to be rendered; and a first rendering module configured to render the three-dimensional element to be rendered according to the first rendering accuracy.
- The device according to claim 9, wherein the second determining module comprises: a first searching unit configured to search, in a correspondence between spatial distances and rendering accuracies, for the rendering accuracy corresponding to the first spatial distance and use it as the first rendering accuracy.
- The device according to claim 9, wherein the second determining module comprises: a first determining unit configured to determine, in a correspondence between spatial distance intervals and rendering accuracies, the spatial distance interval in which the first spatial distance falls; and a second searching unit configured to search, in the correspondence between spatial distance intervals and rendering accuracies, for the rendering accuracy corresponding to that spatial distance interval and use it as the first rendering accuracy.
- The device according to claim 9, wherein the first obtaining module comprises: a second determining unit configured to determine a three-dimensional space to be rendered within a preset three-dimensional space; and a first obtaining unit configured to obtain a three-dimensional element located in the three-dimensional space to be rendered as the three-dimensional element to be rendered.
- The device according to claim 12, wherein the first obtaining unit comprises: an obtaining subunit configured to obtain a space identifier of the three-dimensional space to be rendered; and a searching subunit configured to search, in a correspondence between space identifiers of three-dimensional spaces and the three-dimensional elements located in those spaces, for the three-dimensional element corresponding to the space identifier.
- The device according to claim 9, wherein the device further comprises: a third determining module configured to determine whether the position of the rendering viewpoint base point has changed; a third obtaining module configured to obtain, if the position of the rendering viewpoint base point has changed, a second spatial distance between the changed rendering viewpoint base point and the three-dimensional element to be rendered; a fourth determining module configured to determine, according to the second spatial distance, a second rendering accuracy for rendering the three-dimensional element to be rendered; and a second rendering module configured to re-render the three-dimensional element to be rendered according to the second rendering accuracy.
- The device according to claim 14, wherein the device further comprises: a fifth determining module configured to determine whether the second spatial distance is smaller than the first spatial distance; and the fourth determining module is further configured to determine, if the second spatial distance is smaller than the first spatial distance, the second rendering accuracy for rendering the three-dimensional element to be rendered according to the second spatial distance.
- A rendering device, wherein the device comprises: a sixth determining module configured to determine a rendering viewpoint base point; a fourth obtaining module configured to obtain an element to be rendered; a fifth obtaining module configured to obtain a first spatial distance between the rendering viewpoint base point and the element to be rendered; a seventh determining module configured to determine, according to the first spatial distance, a first rendering accuracy for rendering the element to be rendered; and a second rendering module configured to render the element to be rendered according to the first rendering accuracy.
- An electronic device, wherein the electronic device comprises: a processor; and a memory having executable code stored thereon which, when executed, causes the processor to perform the rendering method according to any one of claims 1-7.
- One or more machine-readable media having executable code stored thereon which, when executed, causes a processor to perform the rendering method according to any one of claims 1-7.
- An electronic device, wherein the electronic device comprises: a processor; and a memory having executable code stored thereon which, when executed, causes the processor to perform the rendering method according to claim 8.
- One or more machine-readable media having executable code stored thereon which, when executed, causes a processor to perform the rendering method according to claim 8.
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910390011.8 | 2019-05-10 | | |
| CN201910390011.8A (published as CN111915709A) | 2019-05-10 | 2019-05-10 | Rendering method and device |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2020228592A1 | 2020-11-19 |
Family ID: 73241837
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2020/089114 (published as WO2020228592A1) | Rendering method and device | 2019-05-10 | 2020-05-08 |
Country Status (2)

| Country | Link |
|---|---|
| CN | CN111915709A |
| WO | WO2020228592A1 |
Families Citing this family (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113963103A * | 2021-10-26 | 2022-01-21 | 中国银行股份有限公司 | Rendering method and related device for a three-dimensional model |
| CN114706936B * | 2022-05-13 | 2022-08-26 | 高德软件有限公司 | Map data processing method and location-based service providing method |
Citations (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105117111A * | 2015-09-23 | 2015-12-02 | 小米科技有限责任公司 | Rendering method and device for virtual reality interactive images |
| CN106162142A * | 2016-06-15 | 2016-11-23 | 南京快脚兽软件科技有限公司 | An efficient VR scene drawing method |
| US20160379417A1 * | 2011-12-06 | 2016-12-29 | Microsoft Technology Licensing, Llc | Augmented reality virtual monitor |
| CN106910236A * | 2017-01-22 | 2017-06-30 | 北京微视酷科技有限责任公司 | Rendering and display method and device in a three-dimensional virtual environment |
Family Cites Families (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109413337A * | 2018-08-30 | 2019-03-01 | 北京达佳互联信息技术有限公司 | Video rendering method and device, electronic device, and storage medium |

Filing history:

- 2019-05-10: Chinese application CN201910390011.8A filed; published as CN111915709A (status: pending)
- 2020-05-08: PCT application PCT/CN2020/089114 filed; published as WO2020228592A1 (status: application filing)
Also Published As

| Publication number | Publication date |
|---|---|
| CN111915709A | 2020-11-10 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20805267; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 20805267; Country of ref document: EP; Kind code of ref document: A1 |