CN116543105A - Processing method and system of three-dimensional object, electronic equipment and storage medium

Processing method and system of three-dimensional object, electronic equipment and storage medium

Info

Publication number
CN116543105A
Authority
CN
China
Prior art keywords
observation
dimensional object
implicit
model
target model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310492050.5A
Other languages
Chinese (zh)
Inventor
张骐
满远斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Damo Institute Hangzhou Technology Co Ltd
Original Assignee
Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Damo Institute Hangzhou Technology Co Ltd filed Critical Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority to CN202310492050.5A priority Critical patent/CN116543105A/en
Publication of CN116543105A publication Critical patent/CN116543105A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a processing method and system of a three-dimensional object, an electronic device, and a storage medium. The method comprises the following steps: capturing an observation view angle for observing a three-dimensional object, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; determining a target model corresponding to the observation view angle from an implicit characterization model corresponding to the three-dimensional object; and rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle. The method and the device solve the technical problem of low rendering accuracy of three-dimensional objects in the related art.

Description

Processing method and system of three-dimensional object, electronic equipment and storage medium
Technical Field
The present invention relates to the field of data processing in three-dimensional scenes, and in particular, to a method, a system, an electronic device, and a storage medium for processing a three-dimensional object.
Background
In a three-dimensional map scene, for example a three-dimensional game map scene or a three-dimensional navigation map scene, a plurality of three-dimensional objects are generally included. Compared with planar objects, three-dimensional objects enable a user to experience the three-dimensional map scene more immersively and improve the interaction between the user and the three-dimensional map scene. However, as the display functions of three-dimensional map scenes become more complex, the demand for three-dimensional map scenes gradually increases, and the requirements on the display effect of three-dimensional objects grow accordingly in order to improve the user's visual perception and interaction with the three-dimensional map scene. In the related art, when a three-dimensional object image is generated, the three-dimensional object is mainly rendered through an explicit characterization mode to generate the image corresponding to the three-dimensional object. However, when the explicit characterization mode is used, three-dimensional objects with uncertain geometry (such as water, fog, and the like) cannot be rendered accurately, so the rendering accuracy of the three-dimensional object is low.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiment of the application provides a processing method, a processing system, electronic equipment and a storage medium for a three-dimensional object, which are used for at least solving the technical problem of low rendering accuracy of the three-dimensional object in the related technology.
According to an aspect of an embodiment of the present application, there is provided a method for processing a three-dimensional object, including: capturing an observation view angle for observing a three-dimensional object, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; determining a target model corresponding to an observation visual angle from an implicit characterization model corresponding to the three-dimensional object; rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle.
According to another aspect of the embodiments of the present application, there is also provided a method for processing a three-dimensional object, including: responding to an input instruction acting on an operation interface, and displaying a three-dimensional object on the operation interface, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; and responding to an observation instruction acting on the operation interface, and displaying an observation image corresponding to the observation instruction on the operation interface, wherein the observation image is obtained by rendering a target model based on the observation view angle corresponding to the observation instruction, and the target model is determined from an implicit characterization model corresponding to the three-dimensional object based on the observation view angle.
According to another aspect of the embodiments of the present application, there is also provided a method for processing a three-dimensional object, including: displaying a three-dimensional object on a presentation screen of a Virtual Reality (VR) device or an Augmented Reality (AR) device, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; capturing an observation view angle for observing the three-dimensional object; determining a target model corresponding to the observation view angle from an implicit characterization model corresponding to the three-dimensional object; rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle; and driving the VR device or the AR device to render and display the observation image.
According to another aspect of the embodiments of the present application, there is also provided a method for processing a three-dimensional object, including: acquiring an observation visual angle by calling a first interface, wherein the first interface comprises a first parameter, the parameter value of the first parameter is the observation visual angle, the observation visual angle is the visual angle for observing a three-dimensional object, and the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; determining a target model corresponding to an observation visual angle from an implicit characterization model corresponding to the three-dimensional object; rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle; and outputting the observed image by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the observed image.
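As a non-authoritative illustration of the two-interface pattern described in this aspect, the sketch below wires a hypothetical first interface (whose first parameter is the observation view angle) to a hypothetical second interface (whose parameter is the observation image); the function names, the dictionary-based model lookup, and the stand-in renderer are assumptions introduced only for illustration.

```python
def first_interface(observation_view_angle):
    """Hypothetical first interface: its first parameter carries the observation view angle."""
    return {"view_angle": observation_view_angle}


def second_interface(observation_image):
    """Hypothetical second interface: its parameter carries the observation image to be output."""
    print("outputting observation image:", observation_image)
    return observation_image


def process_three_dimensional_object(observation_view_angle, implicit_models, render):
    """Toy end-to-end flow of the aspect above; the dict lookup and renderer are stand-ins."""
    request = first_interface(observation_view_angle)                # acquire the view angle
    target_model = implicit_models.get(request["view_angle"])        # pick the target model
    observation_image = render(target_model, request["view_angle"])  # render from that view
    return second_interface(observation_image)                       # output the image


# Toy usage with a stand-in model registry and renderer.
models = {"north-east, 30 degrees down": "target_model_stub"}
process_three_dimensional_object(
    "north-east, 30 degrees down", models,
    render=lambda model, view: f"image of {model} from {view}")
```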
According to another aspect of the embodiments of the present application, there is also provided a processing system for a three-dimensional object, including: the client device is used for displaying the three-dimensional object and capturing an observation view angle for observing the three-dimensional object, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; the server cluster is connected with the client and is used for determining a target model corresponding to the observation visual angle from the implicit characterization model corresponding to the three-dimensional object, and rendering the target model based on the observation visual angle to obtain an observation image corresponding to the observation visual angle; the client device is also configured to display the observation image.
According to another aspect of the embodiments of the present application, there is also provided an electronic device, including: a memory storing an executable program; and a processor for running the program, wherein the program executes the method of any one of the above.
According to another aspect of the embodiments of the present application, there is also provided a computer readable storage medium, including a stored executable program, where the executable program when executed controls a device in which the computer readable storage medium is located to perform the method of any one of the above.
In the embodiment of the application, an observation view angle for observing a three-dimensional object is captured, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; a target model corresponding to the observation view angle is determined from an implicit characterization model corresponding to the three-dimensional object; and the target model is rendered based on the observation view angle to obtain an observation image corresponding to the observation view angle. It is easy to note that, in a three-dimensional map scene, the target model used for rendering can be obtained quickly and accurately by capturing the observation view angle for observing the three-dimensional object and using the pre-constructed implicit characterization model. Further, the target model is rendered from the observation view angle, and because the target model is based on a neural network, the observation image of the three-dimensional object corresponding to the observation view angle can be reflected accurately and intuitively in a visual manner. This achieves the purpose of accurately obtaining the observation image corresponding to the three-dimensional object, achieves the technical effect of improving the rendering accuracy of the three-dimensional object, and solves the technical problem of low rendering accuracy of the three-dimensional object in the related art.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a schematic diagram of a hardware environment of a virtual reality device for a method of processing a three-dimensional object according to an embodiment of the application;
FIG. 2 is a block diagram of a computing environment for a method of processing a three-dimensional object according to an embodiment of the present application;
FIG. 3 is a flow chart of a method of processing a three-dimensional object according to embodiment 1 of the present application;
FIG. 4 is a flow chart of a method of processing a three-dimensional object according to embodiment 2 of the present application;
FIG. 5 is a schematic illustration of a display of an alternative operator interface according to embodiment 2 of the present application;
FIG. 6 is a flow chart of a method of processing a three-dimensional object according to embodiment 3 of the present application;
FIG. 7 is a flow chart of a method of processing a three-dimensional object according to embodiment 4 of the present application;
FIG. 8 is a schematic diagram of a three-dimensional object processing system according to embodiment 5 of the present application;
FIG. 9 is a schematic view of a three-dimensional object processing apparatus according to embodiment 6 of the present application;
FIG. 10 is a schematic view of a three-dimensional object processing apparatus according to embodiment 7 of the present application;
FIG. 11 is a schematic view of a three-dimensional object processing apparatus according to embodiment 8 of the present application;
FIG. 12 is a schematic view of a three-dimensional object processing apparatus according to embodiment 9 of the present application;
fig. 13 is a block diagram of a computer terminal according to an embodiment of the present application.
Detailed Description
In order to make the present application solution better understood by those skilled in the art, the following description will be made in detail and with reference to the accompanying drawings in the embodiments of the present application, it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms or terminology appearing in the description of the embodiments of the present application are explained as follows:
1. CIM: city Information Modeling the city information model is based on city information data, and based on BIM (Building Information Modeling, building information model), GIS (Geographic Information System ), ioT (Internet of Things, internet of things) and other technologies, integrates multi-scale and multi-dimensional city space-time data, and constructs the city information organic complex of the three-dimensional digital space.
2. NeRF: neural Radiance Fields, nerve radiation field, a technical scheme for performing three-dimensional data implicit characterization by using a deep neural network and rendering high-quality two-dimensional pictures at any angle is provided.
3. Three-dimensional data characterization:
a) Displaying the characterization: describing a scheme of the three-dimensional object through geometric shapes determined by triangles, tetrahedrons, bessel curved surfaces and the like;
b) Implicit characterization: the scheme of the three-dimensional object is described by abstract forms such as mathematical functions, radiation fields and the like.
4. Viewing cone: the space within the rendering range between the near clipping surface and the far clipping surface of the camera is called the viewing cone. The shape may be a pyramid with the top cut away parallel to the ground, an area shape that can be seen when rendered for a perspective camera.
5. Envelope: the envelope of a family of planar straight lines (or curves) is a curve tangent to any one of the family of straight lines (or curves).
Example 1
According to embodiments of the present application, there is provided an embodiment of a method of processing a three-dimensional object, it being noted that the steps shown in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that herein.
Fig. 1 is a schematic diagram of a hardware environment of a virtual reality device of a method for processing a three-dimensional object according to an embodiment of the present application. As shown in fig. 1, the virtual reality device 104 is connected to the terminal 106, and the terminal 106 is connected to the server 102 via a network. The form of the virtual reality device 104 is not limited; the terminal 106 is not limited to a PC, a mobile phone, a tablet computer, etc.; the server 102 may be a server corresponding to a media file operator; and the network includes, but is not limited to: a wide area network, a metropolitan area network, or a local area network.
Optionally, the virtual reality device 104 of this embodiment includes: memory, processor, and transmission means. The memory is used to store an application program that can be used to perform: displaying a three-dimensional object on a presentation screen of a Virtual Reality (VR) device or an Augmented Reality (AR) device, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; capturing an observation view angle for observing a three-dimensional object; determining a target model corresponding to an observation visual angle from an implicit characterization model corresponding to the three-dimensional object; rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle; the VR device or the AR device is driven to render and display the observation image, so that the technical problem of low rendering accuracy of the three-dimensional object in the related technology is solved, and the purpose of improving the rendering accuracy of the three-dimensional object is achieved.
The terminal of this embodiment may be configured to render the observation image to be displayed on a presentation screen of a Virtual Reality (VR) device or an Augmented Reality (AR) device and send the observation image to the virtual reality device 104, and the virtual reality device 104 displays the observation image at a target delivery position after receiving it.
Optionally, the HMD (Head Mount Display, head mounted display) head display and the eye tracking module of the virtual reality device 104 of this embodiment have the same functions as those of the above embodiment, that is, a screen in the HMD head display is used for displaying a real-time picture, and the eye tracking module in the HMD is used for acquiring a real-time motion track of an eyeball of a user. The terminal of the embodiment obtains the position information and the motion information of the user in the real three-dimensional space through the tracking system, and calculates the three-dimensional coordinates of the head of the user in the virtual three-dimensional space and the visual field orientation of the user in the virtual three-dimensional space.
The hardware architecture block diagram shown in fig. 1 may be used not only as an exemplary block diagram for an AR/VR device (or mobile device) as described above, but also as an exemplary block diagram for a server as described above, and in an alternative embodiment, fig. 2 shows in block diagram form one embodiment of a computing node in a computing environment 201 using an AR/VR device (or mobile device) as described above in fig. 1. Fig. 2 is a block diagram of a computing environment for a method of processing a three-dimensional object according to an embodiment of the present application, as shown in fig. 2, where the computing environment 201 includes a plurality of computing nodes (e.g., servers) running on a distributed network (shown as 210-1, 210-2, …). Different computing nodes contain local processing and memory resources and end user 202 may run applications or store data remotely in computing environment 201. The application may be provided as a plurality of services 220-1, 220-2, 220-3, and 220-4 in computing environment 201, representing services "A", "D", "E", and "H", respectively.
End user 202 may provision and access services through a web browser or other software application on a client. In some embodiments, the provisioning and/or requests of end user 202 may be provided to an ingress gateway 230. The ingress gateway 230 may include a corresponding agent to handle provisioning and/or requests for services (one or more services provided in the computing environment 201).
Services are provided or deployed in accordance with various virtualization techniques supported by the computing environment 201. In some embodiments, services may be provided according to virtual machine (VM) based virtualization, container based virtualization, and/or the like. Virtual machine-based virtualization may be the emulation of a real computer by initializing a virtual machine, so that programs and applications execute without directly touching any real hardware resources. Whereas the virtual machine virtualizes the machine, according to container-based virtualization a container can be started to virtualize at the operating system (OS) level, so that multiple workloads can run on a single operating system instance.
In one embodiment based on container virtualization, several containers of a service may be assembled into one Pod (e.g., kubernetes Pod). For example, as shown in FIG. 2, the service 220-2 may be equipped with one or more Pods 240-1, 240-2, …,240-N (collectively referred to as Pods). The Pod may include an agent 245 and one or more containers 242-1, 242-2, …,242-M (collectively referred to as containers). One or more containers in the Pod handle requests related to one or more corresponding functions of the service, and the agent 245 generally controls network functions related to the service, such as routing, load balancing, etc. Other services may also be equipped with similar Pod.
In operation, executing a user request from end user 202 may require invoking one or more services in computing environment 201, and executing one or more functions of one service may require invoking one or more functions of another service. As shown in FIG. 2, service "A"220-1 receives a user request of end user 202 from ingress gateway 230, service "A"220-1 may invoke service "D"220-2, and service "D"220-2 may request service "E"220-3 to perform one or more functions.
The computing environment may be a cloud computing environment, and the allocation of resources is managed by a cloud service provider, allowing the development of functions without considering the implementation, adjustment or expansion of the server. The computing environment allows developers to execute code that responds to events without building or maintaining a complex infrastructure. Instead of expanding a single hardware device to handle the potential load, the service may be partitioned to a set of functions that can be automatically scaled independently.
In the above-described operating environment, the present application provides a method for processing a three-dimensional object as shown in fig. 3. It should be noted that, the method for processing a three-dimensional object according to this embodiment may be performed by the mobile terminal according to the embodiment shown in fig. 1. Fig. 3 is a flowchart of a processing method of a three-dimensional object according to embodiment 1 of the present application. As shown in fig. 3, the method may include the steps of:
Step S302, capturing an observation perspective for observing a three-dimensional object, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment.
In the technical solution provided in step S302 of the present application, the three-dimensional object may be a virtual three-dimensional object corresponding to a real object in a real environment, for example, a virtual three-dimensional mountain corresponding to a mountain in the real environment; it may also be a virtual three-dimensional building corresponding to a building in the real environment; or it may be a virtual three-dimensional river corresponding to a river in the real environment, but is not limited thereto. The three-dimensional object may be a three-dimensional object in a three-dimensional map scene, for example, a three-dimensional object in a three-dimensional game map or a three-dimensional object in a three-dimensional navigation map, but is not limited thereto. The observation view angle may be the view angle at which a user observes the three-dimensional object in a virtual environment, and the observation images obtained by observing the three-dimensional object from different view angles are different.
In an alternative embodiment, when the user wants to obtain a specific observation image of a certain virtual three-dimensional object in the three-dimensional map, the user may select an observation view angle in the virtual environment, for example, by clicking or sliding on the operation interface. Therefore, by detecting the user operation, the observation view angle at which the virtual three-dimensional object is observed can be captured in the virtual environment. For example, when a user observes a certain area of a three-dimensional map, the user may first operate on the three-dimensional map to select an observation view angle, so that the device can capture the observation view angle at which the three-dimensional map is observed.
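Purely as an illustration of what a captured observation view angle might contain, the sketch below records a camera position, a viewing direction, and an opening angle derived from a click or slide gesture; all field and function names are assumptions rather than part of the claimed method.

```python
from dataclasses import dataclass
import math


@dataclass
class ObservationView:
    """Hypothetical record of an observation view angle captured from a user interaction."""
    position: tuple          # camera position in the virtual scene (x, y, z)
    yaw_deg: float           # horizontal viewing direction
    pitch_deg: float         # vertical viewing direction
    fov_deg: float = 60.0    # opening angle of the camera

    def direction(self):
        """Unit viewing direction derived from yaw and pitch."""
        yaw, pitch = math.radians(self.yaw_deg), math.radians(self.pitch_deg)
        return (math.cos(pitch) * math.cos(yaw),
                math.cos(pitch) * math.sin(yaw),
                math.sin(pitch))


def capture_view_from_click(camera_position, click_yaw_deg, click_pitch_deg):
    """Turn a click/slide gesture on the map into an observation view (illustrative only)."""
    return ObservationView(position=camera_position,
                           yaw_deg=click_yaw_deg,
                           pitch_deg=click_pitch_deg)


view = capture_view_from_click((120.5, 31.2, 300.0), click_yaw_deg=45.0, click_pitch_deg=-30.0)
print(view.direction())
```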
Step S304, determining a target model corresponding to the observation visual angle from the implicit characterization model corresponding to the three-dimensional object.
The target model may be a part model on an implicit characterization model observed based on an observation view angle.
In the embodiment of the present application, the three-dimensional object may be described by means of implicit characterization, that is, a three-dimensional object may be represented in a virtual environment by an implicit characterization model, which may be, but is not limited to, a NeRF model, and may be other types of models. The target model may be a part of a model that can be observed by the user on the implicit characterization model, for example, the user may observe only a certain building or a certain road for the whole three-dimensional map.
In an alternative embodiment, after capturing the observation angle for observing the virtual three-dimensional object, an implicit characterization model corresponding to the three-dimensional object may be generated based on an artificial intelligence algorithm, and the specific model generating process may adopt a method provided by a related technology, which is not limited in this application. Based on the observation perspective, it is then determined which part of the model on the implicit characterization model can be observed by the user, and the part of the model is then used as the target model.
In another alternative embodiment, firstly, an implicit characterization model corresponding to the three-dimensional object can be generated based on an artificial intelligence algorithm, and secondly, based on the captured observation view angle, which part of the model on the implicit characterization model can be observed by the user can be determined, and then the part of the model is taken as a target model.
In another alternative embodiment, while capturing the observation view angle for observing the three-dimensional object, an implicit characterization model corresponding to the three-dimensional object can be generated based on an artificial intelligence algorithm, and then, based on the captured observation view angle, which part of the model on the implicit characterization model can be observed by the user can be determined, and then, the part of the model is taken as a target model.
And step S306, rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle.
The observation image may be a two-dimensional image obtained by rendering the target model and projecting it onto a plane.
In an alternative embodiment, after determining the target model corresponding to the observation angle, the target model may be rendered based on the observation angle, and the rendering result obtained may be an observation image corresponding to the observation angle. For example, after the target model corresponding to the observation view angle is obtained, the pixel points in the target model can be sampled based on the observation view angle to obtain sampling values of the pixel points, and then the sampling values can be summarized and presented on a background canvas to obtain the observation image.
In the embodiment of the application, an observation view angle for observing a three-dimensional object is captured, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; a target model corresponding to the observation view angle is determined from an implicit characterization model corresponding to the three-dimensional object; and the target model is rendered based on the observation view angle to obtain an observation image corresponding to the observation view angle. It is easy to note that, in a three-dimensional map scene, the target model used for rendering can be obtained quickly and accurately by capturing the observation view angle for observing the three-dimensional object and using the pre-constructed implicit characterization model. Further, the target model is rendered from the observation view angle, and because the target model is based on a neural network, the observation image of the three-dimensional object corresponding to the observation view angle can be reflected accurately and intuitively in a visual manner. This achieves the purpose of accurately obtaining the observation image corresponding to the three-dimensional object, achieves the technical effect of improving the rendering accuracy of the three-dimensional object, and solves the technical problem of low rendering accuracy of the three-dimensional object in the related art.
The above-described method of this embodiment is further described below.
In the above embodiments of the present application, determining the target model corresponding to the observation view angle from the implicit characterization model corresponding to the three-dimensional object includes: constructing a viewing cone corresponding to the observation view angle, wherein the viewing cone is used for representing the observation space range corresponding to the observation view angle; screening an implicit representation set within the viewing cone from the implicit characterization model based on a first spatial index corresponding to the implicit characterization model, wherein different spatial indexes contained in the first spatial index characterize different spatial ranges corresponding to the implicit characterization model; and determining the target model based on the implicit representation set.
The viewing cone can accurately describe the spatial range corresponding to the observation view angle and may be denoted as V. The viewing cone may include, but is not limited to, the opening angle of the viewing cone and the range angle of the viewing cone. The first spatial index may be a spatial index of the correspondence between different spatial ranges and different implicit representation sets, where the first spatial index includes a plurality of different spatial indexes, and the implicit representation sets within different viewing cones can be determined by using the different spatial indexes. The implicit representation set described above may be a set of multiple implicit representation components within the viewing cone. The different spatial ranges mentioned above can be denoted as A_1, A_2, ..., A_n, where n represents the number of different spatial ranges, but this is not limited thereto.
In an alternative embodiment, a viewing cone corresponding to the observation view angle may first be constructed, where the viewing cone may be a more precise spatial range corresponding to the observation view angle. Then, the different implicit representation sets corresponding to different spatial indexes can be determined based on the first spatial index corresponding to the implicit characterization model, wherein the first spatial index comprises a plurality of different spatial indexes; the spatial indexes falling within the viewing cone of the observation view angle can then be determined from the different spatial indexes; the implicit representation sets within the viewing cone can then be screened from the implicit characterization model; and finally the target model can be obtained based on the implicit representation sets, for example, by taking the collection of implicit representation sets as the target model.
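A minimal sketch of one possible viewing-cone representation is given below; it approximates the viewing cone V by an apex, a unit viewing direction, a symmetric half opening angle, and near/far distances, which is a deliberate simplification of the frustum described above.

```python
import math
from dataclasses import dataclass


@dataclass
class ViewingCone:
    """Simplified viewing cone V: apex at the camera, symmetric opening angle, near/far planes."""
    apex: tuple            # camera position (x, y, z)
    direction: tuple       # unit viewing direction
    half_angle_deg: float  # half of the opening angle
    near: float
    far: float

    def contains(self, point):
        """True if `point` lies inside the cone between the near and far clipping planes."""
        v = tuple(p - a for p, a in zip(point, self.apex))
        dist_along_axis = sum(vi * di for vi, di in zip(v, self.direction))
        if not (self.near <= dist_along_axis <= self.far):
            return False
        norm = math.sqrt(sum(vi * vi for vi in v))
        if norm == 0.0:
            return False
        cos_angle = dist_along_axis / norm
        return cos_angle >= math.cos(math.radians(self.half_angle_deg))


cone = ViewingCone(apex=(0, 0, 0), direction=(0, 0, 1), half_angle_deg=30.0, near=0.1, far=100.0)
print(cone.contains((1.0, 0.0, 10.0)), cone.contains((50.0, 0.0, 10.0)))
```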
In the foregoing embodiments of the present application, screening the implicit representation set within the viewing cone from the implicit characterization model based on the first spatial index corresponding to the implicit characterization model includes: acquiring the spatial ranges corresponding to different spatial indexes; determining whether an intersection region exists between the spatial ranges corresponding to the different spatial indexes and the viewing cone; and adding the implicit representation corresponding to the first spatial index to the implicit representation set when an intersection region exists between the spatial range corresponding to the first spatial index and the viewing cone.
The intersection region may be a region overlapping the viewing cone in different spatial ranges corresponding to different spatial indexes.
In an alternative embodiment, firstly, the spatial ranges corresponding to different spatial indexes can be obtained, secondly, whether the spatial ranges corresponding to the different spatial indexes and the viewing cone have an intersection area or not can be determined, and if the spatial ranges corresponding to the first spatial indexes and the viewing cone are determined to have the intersection area, the implicit representation set corresponding to the first spatial indexes in the intersection area can be added to the implicit representation set, so that the implicit representation set in the viewing cone can be obtained.
In another optional embodiment, the spatial ranges corresponding to the different spatial indexes can first be obtained; it can then be determined whether an intersection region exists between these spatial ranges and the viewing cone; and if it is determined that one spatial range corresponding to the first spatial index has an intersection region with the viewing cone, the one implicit representation corresponding to the first spatial index in the intersection region can be added to the implicit representation set, so that the implicit representation set within the viewing cone can be obtained.
In another optional embodiment, the spatial ranges corresponding to the different spatial indexes can first be obtained; it can then be determined whether an intersection region exists between these spatial ranges and the viewing cone; and if it is determined that a plurality of spatial ranges corresponding to the first spatial index have intersection regions with the viewing cone, the plurality of implicit representations corresponding to the first spatial index in the intersection regions can be added to the implicit representation set, so that the implicit representation set within the viewing cone can be obtained.
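The coarse screening step can be sketched as follows, assuming the first spatial index is stored as a list of (spatial range, implicit representation) pairs and the viewing cone is conservatively approximated by an axis-aligned bounding box; both assumptions are illustrative only.

```python
def ranges_intersect(range_a, range_b):
    """Axis-aligned overlap test: each range is ((min_x, min_y, min_z), (max_x, max_y, max_z))."""
    (a_min, a_max), (b_min, b_max) = range_a, range_b
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))


def screen_implicit_sets(first_spatial_index, cone_bounds):
    """Collect implicit representations whose indexed spatial range crosses the viewing cone.

    first_spatial_index: list of (spatial_range, implicit_representation) entries.
    cone_bounds: a conservative bounding box around the viewing cone V.
    """
    selected = []
    for spatial_range, implicit_repr in first_spatial_index:
        if ranges_intersect(spatial_range, cone_bounds):
            selected.append(implicit_repr)
    return selected


index = [
    (((0, 0, 0), (10, 10, 10)), "block_A10"),
    (((50, 50, 0), (60, 60, 10)), "block_A100"),
]
cone_bounds = ((5, 5, 0), (55, 55, 5))
print(screen_implicit_sets(index, cone_bounds))   # both entries overlap the cone bounds
```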
In the foregoing embodiment of the present application, after constructing the viewing cone corresponding to the observation view angle, the method further includes: dividing the viewing cone to obtain a plurality of subspace ranges; obtaining sub-models corresponding to the plurality of subspace ranges in parallel through a plurality of processing engines, wherein the processing engines correspond to the subspace ranges one to one; and summarizing the sub-models to obtain the target model.
The processing engine may be a cloud processing engine capable of acquiring sub-models corresponding to a plurality of subspace ranges in parallel, where the plurality of cloud processing engines and the plurality of subspace ranges are in one-to-one correspondence. In this embodiment, the specific type of the cloud processing engine is not limited, and the user may select the cloud processing engine according to the actual requirement. The sub-model may be an implicit characterization model corresponding to the sub-range space.
In an alternative embodiment, the viewing cone may first be divided to obtain a plurality of subspace ranges; sub-models corresponding to the plurality of subspace ranges may then be obtained in parallel through a plurality of processing engines, wherein the plurality of processing engines and the plurality of subspace ranges correspond one to one; and the sub-models may then be summarized, for example, by aggregating the plurality of sub-models, so that the target model can be obtained.
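A toy sketch of this parallel scheme is shown below; the slab-wise split of the viewing-cone bounds and the thread pool standing in for the processing engines are assumptions made only to show the dispatch-and-summarize pattern.

```python
from concurrent.futures import ThreadPoolExecutor


def split_viewing_cone(cone_bounds, parts):
    """Split a bounding box around the viewing cone into `parts` slabs along the x axis."""
    (min_c, max_c) = cone_bounds
    width = (max_c[0] - min_c[0]) / parts
    return [((min_c[0] + i * width, min_c[1], min_c[2]),
             (min_c[0] + (i + 1) * width, max_c[1], max_c[2])) for i in range(parts)]


def load_sub_model(sub_range):
    """Stand-in for one processing engine fetching the sub-model for one subspace range."""
    return {"range": sub_range, "implicit_params": []}   # placeholder payload


def gather_target_model(cone_bounds, engines=4):
    sub_ranges = split_viewing_cone(cone_bounds, engines)
    with ThreadPoolExecutor(max_workers=engines) as pool:   # one engine per subspace range
        sub_models = list(pool.map(load_sub_model, sub_ranges))
    return {"sub_models": sub_models}                        # summarized target model


print(len(gather_target_model(((0, 0, 0), (100, 100, 50)))["sub_models"]))
```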
In the above embodiment of the present application, rendering a target model based on an observation angle of view to obtain an observation image corresponding to the observation angle of view includes: determining sampling points in the viewing cone based on the observation view angle; determining a value of the sampling point based on the target model; superposing the values of the sampling points to obtain superposed pixel values; and superposing the superposed pixel values with a preset image to generate an observation image.
The sampling points may be pixel points within the viewing cone. The value of the sampling point may be a pixel value obtained by sampling the pixel point, and may be expressed as red, green, blue, and σ (RGB σ), but is not limited thereto. The preset image may be a virtual image corresponding to an object other than the real object in the real environment, for example, may be a virtual background image, but is not limited thereto.
In an alternative embodiment, firstly, the sampling point in the view cone may be determined based on the observation view angle, secondly, the value of the sampling point may be determined based on the target model, then the value of the sampling point may be overlapped to obtain an overlapped pixel value, and finally, the overlapped pixel value may be overlapped with the preset image, so that the observation image may be obtained.
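The accumulation and superposition described above can be sketched with the standard volume-rendering accumulation used by NeRF-style models; the sample colors, densities, and background pixel below are made-up values for illustration.

```python
import numpy as np


def accumulate_samples(rgb, sigma, deltas):
    """NeRF-style compositing of samples along one ray.

    rgb: (N, 3) colors, sigma: (N,) densities, deltas: (N,) distances between samples.
    Returns the superposed pixel color and the accumulated opacity.
    """
    alpha = 1.0 - np.exp(-sigma * deltas)                              # per-sample opacity
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * transmittance
    pixel = (weights[:, None] * rgb).sum(axis=0)
    return pixel, weights.sum()


def composite_over_background(pixel, opacity, background_pixel):
    """Superpose the accumulated pixel value on the preset (background) image pixel."""
    return pixel + (1.0 - opacity) * np.asarray(background_pixel)


rgb = np.array([[0.9, 0.1, 0.1], [0.8, 0.2, 0.2]])
sigma = np.array([0.5, 2.0])
deltas = np.array([0.1, 0.1])
pixel, opacity = accumulate_samples(rgb, sigma, deltas)
print(composite_over_background(pixel, opacity, background_pixel=(0.0, 0.0, 1.0)))
```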
In the above embodiments of the present application, determining the value of the sampling point based on the target model includes: constructing a second spatial index based on the geometric envelope of the target model; determining a target implicit representation corresponding to the sampling point from the target model based on the second spatial index; based on the implicit characterization of the target, the value of the sampling point is determined.
The geometric envelope may be extracted from the target model, and the envelope includes a plurality of sampling points. The second spatial index may be a pre-established spatial index of the correspondence between different sampling points and different implicit representations, so that the different implicit representations corresponding to different sampling points can be obtained based on the second spatial index.
In an alternative embodiment, the target model may be screened to obtain the geometric envelope of the target model, where the geometric envelope contains a plurality of different sampling points; a second spatial index may then be constructed based on the geometric envelope, where the different spatial indexes contained in the second spatial index reflect the different target implicit representations corresponding to different sampling points; based on the second spatial index, the target implicit representation corresponding to a sampling point can then be determined from the target model; and finally the value of the sampling point can be determined based on the target implicit representation and the target model.
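One possible, hypothetical form of the second spatial index is a uniform grid laid over the geometric envelope of the target model, as sketched below; the cell size and the point-based registration scheme are assumptions, not the construction actually claimed.

```python
import math


class FastSpatialIndex:
    """Hypothetical second spatial index: a uniform grid over the geometric envelope of the
    target model, mapping grid cells to implicit representations."""

    def __init__(self, envelope_min, envelope_max, cell_size):
        self.envelope_min = envelope_min
        self.envelope_max = envelope_max
        self.cell_size = cell_size
        self.cells = {}   # (i, j, k) grid cell -> implicit representation handle

    def _cell_of(self, point):
        return tuple(int(math.floor((p - o) / self.cell_size))
                     for p, o in zip(point, self.envelope_min))

    def register(self, point, implicit_repr):
        self.cells[self._cell_of(point)] = implicit_repr

    def lookup(self, sample_point):
        """Return the target implicit representation covering this sampling point, if any."""
        inside = all(lo <= p <= hi for p, lo, hi in
                     zip(sample_point, self.envelope_min, self.envelope_max))
        if not inside:
            return None            # sampling point falls outside the geometric envelope
        return self.cells.get(self._cell_of(sample_point))


index = FastSpatialIndex(envelope_min=(0, 0, 0), envelope_max=(100, 100, 50), cell_size=10.0)
index.register((15, 22, 5), "implicit_repr_building_3")
print(index.lookup((17, 25, 8)))   # same grid cell, so the same implicit representation is returned
```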
In the above embodiments of the present application, determining the value of the sampling point based on the target model includes: dividing the sampling points to obtain a plurality of sampling point sets; determining the values of the plurality of sampling point sets in parallel based on the target model through a plurality of model inference devices, wherein the plurality of model inference devices and the plurality of sampling point sets correspond one to one; and summarizing the values of the plurality of sampling point sets to obtain the values of the sampling points.
The above-described sampling point set may be a set composed of a plurality of sampling points. The above-mentioned multiple model inference devices can determine the values of multiple sampling point sets based on the target model, and may be cloud model inference devices, where the specific cloud model inference devices are not limited in this embodiment, and the user may set the values according to the actual needs.
In an alternative embodiment, the sampling points may be divided to obtain a plurality of sampling point sets, and then the values of the plurality of sampling point sets may be determined based on the target model through a plurality of model inference devices in parallel, where the plurality of model inference devices and the plurality of sampling point sets correspond to one another, and finally the values of the plurality of sampling point sets may be summarized, for example, the values of the plurality of sampling point sets may be summed, or the values of the plurality of sampling point sets may be weighted and summed, so as to obtain the values of the sampling points.
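A minimal sketch of splitting the sampling points into sets and evaluating them in parallel is given below; the round-robin split, the thread pool standing in for the model inference devices, and the placeholder RGBσ values are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor


def infer_batch(sample_points):
    """Stand-in for one model inference device evaluating RGBσ for one set of sampling points."""
    return [(0.5, 0.5, 0.5, 1.0) for _ in sample_points]   # placeholder values


def split_into_sets(sample_points, n_sets):
    """Round-robin split of the sampling points into n_sets sampling point sets."""
    return [sample_points[i::n_sets] for i in range(n_sets)]


def infer_sample_values(sample_points, n_devices=4):
    point_sets = split_into_sets(sample_points, n_devices)
    with ThreadPoolExecutor(max_workers=n_devices) as pool:   # one device per point set
        per_set_values = list(pool.map(infer_batch, point_sets))
    return [v for values in per_set_values for v in values]   # summarize the set results


points = [(x, 0.0, 1.0) for x in range(10)]
print(len(infer_sample_values(points)))   # 10 values, computed in parallel batches
```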
In the above embodiment of the present application, the method further includes: shooting a real object in a real environment to obtain an image material; generating an implicit characterization model based on the image material; and constructing a first spatial index corresponding to the implicit characterization model.
In an alternative embodiment, firstly, a real object in a real environment can be shot through image acquisition equipment to obtain image materials; secondly, an implicit characterization model can be obtained based on the image materials and an artificial intelligence algorithm; finally, a first spatial index corresponding to the implicit characterization model can be constructed based on the implicit characterization model.
It should be noted that the image capturing device may be any one or more devices capable of capturing a real object in a real environment, for example, but not limited to, an unmanned aerial vehicle, a camera, and the like. In this embodiment, the specific type of the image capturing device is not limited, and the user can select the image capturing device according to the actual requirement.
In the above embodiment of the present application, constructing the first spatial index corresponding to the implicit characterization model includes one of the following: determining a scene space range of the three-dimensional object in a real environment, and constructing a first space index based on the scene space range; an explicit envelope of the implicit characterization model is extracted and a first spatial index is constructed based on the explicit envelope.
The above-described scene space ranges may include, but are not limited to: the latitude and longitude ranges of the scene and the altitude range. The explicit envelope can be obtained by extracting an implicit characterization model, and is a more accurate spatial range.
In an alternative embodiment, first a scene space range of the three-dimensional object in the real environment may be determined, and second a first spatial index may be constructed based on the scene space range and the implicit characterization model.
In another alternative embodiment, the implicit characterization model may be extracted to obtain an explicit envelope, and the first spatial index may be constructed based on the explicit envelope and the implicit characterization model.
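The two ways of building the first spatial index can be sketched as follows, assuming each index entry simply pairs an axis-aligned spatial range with a model identifier; this is a simplification of the global spatial index described above.

```python
def index_entry_from_scene_range(model_id, lon_range, lat_range, alt_range):
    """Build one first-spatial-index entry from the recorded scene space range."""
    spatial_range = ((lon_range[0], lat_range[0], alt_range[0]),
                     (lon_range[1], lat_range[1], alt_range[1]))
    return spatial_range, model_id


def index_entry_from_explicit_envelope(model_id, envelope_points):
    """Build one entry from an explicit envelope extracted from the implicit model."""
    xs, ys, zs = zip(*envelope_points)
    spatial_range = ((min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs)))
    return spatial_range, model_id


first_spatial_index = [
    index_entry_from_scene_range("scene_A10", (120.10, 120.15), (30.20, 30.25), (0.0, 150.0)),
    index_entry_from_explicit_envelope("scene_A100", [(1.0, 2.0, 0.0), (5.0, 7.0, 30.0)]),
]
print(first_spatial_index)
```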
In the related art, when generating an observation image corresponding to a three-dimensional object, the following problems are mainly included:
1. the production cost of the three-dimensional explicit characterization data is extremely high, and the preparation period is long;
2. based on the client rendering scheme of explicit characterization, three-dimensional explicit characterization data need to be transmitted from the back end to the front end, and the risk of data security exists;
3. explicit characterization cannot describe three-dimensional objects with uncertain geometry, such as fluids, gases, and liquids (e.g., water, fog), so the rendering results have low fidelity;
4. as the size of explicit characterization data grows, rendering speed is limited by the client device.
In order to solve the above technical problems, the present application provides a three-dimensional map solution based on implicit neural characterization, which comprises the following steps:
step S1, data acquisition: the two-dimensional picture material can be collected aiming at the three-dimensional scene through means of aerial photography, unmanned aerial vehicle, manual shooting and the like. And records a scene space range including longitude, latitude, and altitude ranges of the acquisition scene.
Step S2, obtaining an implicit characterization model: an artificial intelligence algorithm is used to remove unnecessary backgrounds from the captured photos and to generate an implicit characterization model of the three-dimensional scene.
It should be noted that the implicit characterization model is a three-dimensional model expressed by a neural network and is mainly expressed with respect to visual information; the implicit characterization can make an application side (e.g., a user) perceive the shape in other manners (e.g., visually), and the fidelity of the implicit characterization model is higher than that of an explicit characterization model.
Step S3, obtaining an implicit representation space index: a global spatial index (i.e., a first spatial index) may be constructed based on the scene spatial range of the acquisition record; the relevant explicit envelope can also be extracted based on an implicit characterization model to replace the spatial range of the acquisition record as a global spatial index.
When the number of scenes is sufficiently large, there are many scene space ranges (A_1, A_2, ..., A_n). When a certain specified spatial range (V) is of interest, the scene space ranges falling within the range of interest need to be found quickly (for example, when V overlaps A_10 and A_100 and does not overlap any other scene space range, it can be determined that A_10 and A_100 are within the range of V). The technical means that helps to quickly screen scene space ranges in this way is called the global spatial index.
Step S4, screening an implicit expression feature set based on an observation view angle: after the viewing angle is determined, a viewing cone may be determined, and then the implicit feature set within the viewing cone may be filtered out and a fast spatial index (i.e., a second spatial index) constructed based on the geometric envelope of the implicit feature set.
It should be noted that, the viewing cone is a more accurate expression of the view angle of the map, and is a specific mathematical geometry, and can accurately express the spatial range seen from the view angle.
Step S5, implementing implicit representation set rendering based on the observation view angle to obtain the observation image: first, rays emitted from the observation view angle can be sampled within the viewing cone; second, a specific implicit characterization model can be located based on the fast spatial index and the model capability invoked to obtain the values (RGBσ) of the sampling points; the values of the sampling points can then be accumulated; and finally, the superposed pixel values are superimposed on a background canvas, so that the observation image can be obtained.
It should be noted that, the steps S2 to S5 may be completed in a cloud scene, and the client may directly display the rendered picture without consuming computing power.
The related cloud computing power scheme may include the following (a consolidated sketch is given after this list):
1. implicit characterization model production and implicit characterization spatial index extraction can distribute production tasks to cloud containers for execution.
2. The storage and query capabilities of the implicit spatial index are supported by powerful cloud database products.
3. Model screening based on viewing angles includes the steps of:
a) After the observation visual angle is stable, the view cone range is partitioned into a plurality of sub view cones, and the sub view cones are distributed to a plurality of cloud map engines for execution;
b) And calling a cloud database by the cloud map engine to finish coarse screening, and then creating a temporary index in the memory based on screening results.
4. Rendering based on the observation perspective and the implicit characterization model:
a) Distributing reasoning of the sampling points to a model reasoning cluster by a cloud map engine, and caching (cache) a representation model by machines on the cluster to reduce the cost of repeated loading of the model;
b) The result of the model reasoning cluster is accumulated by the cloud map engine and rendered to the canvas;
c) And after the client receives the partition result, rendering the partition result on the display screen sequentially according to the partition, so as to obtain an observation image.
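The cloud pipeline above can be summarized in the consolidated sketch referenced at the start of this list; every function is a hypothetical stand-in for the corresponding cloud component, and none of the names reflect an actual product or API.

```python
def cloud_db_query(viewing_cone):
    """Coarse screening against the cloud database (stand-in)."""
    return ["model_A10", "model_A100"]


def build_temporary_index(sub_cones, candidates):
    """Cloud map engine: create a temporary in-memory index from the coarse screening result."""
    return {sub_cone: candidates for sub_cone in sub_cones}


def inference_cluster(sample_points, temp_index):
    """Model inference cluster: return one partitioned rendering result per sub viewing cone."""
    return [[(0.5, 0.5, 0.5)] * len(sample_points) for _ in temp_index]


def render_observation_image(viewing_cone, sample_points, draw_partition):
    sub_cones = [viewing_cone]                              # sub-cone partitioning omitted here
    candidates = cloud_db_query(viewing_cone)               # coarse screening (step 3b)
    temp_index = build_temporary_index(sub_cones, candidates)
    for partition in inference_cluster(sample_points, temp_index):
        draw_partition(partition)                           # client displays partition by partition


render_observation_image("cone_V", [(0, 0, 1), (0, 0, 2)],
                         draw_partition=lambda p: print(len(p), "pixels drawn"))
```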
In this method, the three-dimensional map is implemented based on the implicit characterization model; an index system is built from the acquisition information and the geometric envelope so that the implicit characterization models related to the viewing cone can be located quickly, and cloud resources are combined to provide powerful computing power for view rendering.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus a necessary general hardware platform, but that it may also be implemented by means of hardware. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method described in the embodiments of the present application.
Example 2
There is also provided in accordance with an embodiment of the present application a method of processing a three-dimensional object, it being noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
Fig. 4 is a flowchart of a method for processing a three-dimensional object according to embodiment 2 of the present application, and as shown in fig. 4, the method may include the steps of:
step S402, responding to an input instruction acted on an operation interface, and displaying a three-dimensional object on the operation interface, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment;
and step S404, in response to the observation instruction acting on the operation interface, displaying an observation image corresponding to the observation instruction on the operation interface, wherein the observation image is obtained by rendering the target model based on the observation view angle corresponding to the observation instruction, and the target model is determined from the implicit characterization model corresponding to the three-dimensional object based on the observation view angle.
FIG. 5 is a schematic display diagram of an alternative operation interface according to embodiment 2 of the present application. As shown in FIG. 5, firstly, in response to an input instruction entered by the user in an input instruction input area, the operation interface may display a three-dimensional object in a display area, where the three-dimensional object is used to characterize a virtual object corresponding to a real object in a real environment; secondly, in response to an observation instruction entered by the user in an observation instruction input area, the operation interface displays an observation image corresponding to the observation instruction in the display area, where the observation image is obtained by rendering the target model based on the observation view angle corresponding to the observation instruction, and the target model is determined from the implicit characterization model corresponding to the three-dimensional object based on the observation view angle.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in another order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus a necessary general hardware platform, but that it may also be implemented by means of hardware. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method described in the embodiments of the present application.
Example 3
There is also provided, in accordance with an embodiment of the present application, a method of processing a three-dimensional object in a virtual reality scene that may be applied to a virtual reality VR device, an augmented reality AR device, or the like, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
Fig. 6 is a flowchart of a processing method of a three-dimensional object according to embodiment 3 of the present application. As shown in fig. 6, the method may include the steps of:
step S602, a three-dimensional object is displayed on a display screen of a virtual reality VR device or an augmented reality AR device, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment;
step S604, capturing an observation view angle for observing the three-dimensional object;
step S606, determining a target model corresponding to an observation visual angle from an implicit characterization model corresponding to the three-dimensional object;
step S608, rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle;
step S610, driving the VR device or the AR device to render and display the observation image.
In an alternative embodiment, firstly, in response to an acquisition request of a user, a three-dimensional object may be displayed on a display screen of a virtual reality VR device or an augmented reality AR device, where the three-dimensional object is used to represent a virtual object corresponding to the real object in a real environment, secondly, an observation view angle for observing the three-dimensional object may be captured by the virtual reality VR device or the augmented reality AR device, then, a target model corresponding to the observation view angle may be determined from an implicit representation model corresponding to the three-dimensional object, then, the target model may be rendered based on the observation view angle, an observation image corresponding to the observation view angle may be obtained, and finally, the VR device or the AR device may be driven to render and display the observation image.
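As a hedged illustration of this flow (not an actual VR SDK), the sketch below assumes a device object exposing display and head-tracking calls, an implicit model object, and a renderer; every name is a placeholder.

```python
# Sketch of steps S602-S610 on a VR/AR headset; device, implicit_model and renderer
# are assumed objects, not a real SDK.
def process_on_headset(device, implicit_model, renderer):
    device.display_object()                                   # step S602
    view_angle = device.capture_view_angle()                  # step S604, e.g. from head pose
    target_model = implicit_model.select_target(view_angle)   # step S606
    image = renderer.render(target_model, view_angle)         # step S608
    device.render_and_display(image)                          # step S610
```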
Alternatively, in the present embodiment, the above-described method for processing a three-dimensional object may be applied to a hardware environment constituted by a server and a virtual reality device. The observation image is shown on a presentation screen of the virtual reality VR device or the augmented reality AR device, and the server may be a server corresponding to a media file operator, where the network includes, but is not limited to, a wide area network, a metropolitan area network, or a local area network, and the virtual reality device is not limited to: virtual reality helmets, virtual reality glasses, virtual reality all-in-one machines, and the like.
Optionally, the virtual reality device comprises: a memory, a processor, and a transmission device. The memory is used to store an application program that can be used to perform: displaying a three-dimensional object on a presentation screen of a Virtual Reality (VR) device or an Augmented Reality (AR) device, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; capturing an observation view angle for observing the three-dimensional object; determining a target model corresponding to the observation view angle from an implicit characterization model corresponding to the three-dimensional object; rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle; and driving the VR device or the AR device to render and display the observation image.
It should be noted that, the method for processing a three-dimensional object applied to a VR device or an AR device in this embodiment may include the method of the embodiment shown in fig. 6, so as to achieve the purpose of driving the VR device or the AR device to display an observed image.
Alternatively, the processor of this embodiment may call the application program stored in the memory through the transmission device to perform the above steps. The transmission device can receive the media file sent by the server through the network and can also be used for data transmission between the processor and the memory.
Optionally, in the virtual reality device, a head-mounted display (HMD) with eye tracking is provided; a screen in the HMD is used for displaying the video picture; an eye tracking module in the HMD is used for acquiring the real-time motion track of the eyes of the user; a tracking system is used for tracking the position information and motion information of the user in the real three-dimensional space; and a calculation processing unit is used for acquiring the real-time position and motion information of the user from the tracking system, and calculating the three-dimensional coordinates of the head of the user in the virtual three-dimensional space, the visual field orientation of the user in the virtual three-dimensional space, and the like.
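For illustration only, one common way to turn the tracked head orientation into the view direction used as the observation view angle is to convert yaw and pitch angles into a unit vector; the convention below (yaw about the vertical axis, pitch about the lateral axis, angles in radians) is an assumption, not something mandated by the embodiment.

```python
import math

def view_direction(yaw, pitch):
    # Unit view direction under the assumed yaw/pitch convention;
    # with yaw = pitch = 0 the user looks along +z.
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return (x, y, z)
```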
In this embodiment of the present application, the virtual reality device may be connected to a terminal, and the terminal and the server are connected through a network. The virtual reality device is not limited to: virtual reality helmets, virtual reality glasses, virtual reality all-in-one machines, and the like; the terminal is not limited to a PC, a mobile phone, a tablet PC, etc.; and the server may be a server corresponding to a media file operator, where the network includes, but is not limited to, a wide area network, a metropolitan area network, or a local area network.
Example 4
There is also provided in accordance with an embodiment of the present application a method of processing a three-dimensional object, it being noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
Fig. 7 is a flowchart of a method for processing a three-dimensional object according to embodiment 4 of the present application, and as shown in fig. 7, the method may include the steps of:
step S702, an observation view angle is obtained by calling a first interface, wherein the first interface comprises a first parameter, a parameter value of the first parameter is the observation view angle, the observation view angle is a view angle for observing a three-dimensional object, and the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment;
step S704, determining a target model corresponding to an observation visual angle from an implicit characterization model corresponding to the three-dimensional object;
step S706, rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle;
step S708, outputting the observed image by calling a second interface, wherein the second interface includes a second parameter, and a parameter value of the second parameter is the observed image.
The first interface may be an interface through which the cloud server obtains the observation view angle from the mobile terminal. The second interface may be an interface through which the cloud server outputs the observation image to the mobile terminal.
In an alternative embodiment, an observation view angle can be obtained from a mobile terminal by calling a first interface, wherein the first interface comprises a first parameter, a parameter value of the first parameter is the observation view angle, the observation view angle is a view angle for observing a three-dimensional object, and the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; secondly, determining a target model corresponding to the observation visual angle from an implicit characterization model corresponding to the three-dimensional object; then rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle; and finally, outputting the observed image to the mobile terminal by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the observed image.
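The following sketch phrases the first and second interfaces as a single HTTP endpoint on the cloud server; the endpoint behaviour, JSON field names, and the two helper functions are assumptions made purely for illustration, not the interfaces required by this embodiment.

```python
from http.server import BaseHTTPRequestHandler
import json

def select_target_model(view_angle):
    # Placeholder: determine the target model for this view angle from the implicit model.
    return {"view_angle": view_angle}

def render(target_model, view_angle):
    # Placeholder: render the target model; a real service would return image data.
    return f"observation image for {view_angle}"

class RenderHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # First interface: the first parameter carries the observation view angle.
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        view_angle = request["observation_view_angle"]

        observation_image = render(select_target_model(view_angle), view_angle)

        # Second interface: the second parameter carries the observation image.
        body = json.dumps({"observation_image": observation_image}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
```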
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in another order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus a necessary general hardware platform, but that it may also be implemented by means of hardware. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method described in the embodiments of the present application.
Example 5
According to an embodiment of the application, there is also provided a processing system for implementing the above method of processing a three-dimensional object. Fig. 8 is a schematic diagram of a three-dimensional object processing system according to embodiment 5 of the present application; as shown in fig. 8, the system includes: a client device 80 and a server cluster 82.
The client device is used for displaying a three-dimensional object and capturing an observation view angle for observing the three-dimensional object, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; the server cluster is connected with the client and is used for determining a target model corresponding to the observation visual angle from the implicit characterization model corresponding to the three-dimensional object, and rendering the target model based on the observation visual angle to obtain an observation image corresponding to the observation visual angle; the client device is also configured to display the observation image.
In the above embodiment of the present application, the server cluster includes: control device, database and processing device.
The control device is used for constructing a viewing cone corresponding to the observation visual angle, wherein the viewing cone is used for representing an observation space range corresponding to the observation visual angle; the database is used for storing a first spatial index corresponding to the implicit characterization model, wherein different indexes contained in the first spatial index characterize different spatial ranges corresponding to the implicit characterization model; the processing device is connected with the control device and the database, and is used for screening an implicit feature set in the viewing cone from the implicit characterization model based on the first spatial index corresponding to the implicit characterization model and determining the target model based on the implicit feature set.
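As an illustration of how such a viewing cone could be constructed (this is an assumed pinhole-camera convention, not the only representation the control device might use), the sketch below derives the eight frustum corner points from a camera position, viewing direction, field of view and near/far distances; the frustum planes used in the screening sketch further below can be derived from the same quantities.

```python
import numpy as np

def frustum_corners(cam_pos, forward, up, fov_y, aspect, near, far):
    # Build the eight corner points of the viewing cone (view frustum).
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)

    corners = []
    for depth in (near, far):
        h = depth * np.tan(fov_y / 2.0)   # half height of the image plane at this depth
        w = h * aspect                    # half width
        center = cam_pos + depth * forward
        for sx in (-1, 1):
            for sy in (-1, 1):
                corners.append(center + sx * w * right + sy * h * up)
    return np.stack(corners)              # shape (8, 3)
```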
In the above embodiments of the present application, the processing device includes: a plurality of processing engines.
The plurality of processing engines are used for acquiring, in parallel, sub-models corresponding to a plurality of subspace ranges, wherein the processing engines correspond to the subspace ranges one by one, and the subspace ranges are obtained by dividing the viewing cone by the control device; the control device is also used for summarizing the sub-models to obtain the target model.
In the foregoing embodiment of the present application, the server cluster further includes: a processing device and an inference cluster.
The processing device is used for determining sampling points in the viewing cone based on the observation visual angle; the inference cluster is connected with the processing device and is used for determining the values of the sampling points based on the target model; the processing device is also used for superposing the values of the sampling points to obtain superposed pixel values, and superposing the superposed pixel values with a preset image to generate the observation image.
In the above embodiment of the present application, the inference cluster includes: a plurality of model reasoning devices.
The plurality of model reasoning devices are used for determining values of a plurality of sampling point sets based on the target model in parallel, wherein the model reasoning devices correspond to the sampling point sets one to one, and the sampling point sets are obtained by dividing the sampling points; the processing device is further configured to summarize the values of the plurality of sampling point sets to obtain the values of the sampling points.
In the foregoing embodiment of the present application, the server cluster further includes: a container device.
The container device is used for generating the implicit characterization model based on image material and constructing the first spatial index corresponding to the implicit characterization model, wherein the image material is obtained by shooting the real object in the real environment.
Example 6
According to an embodiment of the application, there is also provided a three-dimensional object processing apparatus for implementing the three-dimensional object processing method. Fig. 9 is a schematic view of a three-dimensional object processing apparatus according to embodiment 6 of the present application, as shown in fig. 9, the apparatus including: a capture module 92, a determination module 94, and a rendering module 96.
The capturing module is used for capturing an observation view angle for observing a three-dimensional object, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; the determining module is used for determining a target model corresponding to the observation visual angle from the implicit characterization model corresponding to the three-dimensional object; the rendering module is used for rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle.
It should be noted that the capturing module 92, the determining module 94 and the rendering module 96 correspond to steps S302 to S306 in embodiment 1, and the examples and application scenarios implemented by the three modules and the corresponding steps are the same, but are not limited to those disclosed in embodiment 1. It should be noted that the above modules or units may be hardware components or software components stored in a memory and processed by one or more processors, or the above modules may also be part of an apparatus and may be run in the AR/VR device provided in embodiment 1.
In the above embodiments of the present application, the determining module includes: the device comprises a construction unit, a screening unit and a first determination unit.
The construction unit is used for constructing a viewing cone corresponding to the observation visual angle, wherein the viewing cone is used for representing an observation space range corresponding to the observation visual angle; the screening unit is used for screening an implicit feature set in the viewing cone from the implicit characterization model based on a first spatial index corresponding to the implicit characterization model, wherein different spatial indexes contained in the first spatial index characterize different spatial ranges corresponding to the implicit characterization model; the first determining unit is used for determining the target model based on the implicit feature set.
In the above embodiments of the present application, the screening unit includes: the system comprises an acquisition subunit, a first determination subunit and an addition subunit.
The acquisition subunit is used for acquiring the space ranges corresponding to the different space indexes; the first determining subunit is used for determining whether a crossing area exists between a space range corresponding to different space indexes and a view cone; the adding subunit is configured to add, to the implicit feature set, an implicit feature corresponding to the first spatial index when the spatial range corresponding to the first spatial index has an intersection region with the view cone.
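The screening step above can be pictured as a frustum-versus-bounding-box test over the entries of the first spatial index. The sketch below uses the standard conservative AABB-against-plane rejection and models the index as a list of axis-aligned spatial ranges; this layout and the dictionary field names are assumptions for illustration, not the patent's required data structure.

```python
import numpy as np

def aabb_intersects_frustum(aabb_min, aabb_max, planes):
    # planes: iterable of (normal, d) with inward-facing normals, i.e. n·x + d >= 0 inside.
    aabb_min, aabb_max = np.asarray(aabb_min), np.asarray(aabb_max)
    for normal, d in planes:
        # The AABB corner furthest along the plane normal ("positive vertex").
        p = np.where(np.asarray(normal) >= 0, aabb_max, aabb_min)
        if np.dot(normal, p) + d < 0:     # even the furthest corner is outside this plane
            return False
    return True

def screen_implicit_features(spatial_index, frustum_planes):
    # Keep the implicit features of every index entry whose spatial range crosses the frustum.
    selected = []
    for entry in spatial_index:           # entry: {"min": ..., "max": ..., "features": ...}
        if aabb_intersects_frustum(entry["min"], entry["max"], frustum_planes):
            selected.append(entry["features"])
    return selected
```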
In the foregoing embodiment of the present application, the determining module further includes: the device comprises a dividing unit, an acquiring unit and a summarizing unit.
The dividing unit is used for dividing the viewing cone to obtain a plurality of subspace ranges; the acquisition unit is used for acquiring the sub-models corresponding to the subspace ranges in parallel through the plurality of processing engines, wherein the processing engines correspond to the subspace ranges one by one; and the summarizing unit is used for summarizing the sub-models to obtain the target model.
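A small sketch of this dividing/acquiring/summarizing path follows; the per-engine lookup is stubbed out, and the thread pool merely stands in for the one-to-one mapping between processing engines and subspace ranges.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_sub_model(subspace_range):
    # Placeholder: return the implicit features that fall inside this subspace range.
    return {"range": subspace_range}

def build_target_model(subspace_ranges):
    # One worker per subspace range, mirroring one processing engine per range.
    with ThreadPoolExecutor(max_workers=len(subspace_ranges)) as pool:
        sub_models = list(pool.map(fetch_sub_model, subspace_ranges))
    return {"sub_models": sub_models}     # summarizing step
```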
In the above embodiments of the present application, the rendering module includes: the device comprises a sampling unit, a second determining unit, a first superposition unit and a second superposition unit.
The sampling unit is used for determining sampling points in the viewing cone based on the observation visual angle; the second determining unit is used for determining the value of the sampling point based on the target model; the first superposition unit is used for superposing the values of the sampling points to obtain superposed pixel values; and the second superposition unit is used for superposing the superposed pixel values with a preset image to generate an observation image.
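The sampling and superposition steps resemble volume-rendering compositing. The sketch below renders one pixel under the assumption, borrowed from NeRF-style implicit representations and not stated verbatim in this embodiment, that the target model returns a density and a colour per sampling point; the accumulated pixel value is then blended over the preset background image.

```python
import numpy as np

def render_pixel(origin, direction, target_model, background, near=0.1, far=10.0, n=64):
    # origin, direction: ray for this pixel; background: RGB array of the preset image pixel.
    ts = np.linspace(near, far, n)
    points = np.asarray(origin) + ts[:, None] * np.asarray(direction)  # sampling points
    sigmas, colours = target_model.query(points)       # values of the sampling points

    deltas = np.diff(ts, append=far)
    alphas = 1.0 - np.exp(-sigmas * deltas)            # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans

    pixel = (weights[:, None] * colours).sum(axis=0)   # superposed pixel value
    return pixel + (1.0 - weights.sum()) * np.asarray(background)  # blend with preset image
```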
In the above embodiment of the present application, the second determining unit includes: a construction subunit, a second determination subunit, and a third determination subunit.
The construction subunit is used for constructing a second spatial index based on the geometric envelope of the target model; the second determining subunit is used for determining the implicit representation of the target corresponding to the sampling point from the target model based on the second spatial index; the third determination subunit is configured to determine a value of the sampling point based on the implicit characterization of the target.
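One way to realize the second spatial index is a uniform grid laid over the bounding box of the target model's geometric envelope, with each cell pointing at the implicit representation responsible for that region; a sampling point is then resolved to its target implicit representation by a constant-time cell lookup. The grid layout below is only an assumed data structure, not the one prescribed here.

```python
import numpy as np

class SecondSpatialIndex:
    def __init__(self, env_min, env_max, resolution, cell_to_repr):
        self.env_min = np.asarray(env_min, dtype=float)
        self.env_max = np.asarray(env_max, dtype=float)
        self.resolution = resolution              # cells per axis
        self.cell_to_repr = cell_to_repr          # dict: (i, j, k) -> implicit representation

    def lookup(self, point):
        # Map the sampling point into grid coordinates and fetch the cell's representation.
        rel = (np.asarray(point) - self.env_min) / (self.env_max - self.env_min)
        cell = tuple(np.clip((rel * self.resolution).astype(int), 0, self.resolution - 1))
        return self.cell_to_repr.get(cell)        # target implicit representation, if any
```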
In the above embodiment of the present application, the second determining unit further includes: a dividing subunit, a fourth determining subunit and a summarizing subunit.
The dividing subunit is used for dividing the sampling points to obtain a plurality of sampling point sets; the fourth determining subunit is used for determining values of the plurality of sampling point sets based on the target model in parallel through a plurality of model reasoning devices, wherein the model reasoning devices correspond to the sampling point sets one to one; and the summarizing subunit is used for summarizing the values of the plurality of sampling point sets to obtain the values of the sampling points.
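For illustration, this split/parallel-inference/summarize path can be sketched with a thread pool, where each "model reasoning device" is simply a callable that maps a batch of sampling points to values; the array shapes are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def infer_in_parallel(sample_points, reasoning_devices):
    # Divide the sampling points into one set per reasoning device, evaluate the sets in
    # parallel, then summarize the results back into a single array in the original order.
    point_sets = np.array_split(np.asarray(sample_points), len(reasoning_devices))
    with ThreadPoolExecutor(max_workers=len(reasoning_devices)) as pool:
        values = list(pool.map(lambda pair: pair[0](pair[1]),
                               zip(reasoning_devices, point_sets)))
    return np.concatenate(values)                 # values of the sampling points
```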
In the above embodiments of the present application, the apparatus further includes: shooting module, generating module and constructing module.
The shooting module is used for shooting a real object in a real environment to obtain an image material; the generation module is used for generating an implicit characterization model based on the image materials; the construction module is used for constructing a first spatial index corresponding to the implicit characterization model.
In the above embodiment of the present application, the construction module includes at least one of the following: a third determining unit and an extraction unit.
The third determining unit is used for determining a scene space range of the three-dimensional object in the real environment and constructing a first space index based on the scene space range; the extraction unit is used for extracting an explicit envelope of the implicit characterization model and constructing a first spatial index based on the explicit envelope.
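The two construction options above can be sketched as follows: a regular partition of the scene space range, or entries derived from axis-aligned boxes of the extracted explicit envelope. The entry format matches the screening sketch earlier and, like it, is an assumption made for illustration.

```python
import numpy as np

def index_from_scene_range(scene_min, scene_max, cells_per_axis):
    # Option 1: partition the scene space range of the three-dimensional object into a grid.
    scene_min, scene_max = np.asarray(scene_min, float), np.asarray(scene_max, float)
    step = (scene_max - scene_min) / cells_per_axis
    entries = []
    for i in range(cells_per_axis):
        for j in range(cells_per_axis):
            for k in range(cells_per_axis):
                lo = scene_min + step * np.array([i, j, k])
                entries.append({"min": lo, "max": lo + step, "features": None})
    return entries

def index_from_explicit_envelope(envelope_boxes):
    # Option 2: build entries from (aabb_min, aabb_max) boxes of the explicit envelope.
    return [{"min": np.asarray(lo, float), "max": np.asarray(hi, float), "features": None}
            for lo, hi in envelope_boxes]
```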
Example 7
According to an embodiment of the application, there is also provided a three-dimensional object processing apparatus for implementing the three-dimensional object processing method. Fig. 10 is a schematic view of a three-dimensional object processing apparatus according to embodiment 7 of the present application, as shown in fig. 10, including: a first display module 1002 and a second display module 1004.
The first display module is used for displaying, in response to an input instruction acting on the operation interface, a three-dimensional object on the operation interface, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; the second display module is used for displaying, in response to an observation instruction acting on the operation interface, an observation image corresponding to the observation instruction on the operation interface, wherein the observation image is obtained by rendering, based on the observation view angle corresponding to the observation instruction, the target model corresponding to that observation view angle, and the target model is determined from an implicit characterization model corresponding to the three-dimensional object based on the observation view angle.
Here, it should be noted that the first display module 1002 and the second display module 1004 correspond to steps S402 to S404 in embodiment 2, and the examples and application scenarios implemented by the two modules and the corresponding steps are the same, but are not limited to those disclosed in embodiment 1. It should be noted that the above modules or units may be hardware components or software components stored in a memory and processed by one or more processors, or the above modules may also be part of an apparatus and may be run in the AR/VR device provided in embodiment 1.
Example 8
According to an embodiment of the application, there is also provided a three-dimensional object processing apparatus for implementing the three-dimensional object processing method. Fig. 11 is a schematic view of a three-dimensional object processing apparatus according to embodiment 8 of the present application; as shown in fig. 11, the apparatus includes: a display module 1102, a capture module 1104, a determination module 1106, a rendering module 1108, and a driving module 1110.
The display module is used for displaying the three-dimensional object on a display picture of the virtual reality VR device or the augmented reality AR device, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; the capturing module is used for capturing an observation visual angle for observing the three-dimensional object; the determining module is used for determining a target model corresponding to the observation visual angle from the implicit characterization model corresponding to the three-dimensional object; the rendering module is used for rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle; the driving module is used for driving the VR device or the AR device to render and display the observed image.
Here, the display module 1102, the capturing module 1104, the determining module 1106, the rendering module 1108, and the driving module 1110 correspond to steps S602 to S610 in embodiment 3, and the examples and application scenarios implemented by the five modules and the corresponding steps are the same, but are not limited to those disclosed in embodiment 1. It should be noted that the above modules or units may be hardware components or software components stored in a memory and processed by one or more processors, or the above modules may also be part of an apparatus and may be run in the AR/VR device provided in embodiment 1.
Example 9
According to an embodiment of the application, there is also provided a three-dimensional object processing apparatus for implementing the three-dimensional object processing method. Fig. 12 is a schematic view of a three-dimensional object processing apparatus according to embodiment 9 of the present application, as shown in fig. 12, including: a calling module 1202, a determining module 1204, a rendering module 1206, and an output module 1208.
The calling module is used for obtaining an observation visual angle through calling the first interface, wherein the first interface comprises a first parameter, a parameter value of the first parameter is the observation visual angle, the observation visual angle is a visual angle for observing a three-dimensional object, and the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; the determining module is used for determining a target model corresponding to the observation visual angle from the implicit characterization model corresponding to the three-dimensional object; the rendering module is used for rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle; the output module is used for outputting the observed image by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the observed image.
It should be noted that the invoking module 1202, the determining module 1204, the rendering module 1206 and the output module 1208 correspond to steps S702 to S708 in embodiment 4, and the examples and application scenarios implemented by the four modules and the corresponding steps are the same, but are not limited to those disclosed in embodiment 1. It should be noted that the above modules or units may be hardware components or software components stored in a memory and processed by one or more processors, or the above modules may also be part of an apparatus and may be run in the AR/VR device provided in embodiment 1.
Example 10
Embodiments of the present application may provide an AR/VR device that may be any one of a group of AR/VR devices. Alternatively, in this embodiment, the AR/VR device may be replaced by a terminal device such as a mobile terminal.
Alternatively, in this embodiment, the AR/VR device may be located in at least one network device among a plurality of network devices of the computer network.
In this embodiment, the above-mentioned AR/VR device may execute the program codes of the following steps in the processing method of the three-dimensional object: displaying a three-dimensional object on a presentation screen of a Virtual Reality (VR) device or an Augmented Reality (AR) device, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; capturing an observation view angle for observing the three-dimensional object; determining a target model corresponding to the observation view angle from an implicit characterization model corresponding to the three-dimensional object; rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle; and driving the VR device or the AR device to render and display the observation image.
Alternatively, fig. 13 is a block diagram of a computer terminal according to an embodiment of the present application. As shown in fig. 13, the computer terminal a may include: one or more (only one is shown) processors 1302, memory 1304, memory controller, and peripheral interfaces, where the peripheral interfaces are coupled to the radio frequency module, audio module, and display.
The memory may be used to store software programs and modules, such as program instructions/modules corresponding to the method and apparatus for processing a three-dimensional object in the embodiments of the present application, and the processor executes the software programs and modules stored in the memory, thereby executing various functional applications and data processing, that is, implementing the method for processing a three-dimensional object. The memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory remotely located with respect to the processor, which may be connected to terminal a through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor may call the information and the application program stored in the memory through the transmission device to perform the following steps: capturing an observation view angle for observing a three-dimensional object, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; determining a target model corresponding to an observation visual angle from an implicit characterization model corresponding to the three-dimensional object; rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle.
Optionally, the above processor may further execute program code for: constructing a viewing cone corresponding to the observation visual angle, wherein the viewing cone is used for representing an observation space range corresponding to the observation visual angle; screening an implicit expression feature set in a view cone from the implicit characterization model based on a first spatial index corresponding to the implicit characterization model, wherein different spatial indexes contained in the first spatial index characterize different spatial ranges corresponding to the implicit characterization model; based on the implicit feature set, a target model is determined.
Optionally, the above processor may further execute program code for: acquiring spatial ranges corresponding to different spatial indexes; determining whether crossing areas exist between the space ranges corresponding to different space indexes and the viewing cones; and adding the implicit representation corresponding to the first spatial index to the implicit representation set under the condition that the spatial range corresponding to the first spatial index and the viewing cone have an intersection region.
Optionally, the above processor may further execute program code for: dividing the viewing cone to obtain a plurality of subspace ranges; obtaining, in parallel, sub-models corresponding to the plurality of subspace ranges through a plurality of processing engines, wherein the processing engines correspond to the subspace ranges one by one; and summarizing the sub-models to obtain the target model.
Optionally, the above processor may further execute program code for: determining sampling points in the viewing cone based on the observation view angle; determining a value of the sampling point based on the target model; superposing the values of the sampling points to obtain superposed pixel values; and superposing the superposed pixel values with a preset image to generate an observation image.
Optionally, the above processor may further execute program code for: constructing a second spatial index based on the geometric envelope of the target model; determining a target implicit representation corresponding to the sampling point from the target model based on the second spatial index; based on the implicit characterization of the target, the value of the sampling point is determined.
Optionally, the above processor may further execute program code for: dividing sampling points to obtain a plurality of sampling point sets; determining values of a plurality of sampling point sets based on a target model through a plurality of model reasoning devices in parallel, wherein the plurality of model reasoning devices and the plurality of sampling point sets correspond to one another; summarizing the values of the plurality of sampling point sets to obtain the values of the sampling points.
Optionally, the above processor may further execute program code for: shooting a real object in a real environment to obtain an image material; generating an implicit characterization model based on the image material; and constructing a first spatial index corresponding to the implicit characterization model.
Optionally, the above processor may further execute a program code for one of the following steps: determining a scene space range of the three-dimensional object in a real environment, and constructing a first space index based on the scene space range; an explicit envelope of the implicit characterization model is extracted and a first spatial index is constructed based on the explicit envelope.
By adopting the embodiment of the application, a scheme is provided of: capturing an observation view angle for observing a three-dimensional object, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; determining a target model corresponding to the observation view angle from an implicit characterization model corresponding to the three-dimensional object; and rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle. It is easy to note that, in a three-dimensional map scene, the target model to be rendered can be obtained quickly and accurately by capturing the observation view angle for observing the three-dimensional object and using the implicit characterization model constructed in advance; further, the target model is rendered according to the observation view angle, and since the target model is based on a neural network, the observation image of the three-dimensional object corresponding to the observation view angle can be reflected accurately and intuitively in a visual manner, thereby achieving the purpose of accurately obtaining the observation image corresponding to the three-dimensional object, achieving the technical effect of improving the rendering accuracy of the three-dimensional object, and solving the technical problem of low rendering accuracy of the three-dimensional object in the related art.
It will be appreciated by those skilled in the art that the configuration shown in fig. 13 is only illustrative, and the computer terminal may be a smart phone (such as an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, etc. Fig. 13 does not limit the structure of the above electronic device. For example, the computer terminal A may also include more or fewer components (such as a network interface, a display device, etc.) than shown in fig. 13, or have a different configuration from that shown in fig. 13.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program for instructing a terminal device to execute in association with hardware, the program may be stored in a computer readable storage medium, and the storage medium may include: flash disk, read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), magnetic or optical disk, and the like.
Example 11
Embodiments of the present application also provide a computer-readable storage medium. Alternatively, in the present embodiment, the above-described computer-readable storage medium may be used to store program codes executed by the processing method of the three-dimensional object provided in the above-described embodiment 1.
Alternatively, in this embodiment, the above-mentioned computer readable storage medium may be located in any one of the AR/VR device terminals in the AR/VR device network or in any one of the mobile terminals in the mobile terminal group.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: capturing an observation view angle for observing a three-dimensional object, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment; determining a target model corresponding to an observation visual angle from an implicit characterization model corresponding to the three-dimensional object; rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle.
Optionally, in the present embodiment, the computer readable storage medium is further configured to store program code for performing the steps of: constructing a viewing cone corresponding to the observation visual angle, wherein the viewing cone is used for representing an observation space range corresponding to the observation visual angle; screening an implicit expression feature set in a view cone from the implicit characterization model based on a first spatial index corresponding to the implicit characterization model, wherein different spatial indexes contained in the first spatial index characterize different spatial ranges corresponding to the implicit characterization model; based on the implicit feature set, a target model is determined.
Optionally, in the present embodiment, the computer readable storage medium is further configured to store program code for performing the steps of: acquiring spatial ranges corresponding to different spatial indexes; determining whether crossing areas exist between the space ranges corresponding to different space indexes and the viewing cones; and adding the implicit representation corresponding to the first spatial index to the implicit representation set under the condition that the spatial range corresponding to the first spatial index and the viewing cone have an intersection region.
Optionally, in the present embodiment, the computer readable storage medium is further configured to store program code for performing the steps of: dividing the viewing cone to obtain a plurality of subspace ranges; obtaining, in parallel, sub-models corresponding to the plurality of subspace ranges through a plurality of processing engines, wherein the processing engines correspond to the subspace ranges one by one; and summarizing the sub-models to obtain the target model.
Optionally, in the present embodiment, the computer readable storage medium is further configured to store program code for performing the steps of: determining sampling points in the viewing cone based on the observation view angle; determining a value of the sampling point based on the target model; superposing the values of the sampling points to obtain superposed pixel values; and superposing the superposed pixel values with a preset image to generate an observation image.
Optionally, in the present embodiment, the computer readable storage medium is further configured to store program code for performing the steps of: constructing a second spatial index based on the geometric envelope of the target model; determining a target implicit representation corresponding to the sampling point from the target model based on the second spatial index; based on the implicit characterization of the target, the value of the sampling point is determined.
Optionally, in the present embodiment, the computer readable storage medium is further configured to store program code for performing the steps of: dividing sampling points to obtain a plurality of sampling point sets; determining values of a plurality of sampling point sets based on a target model through a plurality of model reasoning devices in parallel, wherein the plurality of model reasoning devices and the plurality of sampling point sets correspond to one another; summarizing the values of the plurality of sampling point sets to obtain the values of the sampling points.
Optionally, in the present embodiment, the computer readable storage medium is further configured to store program code for performing the steps of: shooting a real object in a real environment to obtain an image material; generating an implicit characterization model based on the image material; and constructing a first spatial index corresponding to the implicit characterization model.
Optionally, in the present embodiment, the computer readable storage medium is further configured to store program code for performing one of the following steps: determining a scene space range of the three-dimensional object in a real environment, and constructing a first space index based on the scene space range; an explicit envelope of the implicit characterization model is extracted and a first spatial index is constructed based on the explicit envelope.
The foregoing embodiment numbers of the present application are merely for description and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary; for example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed between the components may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely a preferred embodiment of the present application, and it should be noted that modifications and adaptations may be made by those skilled in the art without departing from the principles of the present application, and such modifications and adaptations are also intended to fall within the scope of the present application.

Claims (14)

1. A method of processing a three-dimensional object, comprising:
capturing an observation view angle for observing a three-dimensional object, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment;
determining a target model corresponding to the observation visual angle from an implicit characterization model corresponding to the three-dimensional object;
rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle.
2. The method of claim 1, wherein determining the target model corresponding to the viewing perspective from the implicit characterization model corresponding to the three-dimensional object comprises:
constructing a viewing cone corresponding to the observation visual angle, wherein the viewing cone is used for representing an observation space range corresponding to the observation visual angle;
screening an implicit feature set in the view cone from the implicit characterization model based on a first spatial index corresponding to the implicit characterization model, wherein different spatial indexes contained in the first spatial index characterize different spatial ranges corresponding to the implicit characterization model;
and determining the target model based on the implicit feature set.
3. The method of claim 2, wherein screening the implicit feature set within the view cone from the implicit characterization model based on a first spatial index corresponding to the implicit characterization model comprises:
acquiring the space ranges corresponding to the different space indexes;
determining whether a crossing area exists between the space range corresponding to the different space indexes and the viewing cone;
and adding the implicit representation corresponding to the first spatial index to the implicit representation set under the condition that a space range corresponding to the first spatial index and the view cone have an intersection area.
4. The method of claim 2, wherein after constructing the viewing cone corresponding to the viewing angle, the method further comprises:
dividing the viewing cone to obtain a plurality of subspace ranges;
obtaining sub-models corresponding to the subspace ranges in parallel through a plurality of processing engines, wherein the processing engines correspond to the subspace ranges one by one;
and summarizing the sub-models to obtain the target model.
5. The method of claim 1, wherein rendering the target model based on the observation perspective to obtain an observation image corresponding to the observation perspective comprises:
Determining sampling points in the viewing cone based on the observation visual angle;
determining a value of the sampling point based on the target model;
superposing the values of the sampling points to obtain superposed pixel values;
and superposing the superposed pixel values with a preset image to generate the observation image.
6. The method of claim 5, wherein determining the value of the sampling point based on the target model comprises:
constructing a second spatial index based on the geometric envelope of the target model;
determining a target implicit representation corresponding to the sampling point from the target model based on the second spatial index;
determining a value of the sampling point based on the target implicit characterization.
7. The method of claim 5, wherein determining the value of the sampling point based on the target model comprises:
dividing the sampling points to obtain a plurality of sampling point sets;
determining values of the plurality of sampling point sets based on the target model in parallel through a plurality of model reasoning devices, wherein the plurality of model reasoning devices and the plurality of sampling point sets correspond to one another;
summarizing the values of the plurality of sampling point sets to obtain the values of the sampling points.
8. The method according to claim 1, wherein the method further comprises:
shooting the real object in the real environment to obtain an image material;
generating the implicit characterization model based on the image material;
and constructing a first spatial index corresponding to the implicit characterization model.
9. The method of claim 8, wherein constructing a first spatial index corresponding to the implicit characterization model comprises one of:
determining a scene space range of the three-dimensional object in the real environment, and constructing the first spatial index based on the scene space range;
an explicit envelope of the implicit characterization model is extracted and the first spatial index is constructed based on the explicit envelope.
10. A method of processing a three-dimensional object, comprising:
responding to an input instruction acted on an operation interface, and displaying a three-dimensional object on the operation interface, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment;
and responding to the observation instruction acted on the operation interface, and displaying an observation image corresponding to the observation instruction on the operation interface, wherein the observation image is obtained by rendering a target model corresponding to the observation time based on an observation view angle corresponding to the observation instruction, and the target model is determined from an implicit characterization model corresponding to the three-dimensional object based on the observation view angle.
11. A method of processing a three-dimensional object, comprising:
displaying a three-dimensional object on a presentation screen of a Virtual Reality (VR) device or an Augmented Reality (AR) device, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment;
capturing an observation view angle for observing the three-dimensional object;
determining a target model corresponding to the observation visual angle from an implicit characterization model corresponding to the three-dimensional object;
rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle;
the VR device or the AR device is driven to render and display the observed image.
12. A method of processing a three-dimensional object, comprising:
acquiring an observation view angle by calling a first interface, wherein the first interface comprises a first parameter, the parameter value of the first parameter is the observation view angle, the observation view angle is a view angle for observing a three-dimensional object, and the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment;
determining a target model corresponding to the observation visual angle from an implicit characterization model corresponding to the three-dimensional object;
Rendering the target model based on the observation view angle to obtain an observation image corresponding to the observation view angle;
and outputting the observed image by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the observed image.
13. A system for processing a three-dimensional object, comprising:
the client device is used for displaying a three-dimensional object and capturing an observation view angle for observing the three-dimensional object, wherein the three-dimensional object is used for representing a virtual object corresponding to a real object in a real environment;
the server cluster is connected with the client and is used for determining a target model corresponding to the observation visual angle from an implicit characterization model corresponding to the three-dimensional object, and rendering the target model based on the observation visual angle to obtain an observation image corresponding to the observation visual angle;
the client device is also configured to display the observation image.
14. An electronic device, comprising:
a memory storing an executable program;
a processor for executing the program, wherein the program when run performs the method of any of claims 1 to 12.
CN202310492050.5A 2023-05-04 2023-05-04 Processing method and system of three-dimensional object, electronic equipment and storage medium Pending CN116543105A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310492050.5A CN116543105A (en) 2023-05-04 2023-05-04 Processing method and system of three-dimensional object, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310492050.5A CN116543105A (en) 2023-05-04 2023-05-04 Processing method and system of three-dimensional object, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116543105A true CN116543105A (en) 2023-08-04

Family

ID=87457124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310492050.5A Pending CN116543105A (en) 2023-05-04 2023-05-04 Processing method and system of three-dimensional object, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116543105A (en)

Similar Documents

Publication Publication Date Title
CN109064542B (en) Threedimensional model surface hole complementing method and device
EP4059007A1 (en) Cross reality system with localization service and shared location-based content
EP3727622A1 (en) Caching and updating of dense 3d reconstruction data
CN115461787A (en) Cross reality system with quick positioning
CN114616534A (en) Cross reality system with wireless fingerprint
EP1764745A2 (en) Collaborative environments in a geographic information system
CN112560137A (en) Multi-model fusion method and system based on smart city
CN115359261B (en) Image recognition method, computer-readable storage medium, and electronic device
CN116188689A (en) Radiation field processing method, storage medium and computer terminal
CN113544748A (en) Cross reality system
CN113313832A (en) Semantic generation method and device of three-dimensional model, storage medium and electronic equipment
WO2022156451A1 (en) Rendering method and apparatus
CN109754463B (en) Three-dimensional modeling fusion method and device
CN114926612A (en) Aerial panoramic image processing and immersive display system
Cui et al. Fusing surveillance videos and three‐dimensional scene: A mixed reality system
US20170374351A1 (en) System, method, and recording medium for a closed-loop immersive viewing technology coupled to drones
CN115527166A (en) Image processing method, computer-readable storage medium, and electronic device
CN116485983A (en) Texture generation method of virtual object, electronic device and storage medium
CN116543105A (en) Processing method and system of three-dimensional object, electronic equipment and storage medium
Hu et al. 3D map reconstruction using a monocular camera for smart cities
CN113946221A (en) Eye driving control method and device, storage medium and electronic equipment
CN116206066B (en) Method, storage medium and system for generating video based on scene reconstruction
CN117197319B (en) Image generation method, device, electronic equipment and storage medium
CN116188698B (en) Object processing method and electronic equipment
CN116681818B (en) New view angle reconstruction method, training method and device of new view angle reconstruction network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination