CN113436348B - Three-dimensional model processing method and device, electronic equipment and storage medium - Google Patents
- Publication number: CN113436348B (application CN202110709960.5A)
- Authority
- CN
- China
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T19/00—Manipulating 3D models or images for computer graphics
        - G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
      - G06T7/00—Image analysis
        - G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
      - G06T2207/00—Indexing scheme for image analysis or image enhancement
        - G06T2207/10—Image acquisition modality
          - G06T2207/10004—Still image; Photographic image
          - G06T2207/10012—Stereo images
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Architecture (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Processing Or Creating Images (AREA)
Abstract
The disclosure relates to a three-dimensional model processing method and device, an electronic device, and a storage medium, and belongs to the field of computer technology. The method comprises: acquiring light field data and a three-dimensional model, and performing the following iterative process: determining a two-dimensional model view that maps the three-dimensional model to a given view angle, and a two-dimensional light field view that maps the light field data to the same view angle; adjusting the position of the three-dimensional model according to a first difference parameter between the two views, so that the overlap between the adjusted model's two-dimensional model view and the two-dimensional light field view at that view angle is maximized; and stopping the iterative process once an iteration-ending condition is met, yielding the registered three-dimensional model. By converting both the three-dimensional model and the light field data into two-dimensional views, the registration problem in three-dimensional space is reduced to a registration problem in two-dimensional space, which simplifies the registration process, and performing the registration iteratively allows the registration accuracy to improve gradually.
Description
Technical Field
The disclosure relates to the field of computer technology, and in particular to a three-dimensional model processing method and device, an electronic device, and a storage medium.
Background
A light field (Light-Field) is a data set recording light information traveling in all directions through a space. In fields such as virtual reality and games, a three-dimensional scene is created, and light field data recording the light information in that scene is acquired. Because light field data does not include geometric information, it cannot be processed directly. A three-dimensional model is therefore typically created for the three-dimensional scene to describe its geometric information, providing that geometry as a reference for processing the light field data.
However, because the three-dimensional model is created separately from the light field data, the two match poorly. There is therefore a need for a method of registering a three-dimensional model with light field data.
Disclosure of Invention
The disclosure provides a three-dimensional model processing method and device, an electronic device, and a storage medium, which can improve the registration accuracy between a three-dimensional model and light field data.
According to a first aspect of embodiments of the present disclosure, there is provided a three-dimensional model processing method, comprising:
acquiring light field data and a three-dimensional model corresponding to a three-dimensional scene;
performing the following iterative process on the light field data and the three-dimensional model:
determining a two-dimensional model view that maps the three-dimensional model to a view angle, and a two-dimensional light field view that maps the light field data to the same view angle;
adjusting the position of the three-dimensional model according to a first difference parameter between the two-dimensional model view and the two-dimensional light field view at that view angle, so that the overlap between the adjusted model's two-dimensional model view and the two-dimensional light field view at that view angle is maximized;
and stopping the iterative process in response to it meeting an iteration-ending condition, to obtain the registered three-dimensional model.
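The iterative process described in the first aspect can be sketched in code. Everything below is illustrative: the patent does not specify how views are rendered or how the first difference parameter is computed, so a centroid-distance metric and a toy binary-mask renderer (`model_view_fn`, a hypothetical callable) stand in for the real components.

```python
import numpy as np

def centroid(mask):
    """Centroid (x, y) of the foreground pixels of a binary view."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def registration_loop(model_view_fn, field_view, max_iters=50,
                      threshold=1.0, target_count=3):
    """Render the model view at the current offset, compare it with the
    light field view, nudge the model, and stop once the difference
    parameter stays below a threshold for `target_count` consecutive
    iterations (the optional iteration-ending condition)."""
    offset = np.zeros(2)
    below = 0
    for _ in range(max_iters):
        model_view = model_view_fn(offset)
        # First difference parameter: distance between view centroids
        # (a stand-in for whatever metric the patent actually uses).
        diff = centroid(field_view) - centroid(model_view)
        if np.linalg.norm(diff) < threshold:
            below += 1
            if below >= target_count:
                break
        else:
            below = 0
        offset = offset + diff  # adjust the model position
    return offset
```

With a toy renderer that draws a 5x5 square at `offset`, the loop recovers the translation that makes the two views coincide, then stops after the required run of small difference parameters.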
Optionally, the iteration-ending condition is: a target number of consecutive first difference parameters are each less than a target threshold.
Optionally, adjusting the position of the three-dimensional model according to the first difference parameter between the two-dimensional model view and the two-dimensional light field view at the view angle, so as to maximize the overlap between the adjusted model's two-dimensional model view and the two-dimensional light field view at that view angle, comprises:
determining a first registration parameter of the three-dimensional model according to the first difference parameter, the first registration parameter maximizing the overlap between the adjusted model's two-dimensional model view and the two-dimensional light field view at the view angle, and comprising a translation parameter and a rotation parameter;
performing, according to the first registration parameter, at least one of:
translating the three-dimensional model according to the translation parameter;
and rotating the three-dimensional model according to the rotation parameter.
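Applying the translation and rotation parameters amounts to a rigid transform over the model's vertices. The Euler-angle parameterization below is an assumption; the patent does not fix how the rotation parameter is represented.

```python
import numpy as np

def apply_registration(vertices, translation, rotation_deg):
    """Apply a first-registration-parameter pair (translation + rotation)
    to model vertices (N x 3 array). The rotation is illustrated as
    Euler angles (degrees) about the z, y, and x axes, in that order."""
    rz, ry, rx = np.deg2rad(rotation_deg)
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    R = Rz @ Ry @ Rx  # combined rotation
    return vertices @ R.T + np.asarray(translation)
```

A vertex at (1, 0, 0) rotated 90 degrees about z and translated by (1, 2, 3) lands at (1, 3, 3).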
Optionally, determining the two-dimensional model view that maps the three-dimensional model to the view angle and the two-dimensional light field view that maps the light field data to the view angle comprises:
determining an original two-dimensional model view mapping the three-dimensional model to the view angle, and an original two-dimensional light field view mapping the light field data to the view angle;
removing background information from the original two-dimensional model view to obtain the two-dimensional model view at the view angle;
and removing background information from the original two-dimensional light field view to obtain the two-dimensional light field view at the view angle.
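A minimal sketch of the background-removal step, assuming the background of an original view is a known uniform value; a real render would more likely carry an alpha channel or a depth buffer, which the patent leaves unspecified:

```python
import numpy as np

def remove_background(view, background_value=0):
    """Strip background from an original view, keeping only the
    foreground as a binary mask. Works on grayscale (H x W) or color
    (H x W x C) arrays by comparing against a uniform background value."""
    if view.ndim == 3:  # color image: a pixel is foreground if any channel differs
        return np.any(view != background_value, axis=-1).astype(np.uint8)
    return (view != background_value).astype(np.uint8)
```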
Optionally, before performing the iterative process on the light field data and the three-dimensional model, the three-dimensional model processing method further comprises:
performing three-dimensional reconstruction on the light field data to obtain first point cloud data corresponding to the light field data, the first point cloud data describing geometric information in the three-dimensional scene;
extracting vertices from the three-dimensional model, the extracted vertices forming second point cloud data corresponding to the three-dimensional model;
determining a second registration parameter of the three-dimensional model according to a second difference parameter between the first point cloud data and the second point cloud data, the second registration parameter maximizing the overlap between the second point cloud data of the adjusted model and the first point cloud data;
and adjusting the position of the three-dimensional model according to the second registration parameter.
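The coarse point-cloud registration above could, for example, be computed with the Kabsch/Procrustes method. This is an assumption: the patent names no algorithm, and the closed form below requires known point correspondences, whereas a practical pipeline would use ICP or feature matching. Note that the model's second point cloud data is simply its vertex array.

```python
import numpy as np

def second_registration(first_cloud, second_cloud):
    """Estimate a rigid transform (R, t) mapping the model's vertex cloud
    (`second_cloud`, N x 3) onto the reconstructed light field cloud
    (`first_cloud`, N x 3, corresponding rows), via the Kabsch method."""
    mu_f = first_cloud.mean(axis=0)
    mu_s = second_cloud.mean(axis=0)
    # Cross-covariance of the centered clouds
    H = (second_cloud - mu_s).T @ (first_cloud - mu_f)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_f - R @ mu_s
    return R, t
```

Applying `R @ v + t` to every model vertex then realizes the position adjustment of the last step.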
According to a second aspect of embodiments of the present disclosure, there is provided a three-dimensional model processing apparatus, comprising:
an acquisition unit configured to acquire light field data and a three-dimensional model corresponding to a three-dimensional scene;
a registration unit configured to perform the following iterative process on the light field data and the three-dimensional model:
determining a two-dimensional model view that maps the three-dimensional model to a view angle, and a two-dimensional light field view that maps the light field data to the same view angle;
adjusting the position of the three-dimensional model according to a first difference parameter between the two views at that view angle, so that the overlap between the adjusted model's two-dimensional model view and the two-dimensional light field view is maximized;
and stopping the iterative process in response to it meeting an iteration-ending condition, to obtain the registered three-dimensional model.
Optionally, the iteration-ending condition is: a target number of consecutive first difference parameters are each less than a target threshold.
Optionally, the registration unit comprises:
a parameter determination subunit configured to determine a first registration parameter of the three-dimensional model according to the first difference parameter, the first registration parameter maximizing the overlap between the adjusted model's two-dimensional model view and the two-dimensional light field view at the view angle, and comprising a translation parameter and a rotation parameter;
a position adjustment subunit configured to perform, according to the first registration parameter, at least one of:
translating the three-dimensional model according to the translation parameter;
and rotating the three-dimensional model according to the rotation parameter.
Optionally, the registration unit comprises:
a view determination subunit configured to determine an original two-dimensional model view mapping the three-dimensional model to the view angle, and an original two-dimensional light field view mapping the light field data to the view angle;
a background removal subunit configured to remove background information from the original two-dimensional model view to obtain the two-dimensional model view at the view angle;
the background removal subunit being further configured to remove background information from the original two-dimensional light field view to obtain the two-dimensional light field view at the view angle.
Optionally, the three-dimensional model processing apparatus further comprises:
a three-dimensional reconstruction unit configured to perform three-dimensional reconstruction on the light field data to obtain first point cloud data corresponding to the light field data, the first point cloud data describing geometric information in the three-dimensional scene;
a vertex extraction unit configured to extract vertices from the three-dimensional model, the extracted vertices forming second point cloud data corresponding to the three-dimensional model;
a registration parameter determination unit configured to determine a second registration parameter of the three-dimensional model according to a second difference parameter between the first point cloud data and the second point cloud data, the second registration parameter maximizing the overlap between the second point cloud data of the adjusted model and the first point cloud data;
and a position adjustment unit configured to adjust the position of the three-dimensional model according to the second registration parameter.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the three-dimensional model processing method as described in the first aspect above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, instructions in which, when executed by a processor of an electronic device, enable the electronic device to perform the three-dimensional model processing method as described in the first aspect above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the three-dimensional model processing method as described in the first aspect above.
In the three-dimensional model processing method and device, electronic device, and storage medium provided by the embodiments of the disclosure, light field data carries no geometric information about the three-dimensional scene, so the three-dimensional model and the light field data cannot be registered directly. The light field data is therefore converted into two-dimensional light field views and the three-dimensional model into two-dimensional model views, and the position of the three-dimensional model is adjusted until the overlap between the two kinds of views is maximized, which in turn improves the overlap between the three-dimensional model and the light field data. In other words, the registration problem in three-dimensional space is converted into a registration problem in two-dimensional space, making the registration of the three-dimensional model with the light field data simpler; and because registration is performed iteratively, the registration accuracy improves gradually over the iterations.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic diagram illustrating an implementation environment according to an example embodiment.
FIG. 2 is a flow chart illustrating a method of three-dimensional model processing according to an exemplary embodiment.
FIG. 3 is a flow chart illustrating another three-dimensional model processing method according to an exemplary embodiment.
FIG. 4 is a flow chart illustrating another three-dimensional model processing method according to an exemplary embodiment.
Fig. 5 is a block diagram illustrating a three-dimensional model processing apparatus according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating another three-dimensional model processing apparatus according to an exemplary embodiment.
Fig. 7 is a block diagram of a terminal according to an exemplary embodiment.
Fig. 8 is a block diagram of a server, according to an example embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description of the present disclosure and the claims and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data related to the present disclosure are information and data authorized by the user or sufficiently authorized by each party.
Before explaining the embodiments of the present disclosure in detail, the concepts involved are explained as follows:
(1) CG (Computer Graphics) is the science of converting two-dimensional or three-dimensional graphics into a raster form for computer displays. Computer graphics studies how graphics are represented in a computer, and how they are computed, processed, and displayed by it.
(2) LF (Light Field) describes the light information along every direction at every point in space; the set of all directed rays constitutes the light field data. The light information here is a vector comprising the components of the respective color model. The light field is a scene representation distinct from the other forms used in computer graphics: it represents a three-dimensional scene by recording the ray information dispersed through space. Its advantage is that it separates attributes such as geometric information and texture information from the three-dimensional scene and records the ray information directly.
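The two-plane parameterization below is one common way to make the light field concept concrete; the patent does not commit to a particular representation. Each (u, v) position on a camera plane stores an s x t image of the rays passing through that point (the values here are synthetic; real data would come from a camera array or a plenoptic camera).

```python
import numpy as np

# A discrete two-plane light field L(u, v, s, t) with RGB ray colors:
# 4 x 4 camera positions, each recording a 32 x 32 image.
U, V, S, T, C = 4, 4, 32, 32, 3
light_field = np.random.rand(U, V, S, T, C)

def view_at(light_field, u, v):
    """A two-dimensional light field view is simply the slice of rays
    passing through one (u, v) position."""
    return light_field[u, v]
```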
The embodiments of the disclosure provide a three-dimensional model processing method whose execution subject is an electronic device. The electronic device may be a terminal, such as a computer, a mobile phone, or a tablet computer, or it may be a server, such as a background server or a cloud server providing services such as cloud computing and cloud storage. The electronic device acquires light field data and a three-dimensional model corresponding to a three-dimensional scene, and registers the three-dimensional model with the light field data by processing the three-dimensional model.
FIG. 1 is a schematic illustration of an implementation environment provided by embodiments of the present disclosure. Referring to FIG. 1, the implementation environment includes a terminal 101 and a server 102, connected through a wired or wireless network.
Illustratively, the terminal 101 has installed on it a target application served by the server 102, through which the terminal 101 can create light field data, create a three-dimensional model, or transmit data. The target application is, for example, an application in the operating system of the terminal 101 or one provided by a third party. The target application has an information display function, for example displaying light field data, three-dimensional models, images, or video, and can also have other functions, such as gaming, shopping, or chat. By way of example, the target application is a short video application, a game application, a graphics processing application, a three-dimensional model processing application, or another application, to which embodiments of the present disclosure are not limited.
In the embodiments of the present disclosure, the terminal 101 acquires the three-dimensional model and light field data and sends them to the server 102; the server 102 registers the three-dimensional model with the light field data and returns the registered three-dimensional model and light field data to the terminal 101.
The three-dimensional model processing method provided by the embodiment of the disclosure is applied to various scenes.
For example, in the field of games, a virtual three-dimensional scene contains a virtual character and a virtual item, and clicking the virtual item with the mouse controls the virtual character to pick it up. The three-dimensional scene is displayed by rendering the light field data, but the light field data includes only ray information and no geometric information, so it is impossible to judge from the light field data alone whether the clicked position is the position of the virtual item, and the pick-up action cannot be completed. A three-dimensional model corresponding to the three-dimensional scene therefore needs to be created.
Because the three-dimensional model is created separately from the light field data, the two match poorly; the method provided by the embodiments of the disclosure registers them so that the three-dimensional model and the light field data coincide. When a click is performed on the light field data, the clicked position in the light field data is mapped, according to the registration result, to a position in the three-dimensional model, and the geometric information in the three-dimensional model is used to judge whether that position is the position of the virtual item; the geometric information thus serves as a reference for processing the light field data.
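The click handling just described can be sketched as a ray test against the registered model's geometry. The sphere-only test below is a deliberately simplified stand-in for a full mesh intersection; the mapping from a 2D click to a 3D ray is assumed to have been done by the view/registration machinery.

```python
import numpy as np

def click_hits_sphere(origin, direction, center, radius):
    """Once the model and light field are registered, a click in the light
    field view maps to a ray in the model's space; a geometry test then
    decides whether a virtual item was hit. Shown here for a sphere:
    the ray hits iff the quadratic |o + s*d - c|^2 = r^2 has a real root."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    oc = np.asarray(origin, float) - np.asarray(center, float)
    b = 2.0 * np.dot(oc, d)
    c = np.dot(oc, oc) - radius * radius
    return b * b - 4.0 * c >= 0.0  # discriminant of the quadratic
```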
FIG. 2 is a flow chart illustrating a three-dimensional model processing method according to an exemplary embodiment. Referring to FIG. 2, the method comprises the following steps:
201. Acquire light field data and a three-dimensional model corresponding to a three-dimensional scene.
It should be noted that the embodiments of the present disclosure take an electronic device as the execution subject; the electronic device is, for example, a portable, pocket-sized, or hand-held terminal such as a mobile phone, computer, or tablet computer, or a server. In other embodiments, the execution subject of the three-dimensional model processing method may be another device.
The electronic device acquires light field data and a three-dimensional model corresponding to the same three-dimensional scene. The light field data describes the ray information in the three-dimensional scene, including components such as luminance and chrominance. The three-dimensional model describes the geometric information in the three-dimensional scene and is represented by a plurality of faces made up of points and lines.
After the electronic device obtains the light field data and the three-dimensional model, the following iterative process in steps 202-203 is performed on the light field data and the three-dimensional model to register the three-dimensional model and the light field data.
202. Determine a two-dimensional model view mapping the three-dimensional model to a view angle, and a two-dimensional light field view mapping the light field data to the same view angle.
Registering the three-dimensional model with the light field data means making the three-dimensional scene represented by the model coincide with the scene represented by the light field data; but because the light field data includes no geometric information, the two cannot be registered directly. The electronic device therefore maps the three-dimensional model to a two-dimensional model view at a view angle and maps the light field data to a two-dimensional light field view at the same view angle, converting the three-dimensional model and the three-dimensional light field data into two-dimensional views.
203. Adjust the position of the three-dimensional model according to a first difference parameter between the two-dimensional model view and the two-dimensional light field view at the view angle, so that the overlap between the adjusted model's two-dimensional model view and the two-dimensional light field view at that view angle is maximized.
The electronic device determines a first difference parameter between the two-dimensional model view and the two-dimensional light field view at the view angle, and adjusts the position of the three-dimensional model according to it so that the overlap between the adjusted model's two-dimensional model view and the two-dimensional light field view is maximized.
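One plausible choice of first difference parameter, shown only as an illustration since the patent leaves the metric open, is one minus the intersection-over-union (IoU) of the two views' foreground masks: it is 0 when the views coincide exactly and 1 when they are disjoint.

```python
import numpy as np

def first_difference(model_view, field_view):
    """1 - IoU of the foreground masks of a two-dimensional model view
    and a two-dimensional light field view (binary arrays of equal shape)."""
    m = model_view.astype(bool)
    f = field_view.astype(bool)
    union = np.logical_or(m, f).sum()
    if union == 0:
        return 0.0  # both views empty: nothing to misalign
    inter = np.logical_and(m, f).sum()
    return 1.0 - inter / union
```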
204. Stop the iterative process in response to it meeting an iteration-ending condition, obtaining the registered three-dimensional model.
The electronic device performs the iterative process of steps 202-203 at least once, and stops it once the iteration-ending condition is met, obtaining the registered three-dimensional model; the registration of the three-dimensional model with the light field data is then complete.
In the method provided by the embodiments of the disclosure, light field data carries no geometric information about the three-dimensional scene, so the three-dimensional model and the light field data cannot be registered directly. The three-dimensional light field data is therefore converted into two-dimensional light field views and the three-dimensional model into two-dimensional model views, and the position of the three-dimensional model is adjusted until the overlap between the two kinds of views is maximized, which improves the overlap between the three-dimensional model and the light field data. The registration problem in three-dimensional space is thus converted into a registration problem in two-dimensional space, making the registration process simpler; and because registration proceeds iteratively, the registration accuracy improves gradually over the iterations.
FIG. 3 is a flow chart illustrating another three-dimensional model processing method according to an exemplary embodiment. Referring to FIG. 3, the method comprises the following steps:
301. Acquire light field data and a three-dimensional model corresponding to a three-dimensional scene.
It should be noted that the embodiments of the present disclosure take an electronic device as the execution subject; the electronic device is, for example, a portable, pocket-sized, or hand-held terminal such as a mobile phone, computer, or tablet computer, or a server. In other embodiments, the execution subject of the three-dimensional model processing method may be another device.
The electronic device acquires light field data and a three-dimensional model corresponding to a three-dimensional scene, which may be a real or a virtual scene. The light field data describes the ray information in the scene, including components such as luminance and chrominance. The three-dimensional model describes the geometric information in the scene and is composed of vertices and line segments; for example, it is a three-dimensional mesh model whose mesh consists of triangles, quadrilaterals, or other simple convex polygons.
In the embodiments of the present disclosure, the three-dimensional scene is represented by the light field data; but since the light field data records no geometric information about the scene, such as the contours or positions of objects, processing that depends on such geometry cannot be performed from the light field data alone. A three-dimensional model describing the geometric information is therefore created for the scene; it can roughly describe the contours and positions of the objects in the scene, providing reference geometry for processing the light field data. Because this three-dimensional model is a low-precision model created independently of the light field data, the two match poorly and need to be registered. In the embodiments of the disclosure, the position of the three-dimensional model is adjusted so that the scene it represents coincides with the scene represented by the light field data.
302. Perform three-dimensional reconstruction on the light field data to obtain first point cloud data corresponding to the light field data.
In the field of computer graphics, when two different pieces of data are registered, both generally include geometric information; for example, both are triangle mesh models, voxel data, point cloud data, or volume data, all of which record geometry, so registration through the geometric information is feasible.
In the embodiment of the disclosure, registering the three-dimensional model and the light field data means making the three-dimensional scene represented by the three-dimensional model coincide with the three-dimensional scene represented by the light field data; however, because the light field data includes no geometric information, the three-dimensional model and the light field data cannot be registered directly. Therefore, the electronic device performs three-dimensional reconstruction on the light field data to obtain first point cloud data corresponding to the light field data. The first point cloud data is a set of points, each of which has position information in the three-dimensional scene, so the first point cloud data can describe the geometric information of the three-dimensional scene; in this way, the electronic device converts the light field data, which includes no geometric information, into point cloud data that does.
In some embodiments, the electronic device performs feature extraction on two-dimensional light field views under multiple view angles to obtain the feature points in each two-dimensional light field view, and creates the first point cloud data from the feature points in each two-dimensional light field view.

Because the light field data includes the ray information under every view angle in the three-dimensional scene, the electronic device can obtain two-dimensional light field views of the light field data under a plurality of view angles. The two-dimensional light field view under a certain view angle is the image obtained by observing, from that view angle, the three-dimensional scene represented by the light field data, and it includes the ray information of the scene under that view angle. The electronic device therefore performs feature extraction on the multiple two-dimensional light field views based on a multi-view stereo (MVS) reconstruction technique to obtain the feature points in each view, and creates the first point cloud data from these feature points; the first point cloud data can roughly represent the geometric information of the three-dimensional scene.
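As an illustrative sketch (not part of the claimed method) of how feature points observed in several views can be lifted to points of the first point cloud data, the following example triangulates a single 3D point from its pixel positions in multiple views via the direct linear transform. The camera projection matrices are assumed to be known here; a full MVS pipeline would also estimate the cameras and match the feature points automatically.

```python
import numpy as np

def triangulate_point(proj_mats, pixels):
    """Recover one 3D point from its 2D feature-point positions in several
    views via the Direct Linear Transform (DLT).  Each projection matrix
    is 3x4; each pixel is (u, v) in the corresponding view."""
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        # Each view contributes two linear constraints on the homogeneous point.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector of A with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

Repeating this over all matched feature points across the views yields a (rough) point cloud such as the first point cloud data described above.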
The embodiment of the disclosure thus provides a scheme for reconstructing point cloud data from multi-view ray information: the light field data is three-dimensionally reconstructed by means of its two-dimensional light field views under multiple view angles to obtain the first point cloud data, converting ray information into three-dimensional geometric information and thereby providing geometric information as a reference for the subsequent registration process.
The first point cloud data obtained in this way may contain errors, that is, interference points, where an interference point is an erroneous point in the first point cloud data. The first point cloud data can be corrected manually by removing the interference points, thereby improving the accuracy of the first point cloud data.
303. And extracting vertexes in the three-dimensional model, and forming the extracted vertexes into second point cloud data corresponding to the three-dimensional model.
The three-dimensional model is composed of vertices and line segments, and the geometric information of the three-dimensional scene represented by the model can be determined from the positional relations among its vertices. The electronic device therefore extracts the vertices of the three-dimensional model and forms them into second point cloud data corresponding to the three-dimensional model, and this second point cloud data can describe the geometric information of the three-dimensional scene.
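A minimal sketch of this vertex-extraction step, assuming a hypothetical mesh representation as a vertex array plus face index lists (the data structure is illustrative, not prescribed by the disclosure):

```python
import numpy as np

def mesh_to_point_cloud(vertices, faces):
    """Form the second point cloud from a mesh: keep the vertices that are
    referenced by at least one face and discard the connectivity (the line
    segments / faces of the model), leaving a bare set of 3D points."""
    used = np.unique(np.concatenate([np.asarray(f) for f in faces]))
    return np.asarray(vertices, dtype=float)[used]
```

The result is an N x 3 array of point positions, directly comparable with the first point cloud data.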
304. And determining a second registration parameter of the three-dimensional model according to a second difference parameter between the first point cloud data and the second point cloud data, and adjusting the position of the three-dimensional model according to the second registration parameter.
Since the first point cloud data describes the geometric information of the three-dimensional scene represented by the light field data and the second point cloud data describes the geometric information of the three-dimensional scene represented by the three-dimensional model, the higher the degree of coincidence between the first point cloud data and the second point cloud data, the higher the degree of coincidence between the two represented scenes. When the second point cloud data corresponding to the adjusted three-dimensional model coincides with the first point cloud data, the likelihood that the two represented scenes coincide is greatest. Therefore, the electronic device determines a second registration parameter of the three-dimensional model according to the second difference parameter between the first point cloud data and the second point cloud data, and adjusts the position of the three-dimensional model according to the second registration parameter, so that the degree of coincidence between the second point cloud data corresponding to the adjusted three-dimensional model and the first point cloud data is highest.
Specifically, the electronic device determines the second difference parameter between the first point cloud data and the second point cloud data, adjusts the second point cloud data according to the second difference parameter so that the adjusted second point cloud data coincides with the first point cloud data to the highest degree, and determines the second registration parameter from the second point cloud data before and after adjustment. The second registration parameter therefore maximizes the degree of coincidence between the second point cloud data corresponding to the adjusted three-dimensional model and the first point cloud data, and the electronic device adjusts the position of the three-dimensional model according to it.
The second difference parameter represents the degree of difference between the first point cloud data and the second point cloud data; for example, it may be expressed by a unidirectional Hausdorff distance or a bidirectional Hausdorff distance. Adjusting the second point cloud data according to the second difference parameter so that its degree of coincidence with the first point cloud data is highest means adjusting the second point cloud data so that the second difference parameter between it and the first point cloud data becomes smaller and smaller.
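The unidirectional and bidirectional Hausdorff distances mentioned above can be sketched as follows (a brute-force version, suitable only for small point sets):

```python
import numpy as np

def directed_hausdorff(a, b):
    """Unidirectional Hausdorff distance from point set a to point set b:
    the largest distance from any point of a to its nearest point of b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return d.min(axis=1).max()

def bidirectional_hausdorff(a, b):
    """Bidirectional Hausdorff distance: the larger of the two directed distances."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```

Either quantity shrinks toward zero as the second point cloud data is brought into coincidence with the first, which is exactly the behavior the iterative adjustment exploits.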
For example, the electronic device first determines initial adjustment information, also called an initial registration parameter, by principal component analysis; the initial adjustment information includes six degrees of freedom, namely the x-axis translation amount, the y-axis translation amount, the z-axis translation amount, the roll rotation amount, the yaw rotation amount, and the pitch rotation amount in three-dimensional space. The electronic device adjusts the second point cloud data according to the initial adjustment information so as to reduce the second difference parameter between the adjusted second point cloud data and the first point cloud data, searches for the next adjustment information with an optimization algorithm such as the steepest descent method, the gradient descent method, or simulated annealing, and adjusts the second point cloud data accordingly; through multiple iterations, the second difference parameter between the second point cloud data and the first point cloud data becomes smaller and smaller. When the number of iterations reaches a target number, or the second difference parameter falls below a target value, the second point cloud data is considered to coincide with the first point cloud data, and the adjustment of the second point cloud data is complete.
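An illustrative sketch of the principal-component-analysis initialization, under the assumption that matching the centroids and principal axes of the two clouds gives a reasonable starting pose. In practice the sign ambiguity of the eigenvectors means several axis-flip candidates would be tested; this sketch omits that and only guards against reflections.

```python
import numpy as np

def pca_initial_registration(src, dst):
    """Estimate an initial rigid alignment by matching the centroids and
    principal axes (PCA) of the two point clouds.
    Returns (R, t) such that src @ R.T + t roughly aligns with dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Principal axes come from the eigenvectors of each covariance matrix.
    _, vs = np.linalg.eigh(np.cov((src - mu_s).T))
    _, vd = np.linalg.eigh(np.cov((dst - mu_d).T))
    R = vd @ vs.T
    if np.linalg.det(R) < 0:           # keep a proper rotation (no reflection)
        vd[:, 0] = -vd[:, 0]
        R = vd @ vs.T
    t = mu_d - R @ mu_s
    return R, t
```

The returned pose would then seed the iterative refinement (steepest descent, gradient descent, or simulated annealing) described above.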
The second point cloud data corresponding to the current three-dimensional model is the second point cloud data before adjustment; to make the second point cloud data corresponding to the three-dimensional model coincide with the first point cloud data to the highest degree, the three-dimensional model only needs to be adjusted in the same way as the second point cloud data. The electronic device thus determines the second registration parameter from the second point cloud data before and after adjustment, the second registration parameter being the adjustment required to convert the former into the latter, and adjusts the three-dimensional model according to the second registration parameter, so that the degree of coincidence between the second point cloud data corresponding to the three-dimensional model and the first point cloud data is highest.
The second registration parameters include, for example, a translation parameter and a rotation parameter. The translation parameters include a direction and a distance along which the electronic device controls the three-dimensional model to move, e.g., the translation parameters include an amount of translation in the x-axis direction, an amount of translation in the y-axis direction, and an amount of translation in the z-axis direction. The rotation parameters include a direction and an angle, and the electronic device controls the three-dimensional model to rotate the angle about an axis of rotation indicated by the direction. For example, the rotation parameters include a Roll rotation amount, a Yaw rotation amount, and a Pitch rotation amount, the Roll rotation amount being an angle of rotation about the x-axis, the Yaw rotation amount being an angle of rotation about the z-axis, the Pitch rotation amount being an angle of rotation about the y-axis.
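The translation and rotation parameters can be applied as a rigid transform. The sketch below follows the axis convention stated above (roll about the x-axis, pitch about the y-axis, yaw about the z-axis) and assumes one particular composition order, which the disclosure does not fix:

```python
import numpy as np

def rigid_transform(points, tx, ty, tz, roll, yaw, pitch):
    """Apply a 6-degree-of-freedom registration parameter to a point set.
    Convention as in the text: roll rotates about x, pitch about y,
    yaw about z (angles in radians)."""
    cx, sx = np.cos(roll), np.sin(roll)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cz, sz = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx                     # one possible composition order
    return points @ R.T + np.array([tx, ty, tz])
```

Applying this transform to every vertex of the three-dimensional model realizes the adjustment described by the second registration parameter.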
In the embodiment of the disclosure, since the light field data corresponds to the first point cloud data and the three-dimensional model corresponds to the second point cloud data, the second point cloud data is adjusted directly so that its degree of coincidence with the first point cloud data is highest, and the three-dimensional model is then adjusted in the same way. This raises the degree of coincidence between the second point cloud data corresponding to the three-dimensional model and the first point cloud data, and thereby raises the degree of coincidence between the three-dimensional model and the light field data.
It should be noted that because the first point cloud data corresponding to the light field data can only roughly describe the geometric information of the three-dimensional scene and has a relatively large error, the point-cloud-based registration of steps 302 to 304 is only the preliminary processing stage of the three-dimensional model processing method provided by the embodiment of the present disclosure; to improve the accuracy of registering the three-dimensional model and the light field data, the following steps 305 to 307 also need to be executed. Alternatively, in another embodiment, the electronic device may skip steps 302 to 304 and directly perform steps 305 to 307 described below.
It should be noted that, in another embodiment, when performing the preliminary processing, a method of manually controlling the electronic device to perform the registration may be used instead of the method of performing the registration by using the point cloud data in the steps 302 to 304. The method for manually controlling the electronic equipment to register comprises at least one of the following steps:
(1) And the electronic equipment responds to the dragging operation of the three-dimensional model after triggering the movement option in the editing interface, and controls the three-dimensional model to move.
The electronic device displays the three-dimensional model and the light field data in an editing interface, and the user views them. If the three-dimensional model needs to be moved to coincide with the light field data, the user triggers the movement option in the editing interface and then performs a drag operation on the three-dimensional model; in response to the drag operation performed after the movement option is triggered, the electronic device controls the three-dimensional model to move.
(2) And the electronic equipment responds to the dragging operation of the three-dimensional model after triggering the rotation option in the editing interface, and controls the three-dimensional model to rotate.
If the three-dimensional model needs to be rotated to coincide with the light field data, the user triggers the rotation option in the editing interface and then performs a drag operation on the three-dimensional model; in response to the drag operation performed after the rotation option is triggered, the electronic device controls the three-dimensional model to rotate.
(3) The electronic device, in response to a third registration parameter input in the editing interface, adjusts the three-dimensional model according to the third registration parameter.

In addition to dragging the three-dimensional model in the editing interface to move or rotate it, the user may input a third registration parameter in the editing interface, and the electronic device, in response, adjusts the position of the three-dimensional model according to the third registration parameter. The third registration parameter has the same form as the second registration parameter and is not described in detail here.
In the embodiment of the disclosure, a manual intervention mode is adopted to realize adjustment of the three-dimensional model so as to improve the coincidence degree of the three-dimensional model and the light field data and improve the flexibility of registering the three-dimensional model and the light field data. Moreover, the manual intervention mode can also improve the accuracy of registering the three-dimensional model and the light field data.
In another embodiment, considering that the automatic registration by using the point cloud data can save manpower and time, and the registration by using the manual intervention method can improve the accuracy of the registration, the two methods are combined, for example, the automatic registration is performed by using the point cloud data first, and if the user considers that the deviation of the registered result is large, the manual intervention method is used for correcting the registered result.
After the electronic device obtains the preliminarily registered light field data and the three-dimensional model, the following iterative process in steps 305-306 is executed on the light field data and the three-dimensional model to continue registering the three-dimensional model and the light field data.
305. A two-dimensional model view mapping the three-dimensional model to any view angle is determined, and a two-dimensional light field view mapping the light field data to any view angle is determined.
Because the result of the preliminary registration performed in steps 302-304 still contains errors, the three-dimensional model needs to be further adjusted on the basis of that result so that it coincides with the light field data. However, since the light field data includes no geometric information, the three-dimensional model and the light field data cannot be registered directly. The electronic device therefore determines the two-dimensional model view obtained by mapping the three-dimensional model to any view angle and the two-dimensional light field view obtained by mapping the light field data to the same view angle, thereby converting the three-dimensional model into a two-dimensional model view and the three-dimensional light field data into a two-dimensional light field view. For example, the view angle is a view angle randomly determined by the electronic device.
The two-dimensional model view obtained by mapping the three-dimensional model to any view angle is the view obtained by rendering the three-dimensional model from that view angle; it can be understood as the image obtained by observing, from that view angle, the three-dimensional scene represented by the three-dimensional model, and it includes the geometric information of the scene under that view angle. Likewise, the two-dimensional light field view obtained by mapping the light field data to any view angle is the image obtained by observing, from that view angle, the three-dimensional scene represented by the light field data, and it includes the ray information of the scene under that view angle.
In some embodiments, an original two-dimensional model view obtained by mapping the three-dimensional model to any view angle and an original two-dimensional light field view obtained by mapping the light field data to the same view angle are determined; the background information in the original two-dimensional model view is removed to obtain the two-dimensional model view under that view angle, and the background information in the original two-dimensional light field view is removed to obtain the two-dimensional light field view under that view angle.
The original two-dimensional model view comprises foreground information and background information: its foreground information is the geometric information corresponding to the foreground, the foreground being the objects relatively close to the lens, and its background information is the geometric information corresponding to the objects located behind the foreground. Similarly, the original two-dimensional light field view comprises foreground information and background information: its foreground information is the ray information corresponding to the foreground, and its background information is the ray information corresponding to the objects located behind the foreground.
The electronic device removes the background information from the original two-dimensional model view to obtain a two-dimensional model view that includes only foreground information; for example, the electronic device directly sets the background of the original two-dimensional model view to be transparent. Likewise, the electronic device removes the background information from the original two-dimensional light field view to obtain a two-dimensional light field view that includes only foreground information; for example, the electronic device applies a background elimination algorithm or a foreground-background segmentation algorithm.
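A hedged sketch of the simpler case, setting the background of a rendered view to transparent, under the assumption that the renderer fills background pixels with a known constant color (real light field views would instead need a segmentation algorithm, as noted above):

```python
import numpy as np

def remove_background(view, background_value=0):
    """Keep only the foreground pixels of a rendered view.  Pixels equal to
    the (assumed known) background value get alpha 0, foreground pixels get
    alpha 255, i.e. the background is simply made transparent."""
    view = np.asarray(view)
    alpha = np.where((view != background_value).any(axis=-1), 255, 0)
    return np.dstack([view, alpha.astype(view.dtype)])
```

The output is an RGBA image in which only the foreground carries information, reducing the data processed by the later view-comparison steps.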
In the embodiment of the disclosure, because the original two-dimensional light field view and the original two-dimensional model view carry a large amount of information, subsequent processing of them would be complex; acquiring two-dimensional light field views and two-dimensional model views that include only foreground information reduces the amount of data to be processed later and thus reduces the processing complexity.
306. And according to a first difference parameter between the two-dimensional model view and the two-dimensional light field view under any view angle, adjusting the position of the three-dimensional model so as to ensure that the superposition ratio of the two-dimensional model view and the two-dimensional light field view of the three-dimensional model after adjustment under any view angle is the highest.
The electronic device determines a first difference parameter between the two-dimensional model view and the two-dimensional light field view under any view angle, and adjusts the position of the three-dimensional model according to the first difference parameter so that, under that view angle, the degree of coincidence between the two-dimensional model view of the adjusted three-dimensional model and the two-dimensional light field view is highest.
The first difference parameter represents the degree of difference between the two-dimensional model view and the two-dimensional light field view: the larger the first difference parameter, the greater the difference, and the smaller the parameter, the smaller the difference. For example, for each pixel the electronic device determines the squared difference between the pixel values at the same position in the two-dimensional model view and the two-dimensional light field view, and uses the sum of these squared differences as the first difference parameter.
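The sum-of-squared-differences form of the first difference parameter can be sketched directly:

```python
import numpy as np

def view_difference(model_view, light_field_view):
    """First difference parameter between two views of the same size:
    the sum of squared differences of corresponding pixel values."""
    a = np.asarray(model_view, dtype=float)
    b = np.asarray(light_field_view, dtype=float)
    return float(((a - b) ** 2).sum())
```

The registration loop then seeks the model pose that drives this scalar toward its minimum.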
In the embodiment of the disclosure, the three-dimensional light field data is converted into two-dimensional light field views and the three-dimensional model into two-dimensional model views, and the position of the three-dimensional model is adjusted so that the two-dimensional light field view and the two-dimensional model view coincide, thereby raising the degree of coincidence between the three-dimensional model and the light field data and registering the two. This indirect registration based on the two-dimensional light field view and the two-dimensional model view avoids the problem that the light field data carries no geometric information.
In some embodiments, a first registration parameter of the three-dimensional model is determined based on the first difference parameter, and the position of the three-dimensional model is adjusted according to the first registration parameter. The first registration parameter, which includes a translation parameter and a rotation parameter, maximizes the degree of coincidence between the two-dimensional model view of the adjusted three-dimensional model and the two-dimensional light field view under any view angle.
Specifically, the electronic device adjusts the two-dimensional model view according to the first difference parameter so that the adjusted two-dimensional model view coincides with the two-dimensional light field view to the highest degree, determines the first registration parameter from the two-dimensional model view before and after adjustment, and adjusts the position of the three-dimensional model according to the first registration parameter. The electronic device may adjust the two-dimensional model view using an optimization algorithm such as gradient descent or simulated annealing; alternatively, because the two-dimensional model view and the two-dimensional light field view consist of discrete pixels, the electronic device may adjust the two-dimensional model view by enumeration so that the degree of coincidence between the adjusted view and the two-dimensional light field view is highest.
After the adjustment, the adjusted two-dimensional model view coincides with the two-dimensional light field view to the highest degree, but the two-dimensional model view of the current three-dimensional model under that view angle is still the view before adjustment; to make the two-dimensional model view of the three-dimensional model coincide with the two-dimensional light field view to the highest degree, the position of the three-dimensional model only needs to be adjusted in the same way as the two-dimensional model view. The electronic device determines the first registration parameter from the two-dimensional model view before and after adjustment, the first registration parameter being the adjustment required to convert the former into the latter, and adjusts the position of the three-dimensional model accordingly, so that the degree of coincidence between the two-dimensional model view of the three-dimensional model under that view angle and the two-dimensional light field view is highest.
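An illustrative sketch of the enumeration approach over the discrete pixel grid, assuming for simplicity that only integer translations are searched and that shifts wrap around the image border:

```python
import numpy as np

def best_integer_shift(model_view, light_field_view, max_shift=3):
    """Enumerate integer pixel translations of the two-dimensional model view
    (the views consist of discrete pixels, so the search space is finite) and
    return the (dy, dx) shift minimising the sum of squared differences."""
    a = np.asarray(model_view, dtype=float)
    b = np.asarray(light_field_view, dtype=float)
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cost = ((np.roll(a, (dy, dx), axis=(0, 1)) - b) ** 2).sum()
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best
```

The winning shift plays the role of the (translational part of the) first registration parameter: the three-dimensional model is then moved so that its rendered view undergoes the same shift.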
In the embodiment of the disclosure, since the three-dimensional model corresponds to the two-dimensional model view, the two-dimensional model view is adjusted directly so that its degree of coincidence with the two-dimensional light field view is highest, and the three-dimensional model is then adjusted in the same way, so that under that view angle the two-dimensional model view corresponding to the adjusted three-dimensional model coincides with the two-dimensional light field view to the highest degree; that is, the problem of adjusting the three-dimensional model is converted into the problem of adjusting a two-dimensional view, which reduces the complexity of the registration process.
Illustratively, the electronic device performs at least one of the following in accordance with the first registration parameters:
(1) And the electronic equipment controls the three-dimensional model to translate according to the translation parameters in the first registration parameters.
If the two-dimensional model view must be moved in order to coincide with the two-dimensional light field view, the first registration parameter includes a translation parameter representing the manner of translation, and the electronic device controls the three-dimensional model to translate according to the translation parameter. Because the translation parameter indicates how the two-dimensional model view is translated, controlling the three-dimensional model according to it makes the three-dimensional model translate in the same manner as the two-dimensional model view.
For example, the translation parameter includes a first direction and a distance, the first direction being the direction of translation and the distance being the distance of translation. Because the translation parameter is obtained by translating the two-dimensional model view, which can move only within its own plane, the first direction is parallel to the plane of the two-dimensional model view, and the electronic device controls the three-dimensional model to translate the distance along the first direction. Since the first direction is parallel to the plane of the two-dimensional model view, the translation parameter includes only the x-axis and y-axis translation amounts of the two-dimensional space rather than the x-axis, y-axis, and z-axis translation amounts of the three-dimensional space, which reduces the dimensionality of the translation parameter.
(2) And the electronic equipment controls the three-dimensional model to rotate according to the rotation parameters in the first registration parameters.
If the two-dimensional model view must be rotated in order to coincide with the two-dimensional light field view, the first registration parameter includes a rotation parameter indicating the manner of rotation, and the electronic device controls the three-dimensional model to rotate according to the rotation parameter. Because the rotation parameter indicates how the two-dimensional model view is rotated, controlling the three-dimensional model according to it makes the three-dimensional model rotate in the same manner as the two-dimensional model view.
For example, the rotation parameter includes a second direction and an angle, the second direction being the direction of the rotation axis and the angle being the angle of rotation. Because the rotation parameter is obtained by rotating the two-dimensional model view, which can rotate only about an axis perpendicular to its own plane, the second direction is perpendicular to the plane of the two-dimensional model view, and the electronic device controls the three-dimensional model to rotate the angle about the rotation axis indicated by the second direction. Since the two-dimensional model view rotates within the two-dimensional plane, the rotation parameter includes only the angle about the second direction rather than the angles about the x-axis, y-axis, and z-axis of the three-dimensional space, which reduces the dimensionality of the rotation parameter.
307. And stopping the iterative process in response to the iterative process meeting the iteration ending condition to obtain the registered three-dimensional model.
Since the two-dimensional light field view and the two-dimensional model view at any view angle should coincide after registration of the light field data and the three-dimensional model, if the degree of coincidence between the two-dimensional light field view and the two-dimensional model view is highest at any view angle, the degree of coincidence between the light field data corresponding to the three-dimensional scene and the three-dimensional model can also be considered highest. Therefore, to register the light field data and the three-dimensional model, it is only necessary to ensure the highest degree of coincidence between the two-dimensional light field view and the two-dimensional model view under as many view angles as possible, so that the three-dimensional registration process is reduced to two dimensions.
Therefore, the electronic device executes the iterative process of steps 305-306 at least once, stopping when the iterative process meets the iteration end condition. This yields the registered three-dimensional model and completes the registration of the three-dimensional model with the light field data.
In the embodiment of the disclosure, considering that the light field data carries no geometric information about the three-dimensional scene, the three-dimensional model and the light field data cannot be registered directly. Therefore, the three-dimensional light field data is converted into a two-dimensional light field view, the three-dimensional model is converted into a two-dimensional model view, and the three-dimensional model is adjusted so that the degree of coincidence between the two-dimensional light field view and the two-dimensional model view is highest, thereby improving the degree of coincidence between the three-dimensional model and the light field data. That is, a registration problem in three-dimensional space is converted into a registration problem in two-dimensional space, making the registration process simpler, while the iterative process gradually improves the registration accuracy of the three-dimensional model and the light field data.
In some embodiments, the iteration end condition is: a target number of consecutive first difference parameters are each less than the target threshold parameter.
After executing step 306, in response to there not yet being a target number of consecutive first difference parameters each smaller than the target threshold parameter, the electronic device continues to adjust the position of the three-dimensional model according to first difference parameters between the two-dimensional model view and the two-dimensional light field view at other view angles, so as to maximize the degree of coincidence between the two-dimensional model view of the three-dimensional model and the two-dimensional light field view of the light field data at those view angles. That is, after adjusting the three-dimensional model according to the first difference parameter at one view angle, the electronic device re-determines the first difference parameter between the two-dimensional model view and the two-dimensional light field view at another view angle. When a first difference parameter is smaller than the target threshold parameter, the degree of coincidence between the two-dimensional model view and the two-dimensional light field view can be considered to have approximately reached its maximum. If a target number of consecutive first difference parameters are not all smaller than the target threshold parameter, the degree of coincidence has not yet reached its maximum at enough view angles, so the electronic device must continue adjusting the three-dimensional model until the degree of coincidence reaches its maximum at more view angles, thereby ensuring the accuracy of registering the three-dimensional model with the light field data.
The target number and the target threshold parameter are illustratively preset on the electronic device; for example, the target number is 3.
After executing step 306, in response to there being a target number of consecutive first difference parameters each smaller than the target threshold parameter, the electronic device considers that the iterative process meets the iteration ending condition and stops the iterative process, obtaining the registered three-dimensional model. That a target number of consecutive first difference parameters are each smaller than the target threshold parameter means that the degree of coincidence between the two-dimensional model view and the two-dimensional light field view has reached its maximum at the randomly determined target number of view angles, and can therefore be considered to have reached its maximum at enough view angles, so the electronic device no longer needs to adjust the three-dimensional model. In the embodiment of the disclosure, since the light field data carries no geometric information, whether the three-dimensional model coincides with the light field data cannot be judged directly from the light field data. The problem of judging coincidence in a three-dimensional scene is therefore converted into the problem of judging coincidence in a two-dimensional scene, so that when the degree of coincidence between the two-dimensional model view and the two-dimensional light field view is highest at the randomly determined target number of view angles, the degree of coincidence between the three-dimensional model and the light field data is determined to be highest.
Fig. 4 is a flowchart of a three-dimensional model processing method according to an exemplary embodiment. As shown in fig. 4, taking the target number as 3 as an example, the electronic device acquires light field data and a three-dimensional model, determines first point cloud data corresponding to the light field data and second point cloud data corresponding to the three-dimensional model, and performs preliminary registration of the three-dimensional model and the light field data according to the first point cloud data and the second point cloud data. The electronic device then randomly determines a view angle, acquires an original two-dimensional light field view and an original two-dimensional model view at that view angle, and performs foreground extraction on each to obtain a two-dimensional light field view and a two-dimensional model view that include only foreground information. The electronic device registers the two-dimensional model view with the two-dimensional light field view according to a first difference parameter between them, thereby registering the three-dimensional model with the light field data. The electronic device then judges whether 3 consecutive first difference parameters are smaller than the target threshold parameter: if so, it determines that registration between the three-dimensional model and the light field data is complete; if not, it randomly selects another view angle and continues registering the three-dimensional model and the light field data until 3 consecutive first difference parameters are smaller than the target threshold parameter.
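The iterative flow with its consecutive-threshold stopping rule can be sketched as the following loop. This is a schematic illustration only: `render_model`, `render_light_field`, `difference`, and `adjust` are placeholder callables standing in for the view mapping, first-difference-parameter computation, and model adjustment described above, and the threshold value is an assumed example.

```python
import random

TARGET_COUNT = 3     # target number, as in the example above
THRESHOLD = 1e-3     # target threshold parameter (illustrative value)

def register(model, light_field, view_angles,
             render_model, render_light_field, difference, adjust,
             max_iters=1000):
    """Iterate over randomly chosen view angles until TARGET_COUNT
    consecutive first difference parameters fall below THRESHOLD."""
    consecutive = 0
    for _ in range(max_iters):
        view = random.choice(view_angles)      # randomly determine a view angle
        diff = difference(render_model(model, view),
                          render_light_field(light_field, view))
        if diff < THRESHOLD:
            consecutive += 1
            if consecutive >= TARGET_COUNT:
                return model                   # iteration end condition met
        else:
            consecutive = 0
            model = adjust(model, diff, view)  # adjust position (step 306)
    return model
```

A run of below-threshold differences at consecutively sampled view angles is what triggers the stop; a single above-threshold difference resets the count, matching the condition in the embodiment.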
According to the method provided by the embodiment of the disclosure, considering that the light field data carries no geometric information about the three-dimensional scene and the three-dimensional model and the light field data therefore cannot be registered directly, the three-dimensional light field data is converted into two-dimensional light field views, the three-dimensional model is converted into two-dimensional model views, and the position of the three-dimensional model is adjusted so that the degree of coincidence between the two-dimensional light field views and the two-dimensional model views is highest, thereby improving the degree of coincidence between the three-dimensional model and the light field data. That is, a registration problem in three-dimensional space is converted into a registration problem in two-dimensional space, making the registration process simpler; moreover, registration is carried out iteratively, so that registration accuracy can be gradually improved during the iterative process.
Fig. 5 is a block diagram illustrating a three-dimensional model processing apparatus according to an exemplary embodiment. Referring to fig. 5, the apparatus includes an acquisition unit 501 and a registration unit 502.
An acquisition unit 501 configured to perform acquisition of light field data and a three-dimensional model corresponding to a three-dimensional scene;
a registration unit 502 configured to perform the following iterative procedure on the light field data and the three-dimensional model:
Determining a two-dimensional model view mapping the three-dimensional model to any view angle, and determining a two-dimensional light field view mapping the light field data to any view angle;
according to a first difference parameter between the two-dimensional model view and the two-dimensional light field view at any view angle, adjust the position of the three-dimensional model so that the degree of coincidence between the two-dimensional model view of the adjusted three-dimensional model and the two-dimensional light field view at that view angle is the highest;
and stopping the iterative process in response to the iterative process meeting the iteration ending condition to obtain the registered three-dimensional model.
According to the apparatus provided by the embodiment of the disclosure, considering that the light field data carries no geometric information about the three-dimensional scene and the three-dimensional model and the light field data therefore cannot be registered directly, the three-dimensional light field data is converted into two-dimensional light field views, the three-dimensional model is converted into two-dimensional model views, and the position of the three-dimensional model is adjusted so that the degree of coincidence between the two-dimensional light field views and the two-dimensional model views is highest, thereby improving the degree of coincidence between the three-dimensional model and the light field data. That is, a registration problem in three-dimensional space is converted into a registration problem in two-dimensional space, making the registration process simpler; moreover, registration is carried out iteratively, so that registration accuracy can be gradually improved during the iterative process.
In some embodiments, the iteration end condition is: there is a consecutive target number of first difference parameters that are each less than the target threshold parameter.
In some embodiments, referring to fig. 6, the registration unit 502 includes:
a parameter determination subunit 511 configured to perform determining a first registration parameter of the three-dimensional model according to the first difference parameter, the first registration parameter being capable of maximizing a degree of coincidence of the two-dimensional model view and the two-dimensional light field view of the three-dimensional model adjusted at any view angle, the first registration parameter including a translation parameter and a rotation parameter;
a position adjustment subunit 521 configured to perform at least one of the following according to the first registration parameter:
according to the translation parameters, controlling the three-dimensional model to translate;
and controlling the three-dimensional model to rotate according to the rotation parameters.
Optionally, the registration unit 502 includes:
a view determination subunit 531 configured to perform a determination of mapping the three-dimensional model to an original two-dimensional model view at any view angle, and a determination of mapping the light field data to an original two-dimensional light field view at any view angle;
a background rejection subunit 541 configured to perform rejection of background information in the original two-dimensional model view, to obtain a two-dimensional model view under any view angle;
The background rejection subunit 541 is further configured to perform rejection of background information in the original two-dimensional light field view, so as to obtain a two-dimensional light field view under any view angle.
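The background rejection performed by this subunit can be illustrated with a minimal sketch. The criterion used here, a known constant background value, is an assumption for illustration only; the embodiments do not prescribe a particular foreground extraction technique, and a real implementation might use segmentation instead.

```python
import numpy as np

def reject_background(view, background_value=255):
    """Zero out background pixels so the view contains only foreground
    information. Assumes the background is a known constant value
    (e.g. a white rendering background); this test is illustrative."""
    view = np.asarray(view)
    mask = view != background_value   # foreground mask
    return np.where(mask, view, 0)

# Example: a 2 x 2 view whose left column is background.
cleaned = reject_background(np.array([[255, 10],
                                      [255, 20]]))
```

Applying the same rejection to both the original two-dimensional model view and the original two-dimensional light field view ensures the subsequent difference parameter compares foreground content only.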
Optionally, the three-dimensional model processing device further includes:
the three-dimensional reconstruction unit 503 is configured to perform three-dimensional reconstruction on the light field data, so as to obtain first point cloud data corresponding to the light field data, where the first point cloud data is used for describing geometric information in the three-dimensional scene;
a vertex extraction unit 504 configured to perform extraction of vertices in the three-dimensional model, the extracted vertices constituting second point cloud data corresponding to the three-dimensional model;
a registration parameter determining unit 505 configured to determine a second registration parameter of the three-dimensional model according to a second difference parameter between the first point cloud data and the second point cloud data, where the second registration parameter can maximize a degree of coincidence between the second point cloud data corresponding to the adjusted three-dimensional model and the first point cloud data;
the position adjustment unit 506 is configured to perform an adjustment of the position of the three-dimensional model according to the second registration parameter.
The specific manner in which the individual units of the apparatus in the above embodiments perform their operations has been described in detail in the method embodiments and is not repeated here.
In an exemplary embodiment, there is provided an electronic device including: a processor, and a memory for storing instructions executable by the processor. Wherein the processor is configured to execute the instructions to implement a three-dimensional model processing method as described above.
In some embodiments, the electronic device is a terminal. Fig. 7 is a block diagram illustrating a structure of a terminal 700 according to an exemplary embodiment. The terminal 700 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 700 may also be referred to by other names such as user device, portable terminal, laptop terminal, or desktop terminal.
The terminal 700 includes: a processor 701 and a memory 702.
The processor 701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 701 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor: the main processor, also referred to as a CPU (Central Processing Unit), processes data in the awake state; the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 701 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 702 may include one or more computer-readable storage media, which may be non-transitory. The memory 702 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 702 stores at least one program code, which is executed by the processor 701 to implement the three-dimensional model processing methods provided by the method embodiments of the present disclosure.
In some embodiments, the terminal 700 may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 703 via buses, signal lines or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, a display 705, a camera assembly 706, audio circuitry 707, a positioning assembly 708, and a power supply 709.
The peripheral interface 703 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 701 and the memory 702. In some embodiments, the processor 701, the memory 702, and the peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 704 is configured to receive and transmit RF (Radio Frequency) signals, also referred to as electromagnetic signals. The radio frequency circuit 704 communicates with a communication network and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 704 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuitry, which is not limited by the present disclosure.
The display screen 705 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display 705 is a touch display, it can also collect touch signals at or above its surface; the touch signal may be input to the processor 701 as a control signal for processing. In this case the display 705 may also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 705, disposed on the front panel of the terminal 700; in other embodiments, there may be at least two displays 705, disposed on different surfaces of the terminal 700 or in a folded design; in still other embodiments, the display 705 may be a flexible display disposed on a curved or folded surface of the terminal 700. The display 705 may even be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display 705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 706 is used to capture images or video. Optionally, the camera assembly 706 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera on the back. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera can be fused for a background blurring function, or the main camera and the wide-angle camera can be fused for panoramic shooting, Virtual Reality (VR) shooting, or other fusion shooting functions. In some embodiments, the camera assembly 706 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash combines a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing, or inputting the electric signals to the radio frequency circuit 704 for voice communication. For the purpose of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different portions of the terminal 700. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 707 may also include a headphone jack.
The positioning component 708 is used to determine the current geographic location of the terminal 700 for navigation or LBS (Location Based Service). The positioning component 708 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
A power supply 709 is used to power the various components in the terminal 700. The power supply 709 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 709 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
Those skilled in the art will appreciate that the structure shown in fig. 7 is not limiting of the terminal 700 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In some embodiments, the electronic device is a server. Fig. 8 is a schematic diagram of a server according to an exemplary embodiment. The server 800 may vary considerably in configuration or performance, and may include one or more processors (Central Processing Units, CPUs) 801 and one or more memories 802, where the memories 802 store at least one computer program that is loaded and executed by the processors 801 to implement the methods provided in the foregoing method embodiments. Of course, the server may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing the functions of the device, which are not described here.
In an exemplary embodiment, a computer-readable storage medium is also provided, including instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the steps of the three-dimensional model processing method described above. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program which, when executed by a processor of an electronic device, implements the steps of the above three-dimensional model processing method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. A three-dimensional model processing method, characterized in that the three-dimensional model processing method comprises:
acquiring light field data and a three-dimensional model corresponding to a three-dimensional scene;
for the light field data and the three-dimensional model, performing the following iterative process:
determining a two-dimensional model view mapping the three-dimensional model to any view angle, and determining a two-dimensional light field view mapping the light field data to any view angle;
according to a first difference parameter between the two-dimensional model view and the two-dimensional light field view under any view angle, the position of the three-dimensional model is adjusted so that the superposition ratio of the two-dimensional model view and the two-dimensional light field view of the three-dimensional model after adjustment under any view angle is the highest;
and stopping the iterative process in response to the iterative process meeting an iteration ending condition, to obtain the registered three-dimensional model, wherein the iteration ending condition is that a target number of consecutive first difference parameters are each smaller than a target threshold parameter.
2. The method according to claim 1, wherein adjusting the position of the three-dimensional model according to the first difference parameter between the two-dimensional model view and the two-dimensional light field view at the arbitrary viewing angle so that the overlap ratio between the two-dimensional model view and the two-dimensional light field view of the three-dimensional model after adjustment at the arbitrary viewing angle is the highest, comprises:
determining a first registration parameter of the three-dimensional model according to the first difference parameter, wherein the first registration parameter can enable the superposition ratio of the two-dimensional model view of the three-dimensional model after adjustment under any view angle to the two-dimensional light field view to be highest, and the first registration parameter comprises a translation parameter and a rotation parameter;
according to the first registration parameters, at least one of the following is performed:
according to the translation parameters, controlling the three-dimensional model to translate;
and controlling the three-dimensional model to rotate according to the rotation parameters.
3. The method of three-dimensional model processing according to claim 1, wherein the determining of mapping the three-dimensional model to a two-dimensional model view at any view angle and determining of mapping the light field data to a two-dimensional light field view at any view angle comprises:
Determining to map the three-dimensional model to an original two-dimensional model view at the arbitrary view angle, and determining to map the light field data to an original two-dimensional light field view at the arbitrary view angle;
removing background information in the original two-dimensional model view to obtain a two-dimensional model view under any view angle;
and removing background information in the original two-dimensional light field view to obtain the two-dimensional light field view under any view angle.
4. The three-dimensional model processing method according to claim 1, wherein the three-dimensional model processing method further comprises, before performing the following iterative process on the light field data and the three-dimensional model:
performing three-dimensional reconstruction on the light field data to obtain first point cloud data corresponding to the light field data, wherein the first point cloud data is used for describing geometric information in the three-dimensional scene;
extracting vertexes in the three-dimensional model, and forming the extracted vertexes into second point cloud data corresponding to the three-dimensional model;
determining a second registration parameter of the three-dimensional model according to a second difference parameter between the first point cloud data and the second point cloud data, wherein the second registration parameter can enable the contact ratio of the second point cloud data corresponding to the adjusted three-dimensional model and the first point cloud data to be the highest;
And adjusting the position of the three-dimensional model according to the second registration parameters.
5. A three-dimensional model processing apparatus, characterized by comprising:
an acquisition unit configured to perform acquisition of light field data and a three-dimensional model corresponding to a three-dimensional scene;
a registration unit configured to perform the following iterative procedure on the light field data and the three-dimensional model:
determining a two-dimensional model view mapping the three-dimensional model to any view angle, and determining a two-dimensional light field view mapping the light field data to any view angle;
according to a first difference parameter between the two-dimensional model view and the two-dimensional light field view under any view angle, the position of the three-dimensional model is adjusted so that the superposition ratio of the two-dimensional model view and the two-dimensional light field view of the three-dimensional model after adjustment under any view angle is the highest;
and stopping the iterative process in response to the iterative process meeting an iteration ending condition, to obtain a registered three-dimensional model, wherein the iteration ending condition is that a target number of consecutive first difference parameters are each smaller than a target threshold parameter.
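The iterative process of this claim can be sketched as follows. Everything here is illustrative: the projection is a toy orthographic one, the views are represented as index-corresponded 2-D point sets rather than rendered images, and the adjustment is translation-only; the patent does not fix any of these choices:

```python
import numpy as np

def project(points, view_axis):
    # Toy orthographic "view": drop one coordinate. A real system would
    # render the model and resample the light field at the chosen angle.
    return np.delete(points, view_axis, axis=1)

def view_difference(model_view, lightfield_view):
    # First difference parameter: mean 2-D distance between the views.
    return float(np.linalg.norm(model_view - lightfield_view, axis=1).mean())

def register_by_views(model_pts, lightfield_pts,
                      target_count=3, target_threshold=1e-6, max_iters=50):
    # Stop once `target_count` consecutive first difference parameters
    # fall below `target_threshold` (the iteration-ending condition).
    pts = model_pts.copy()
    consecutive = 0
    for it in range(max_iters):
        axis = it % 3                       # a different view angle each pass
        mv, lv = project(pts, axis), project(lightfield_pts, axis)
        diff = view_difference(mv, lv)
        if diff < target_threshold:
            consecutive += 1
            if consecutive >= target_count:
                break
        else:
            consecutive = 0
            # Adjust the model position: lift the mean 2-D offset back to 3-D.
            offset = np.insert((lv - mv).mean(axis=0), axis, 0.0)
            pts = pts + offset
    return pts
```

Requiring several consecutive sub-threshold differences, rather than a single one, prevents the loop from stopping at a view angle in which a residual misalignment happens to be invisible.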
6. The three-dimensional model processing apparatus according to claim 5, wherein the registration unit includes:
a parameter determination subunit configured to determine a first registration parameter of the three-dimensional model according to the first difference parameter, the first registration parameter maximizing the degree of coincidence between the two-dimensional model view of the adjusted three-dimensional model and the two-dimensional light field view at the arbitrary view angle, and the first registration parameter comprising a translation parameter and a rotation parameter;
a position adjustment subunit configured to perform at least one of the following according to the first registration parameter:
controlling the three-dimensional model to translate according to the translation parameter;
and controlling the three-dimensional model to rotate according to the rotation parameter.
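The translation and rotation adjustments above amount to a rigid transform applied to the model's vertices. In this sketch the Euler-angle parameterisation is one common choice, not something the claim specifies, and the helper names are illustrative:

```python
import numpy as np

def rotation_from_euler(rx, ry, rz):
    # Rotation matrix R = Rz @ Ry @ Rx from Euler angles in radians.
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def apply_registration(vertices, translation=None, rotation=None):
    # "At least one of": either component of the first registration
    # parameter may be applied on its own.
    out = vertices.copy()
    if rotation is not None:
        out = out @ rotation.T   # rotate about the origin
    if translation is not None:
        out = out + translation
    return out
```

Note that rotating about the origin versus about the model centroid gives different results for an off-centre model; which pivot the patent intends is not stated.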
7. The three-dimensional model processing apparatus according to claim 5, wherein the registration unit includes:
a view determination subunit configured to determine an original two-dimensional model view obtained by mapping the three-dimensional model to the arbitrary view angle, and an original two-dimensional light field view obtained by mapping the light field data to the arbitrary view angle;
a background removal subunit configured to remove background information from the original two-dimensional model view to obtain the two-dimensional model view at the arbitrary view angle;
the background removal subunit being further configured to remove background information from the original two-dimensional light field view to obtain the two-dimensional light field view at the arbitrary view angle.
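A minimal sketch of the background-removal step. Depth thresholding is only one plausible criterion for deciding which pixels are background; the claim leaves the segmentation method open, and the `far` threshold below is a made-up value:

```python
import numpy as np

def foreground_mask_from_depth(depth, far=10.0):
    # Pixels closer than `far` are treated as belonging to the object.
    return depth < far

def remove_background(view, mask, fill=0.0):
    # Replace background pixels of an H x W x C view with a constant fill,
    # leaving only the object visible for the subsequent comparison.
    return np.where(mask[..., None], view, fill)
```

Removing the background from both views before computing the first difference parameter keeps scene clutter from dominating the comparison between the rendered model and the light field view.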
8. The three-dimensional model processing apparatus according to claim 5, further comprising:
a three-dimensional reconstruction unit configured to perform three-dimensional reconstruction on the light field data to obtain first point cloud data corresponding to the light field data, wherein the first point cloud data is used for describing geometric information in the three-dimensional scene;
a vertex extraction unit configured to extract vertices in the three-dimensional model and form the extracted vertices into second point cloud data corresponding to the three-dimensional model;
a registration parameter determining unit configured to determine a second registration parameter of the three-dimensional model according to a second difference parameter between the first point cloud data and the second point cloud data, wherein the second registration parameter maximizes a degree of coincidence between the second point cloud data corresponding to the adjusted three-dimensional model and the first point cloud data;
and a position adjustment unit configured to adjust the position of the three-dimensional model according to the second registration parameter.
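The vertex extraction unit's job can be sketched in a couple of lines. The mesh representation here is hypothetical (a flat vertex array; real models would come from a mesh file format), and deduplication is an assumption, since vertices shared by several faces may be stored repeatedly:

```python
import numpy as np

def mesh_vertices_to_point_cloud(vertices):
    # The second point cloud is simply the model's vertex positions;
    # connectivity is discarded, since only geometry is compared during
    # registration. Duplicate vertices are dropped.
    pts = np.asarray(vertices, dtype=float).reshape(-1, 3)
    return np.unique(pts, axis=0)
```

The resulting array can be fed directly into a point-cloud registration routine as the model-side cloud.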
9. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the three-dimensional model processing method of any one of claims 1 to 4.
10. A computer-readable storage medium storing instructions that, when executed by a processor of an electronic device, cause the electronic device to perform the three-dimensional model processing method of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110709960.5A CN113436348B (en) | 2021-06-25 | 2021-06-25 | Three-dimensional model processing method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113436348A CN113436348A (en) | 2021-09-24 |
CN113436348B true CN113436348B (en) | 2023-10-03 |
Family
ID=77754402
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110709960.5A Active CN113436348B (en) | 2021-06-25 | 2021-06-25 | Three-dimensional model processing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113436348B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113989432A (en) * | 2021-10-25 | 2022-01-28 | 北京字节跳动网络技术有限公司 | 3D image reconstruction method and device, electronic equipment and storage medium |
CN113781664B (en) * | 2021-11-09 | 2022-01-25 | 四川省交通勘察设计研究院有限公司 | VR panorama construction display method, system and terminal based on three-dimensional model |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103163722A (en) * | 2013-02-21 | 2013-06-19 | 中山大学 | Three-dimensional image display system and three-dimensional image display method based on micro display chip array |
CN103777455A (en) * | 2014-02-25 | 2014-05-07 | 浙江大学 | Spherical immersion three-dimension displaying method and system based on light field splicing |
CN106056656A (en) * | 2016-03-21 | 2016-10-26 | 陈宇鹏 | Three-dimensional display data acquisition method |
CN110599593A (en) * | 2019-09-12 | 2019-12-20 | 北京三快在线科技有限公司 | Data synthesis method, device, equipment and storage medium |
CN110807413A (en) * | 2019-10-30 | 2020-02-18 | 浙江大华技术股份有限公司 | Target display method and related device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FI20125277L (en) * | 2012-03-14 | 2013-09-15 | Mirasys Business Analytics Oy | METHOD, SYSTEM AND COMPUTER SOFTWARE PRODUCT FOR COORDINATING VIDEO INFORMATION WITH OTHER MEASUREMENT INFORMATION |
2021-06-25: CN application CN202110709960.5A filed; granted as CN113436348B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN113436348A (en) | 2021-09-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3933783A1 (en) | Computer application method and apparatus for generating three-dimensional face model, computer device, and storage medium | |
JP7190042B2 (en) | Shadow rendering method, apparatus, computer device and computer program | |
US11403763B2 (en) | Image segmentation method and apparatus, computer device, and storage medium | |
EP3779883B1 (en) | Method and device for repositioning in camera orientation tracking process, and storage medium | |
US11798190B2 (en) | Position and pose determining method, apparatus, smart device, and storage medium | |
US11436779B2 (en) | Image processing method, electronic device, and storage medium | |
CN110097576B (en) | Motion information determination method of image feature point, task execution method and equipment | |
CN111091166B (en) | Image processing model training method, image processing device, and storage medium | |
CN110110787A (en) | Location acquiring method, device, computer equipment and the storage medium of target | |
CN110599593B (en) | Data synthesis method, device, equipment and storage medium | |
CN111680758B (en) | Image training sample generation method and device | |
CN113436348B (en) | Three-dimensional model processing method and device, electronic equipment and storage medium | |
CN112581358B (en) | Training method of image processing model, image processing method and device | |
CN110335224B (en) | Image processing method, image processing device, computer equipment and storage medium | |
CN113706678A (en) | Method, device and equipment for acquiring virtual image and computer readable storage medium | |
CN111325220B (en) | Image generation method, device, equipment and storage medium | |
CN113706440A (en) | Image processing method, image processing device, computer equipment and storage medium | |
CN113570645A (en) | Image registration method, image registration device, computer equipment and medium | |
CN112950753A (en) | Virtual plant display method, device, equipment and storage medium | |
CN112967261B (en) | Image fusion method, device, equipment and storage medium | |
CN111982293B (en) | Body temperature measuring method and device, electronic equipment and storage medium | |
CN113012064A (en) | Image processing method, device, equipment and storage medium | |
CN114093020A (en) | Motion capture method, motion capture device, electronic device and storage medium | |
CN112767453A (en) | Face tracking method and device, electronic equipment and storage medium | |
WO2024108555A1 (en) | Face image generation method and apparatus, device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||