CN115861572A - Three-dimensional modeling method, device, equipment and storage medium - Google Patents

Three-dimensional modeling method, device, equipment and storage medium

Info

Publication number
CN115861572A
Authority
CN
China
Prior art keywords
modeled
boundary
information
dimensional
depth
Prior art date
Legal status (assumed, not a legal conclusion)
Granted
Application number
CN202310161488.5A
Other languages
Chinese (zh)
Other versions
CN115861572B (en)
Inventor
林铖
杨帆
曾子骄
Current Assignee (the listed assignees may be inaccurate)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (assumed, not a legal conclusion)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202310161488.5A
Publication of CN115861572A
Application granted
Publication of CN115861572B
Active legal status
Anticipated expiration legal status

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the present application discloses a three-dimensional modeling method, apparatus, device, and storage medium. The method comprises: acquiring a planar image set of an object to be modeled, the set comprising planar images of the object captured from different viewing angles; acquiring boundary information of the object to be modeled, the boundary information indicating the actual boundary of the object in the planar images; performing depth prediction processing on each planar image in the set to obtain depth information of the object to be modeled; and modeling the object according to its boundary information and depth information to obtain a three-dimensional model of the object. By constraining the modeling process with both the depth information and the boundary information of the object to be modeled, the quality of the resulting three-dimensional model can be improved.

Description

Three-dimensional modeling method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a three-dimensional modeling method, a three-dimensional modeling apparatus, a computer device, and a computer-readable storage medium.
Background
With advances in research, three-dimensional models are widely used in many fields, such as gaming, design, and video. One way to generate a three-dimensional model of a target object is to generate it from planar images of the target object. However, research has found that three-dimensional models obtained by modeling directly from planar images of a target object are of poor quality (for example, low fidelity).
Disclosure of Invention
The embodiments of the present application provide a three-dimensional modeling method, a three-dimensional modeling apparatus, a computer device, and a computer-readable storage medium, which can improve the quality of a three-dimensional model.
In one aspect, an embodiment of the present application provides a three-dimensional modeling method, including:
acquiring a planar image set of an object to be modeled, wherein the planar image set comprises a first planar image and a second planar image, the first planar image and the second planar image being planar images of the object to be modeled from different viewing angles;
acquiring boundary information of the object to be modeled, wherein the boundary information comprises first geometric boundary annotation information and second geometric boundary annotation information; the first geometric boundary annotation information indicates the actual boundary of the object to be modeled in the first planar image, and the second geometric boundary annotation information indicates the actual boundary of the object to be modeled in the second planar image;
respectively carrying out depth prediction processing on the plane images in the plane image set to obtain depth information of an object to be modeled;
and modeling the object to be modeled according to the boundary information of the object to be modeled and the depth information of the object to be modeled to obtain a three-dimensional model of the object to be modeled.
In one aspect, an embodiment of the present application provides a three-dimensional modeling apparatus, including:
an acquisition unit, configured to acquire a planar image set of an object to be modeled, wherein the planar image set comprises a first planar image and a second planar image, the first planar image and the second planar image being planar images of the object to be modeled from different viewing angles;
the acquisition unit is further configured to acquire boundary information of the object to be modeled, wherein the boundary information comprises first geometric boundary annotation information and second geometric boundary annotation information; the first geometric boundary annotation information indicates the actual boundary of the object to be modeled in the first planar image, and the second geometric boundary annotation information indicates the actual boundary of the object to be modeled in the second planar image;
the processing unit is used for respectively carrying out depth prediction processing on the plane images in the plane image set to obtain depth information of the object to be modeled;
and the processing unit is further configured to model the object to be modeled according to the boundary information of the object to be modeled and the depth information of the object to be modeled, to obtain a three-dimensional model of the object to be modeled.
In one embodiment, the depth information of the object to be modeled comprises the depth information of the pixel points associated with the object to be modeled in each planar image of the planar image set; when modeling the object to be modeled according to the boundary information of the object to be modeled and the depth information of the object to be modeled to obtain a three-dimensional model of the object to be modeled, the processing unit is specifically configured to:
restoring the position of the pixel point associated with the object to be modeled in the three-dimensional space according to the depth information of the pixel point associated with the object to be modeled in each planar image;
stitching the actual boundaries of the object to be modeled indicated by the first geometric boundary annotation information and the second geometric boundary annotation information, to obtain a three-dimensional boundary line of the object to be modeled;
and generating a three-dimensional model of the object to be modeled through the three-dimensional boundary line of the object to be modeled.
In an embodiment, the processing unit is configured to generate a three-dimensional model of the object to be modeled through a three-dimensional boundary line of the object to be modeled, and specifically is configured to:
determining a mesh template corresponding to the object to be modeled according to the topological classification of the three-dimensional boundary line of the object to be modeled;
and cutting the mesh template corresponding to the object to be modeled based on the three-dimensional boundary line of the object to be modeled, to obtain the three-dimensional model of the object to be modeled.
In one embodiment, the processing unit is further configured to:
acquiring a smoothness constraint on the mesh template corresponding to the object to be modeled and a fidelity constraint on the object to be modeled;
predicting mesh deformation parameters corresponding to the object to be modeled according to the position of the object to be modeled in three-dimensional space, the smoothness constraint on the mesh template corresponding to the object to be modeled, and the fidelity constraint on the object to be modeled;
and performing model optimization processing on the three-dimensional model of the object to be modeled according to the mesh deformation parameters corresponding to the object to be modeled, to obtain a model-optimized three-dimensional model.
In one embodiment, the object to be modeled is composed of M object elements, and the planar image set includes, for each object element, a planar image at a first viewing angle and a planar image at a second viewing angle, where M is an integer greater than 1; the boundary information of the object to be modeled comprises geometric boundary annotation information corresponding to each object element, which indicates the actual boundary of that object element in the planar image at the first viewing angle and in the planar image at the second viewing angle; and the depth information of the object to be modeled comprises the depth information of the M object elements;
the processing unit is used for modeling the object to be modeled according to the boundary information of the object to be modeled and the depth information of the object to be modeled to obtain a three-dimensional model of the object to be modeled, and is specifically used for:
acquiring a matching relation between a planar image of the M object elements under a first view angle and a planar image of the M object elements under a second view angle;
determining boundary information corresponding to each object element according to the matching relation between the planar image of the M object elements under the first view angle and the planar image of the M object elements under the second view angle;
modeling the object elements according to the boundary information of each object element and the depth information of the object elements to obtain three-dimensional models of M object elements;
and stacking the three-dimensional models of the M object elements to obtain the three-dimensional model of the object to be modeled.
In one embodiment, the first planar image is one of a front view and a back view of the object to be modeled, and the second planar image is the other of the two; when obtaining the matching relationship between the planar images of the M object elements at the first viewing angle and the planar images of the M object elements at the second viewing angle, the processing unit is specifically configured to:
performing view transformation processing on the planar images of the M object elements at the first viewing angle according to the second viewing angle, to obtain M transformed views;
and determining the matching relationship between the planar images of the M object elements at the first viewing angle and the planar images of the M object elements at the second viewing angle through the similarity between the boundary of the object element in each transformed view and the boundary of the object element in each of the M planar images at the second viewing angle.
In one embodiment, each object element is associated with a layer identifier, and the layer identifier is used for indicating the display priority of the associated object element; the processing unit is further configured to:
if the three-dimensional models of the at least two object elements have the overlapped area, determining the display priority of the three-dimensional models of the at least two object elements through layer identifiers associated with the at least two object elements;
and displaying the three-dimensional model of the object element with the highest display priority of the three-dimensional models in the overlapping area.
In one embodiment, each object element is associated with a layer identifier, and the layer identifier is used for indicating the display priority of the associated object element; if it is detected that there is a mutual interpenetration between the three-dimensional models of at least two object elements in the three-dimensional model of the object to be modeled, the processing unit is further configured to:
performing mesh optimization processing on the meshes contained in the three-dimensional model of at least one of the at least two object elements according to the layer identifiers associated with the at least two object elements, to obtain a mesh-optimized three-dimensional model of the object to be modeled;
wherein the three-dimensional models of any two object elements in the mesh-optimized three-dimensional model of the object to be modeled do not interpenetrate.
In one embodiment, the depth information of the object to be modeled is obtained by respectively performing depth prediction processing on the plane images in the plane image set by adopting a depth prediction model; the training process of the depth prediction model comprises the following steps:
performing depth prediction processing on target pixel points associated with target objects in the training images by adopting a depth prediction model to obtain depth prediction results corresponding to the target pixel points;
predicting the normal vector of each target pixel point according to the depth prediction result of each target pixel point;
performing joint optimization on the depth prediction model based on the depth difference information and the normal vector difference information to obtain an optimized depth prediction model;
the depth difference information is obtained based on the difference between the depth prediction result of each target pixel point and the corresponding labeling result of the training image; the normal vector difference information is obtained based on the difference between the prediction normal vector of each target pixel point and the true normal vector of the target pixel point.
In an embodiment, the processing unit is configured to obtain boundary information of an object to be modeled, and specifically, to:
respectively carrying out boundary detection on the plane images in the plane image set of the object to be modeled to obtain the boundary of the object to be modeled in each plane image;
and identifying the boundaries of the object to be modeled in each planar image using a geometric boundary identification model, to obtain the geometric boundary annotation information corresponding to each planar image.
Accordingly, the present application provides a computer device comprising:
a memory having a computer program stored therein;
and the processor is used for loading a computer program to realize the three-dimensional modeling method.
Accordingly, the present application provides a computer readable storage medium having stored thereon a computer program adapted to be loaded by a processor and to execute the above-mentioned three-dimensional modeling method.
Accordingly, the present application provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the three-dimensional modeling method.
In the embodiments of the present application, a planar image set of an object to be modeled is acquired, the set comprising planar images of the object from different viewing angles; boundary information of the object to be modeled is acquired, the boundary information indicating the actual boundary of the object in the planar images; depth prediction processing is performed on each planar image in the set to obtain depth information of the object to be modeled; and the object is modeled according to its boundary information and depth information to obtain a three-dimensional model of the object to be modeled. By constraining the modeling process with both the depth information and the boundary information of the object to be modeled, the quality of the three-dimensional model can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. The drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of a three-dimensional modeling scheme provided by an embodiment of the present application;
fig. 2 is a flowchart of a three-dimensional modeling method according to an embodiment of the present application;
FIG. 3 is a front view of an object to be modeled according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a common topology type provided by an embodiment of the present application;
FIG. 5 is a flow chart of another three-dimensional modeling method provided by an embodiment of the present application;
FIG. 6 is a three-dimensional modeling flow chart framework provided by an embodiment of the present application;
FIG. 7 is a schematic management page of a three-dimensional modeling plug-in provided in an embodiment of the present application;
fig. 8 is a schematic structural diagram of a three-dimensional modeling apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. The described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The present application relates to artificial intelligence and computer vision technologies, which are briefly introduced below:
artificial Intelligence (AI): AI is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the implementation method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making. The embodiment of the application mainly relates to the depth prediction processing of a plane image containing an object to be modeled through a depth prediction model, and the depth information of the object to be modeled is obtained.
AI technology is a comprehensive discipline that involves a wide range of fields, covering both hardware-level and software-level technology. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operating/interaction systems, mechatronics, and the like. Artificial intelligence software technology mainly includes computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
Computer Vision technology (Computer Vision, CV): computer vision is the science of how to make machines "see"; more specifically, it uses cameras and computers instead of human eyes to identify, track, and measure targets, and further performs graphics processing so that the result becomes an image more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision studies theories and techniques that attempt to build artificial intelligence systems capable of capturing information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, as well as common biometric technologies such as face recognition and fingerprint recognition. The embodiments of the present application mainly involve constructing a three-dimensional model of an object to be modeled from planar images of the object at different viewing angles and from the boundary information of the object in each planar image.
Based on artificial intelligence and computer vision technology, the embodiments of the present application provide a three-dimensional modeling scheme to improve the quality of three-dimensional models generated from planar images. Fig. 1 is a schematic diagram of a three-dimensional modeling scheme provided in an embodiment of the present application. As shown in fig. 1, the three-dimensional modeling scheme may be executed by a computer device 101, where the computer device 101 may be a terminal or a server with three-dimensional modeling capability. The terminal may include, but is not limited to: a smartphone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a portable personal computer, a Mobile Internet Device (MID), a vehicle-mounted terminal, a smart appliance, an unmanned aerial vehicle, a wearable device, and other devices with three-dimensional modeling capability. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), and big data and artificial intelligence platforms, which is not limited in the embodiments of the present application.
It should be noted that the number of computer devices in fig. 1 is only for example and does not constitute a practical limitation of the present application; for example, the three-dimensional modeling system may further include a computer device 102, a terminal device 103, a server 104, or the like.
In a specific implementation, the general principle of the three-dimensional modeling scheme is as follows:
(1) The computer device 101 acquires a planar image set of an object to be modeled, wherein the planar image set comprises a first planar image and a second planar image; the first planar image and the second planar image are planar images of the object to be modeled from different viewing angles. The object to be modeled may be, for example, a garment, an ornament, a daily necessity, or a game prop, which is not limited in this application. The planar image set includes at least two of the following planar images: a front view, a back view, a top view, a left view, and a right view of the object to be modeled. The planar images required for modeling different objects may be the same or different. For example, when the object to be modeled is a garment, the planar image set may include a front view and a back view of the garment; when the object to be modeled is a cup, the planar image set may include a front view and a top view of the cup; when the object to be modeled is a vehicle, the planar image set may include a front view, a top view, a left view, and a back view of the vehicle.
(2) The computer device 101 acquires boundary information of the object to be modeled, where the boundary information includes first geometric boundary annotation information and second geometric boundary annotation information; the first geometric boundary annotation information indicates the actual boundary of the object to be modeled in the first planar image, and the second geometric boundary annotation information indicates the actual boundary of the object to be modeled in the second planar image.
Specifically, the boundary (contour) of the object to be modeled in a planar image may be composed jointly of a visual boundary and an actual boundary. The actual boundary of the object to be modeled is a boundary that exists objectively on the object to be modeled (or on its three-dimensional model) and is not affected by the viewing angle; that is, no matter from which angle the object to be modeled (or its three-dimensional model) is observed, its actual boundary objectively exists. For example, for a garment, the actual boundaries include the collar, the cuffs, and the like. The visual boundary of the object to be modeled refers to the part of the boundary (contour) of the object in a planar image at a certain viewing angle other than the actual boundary; the visual boundary is not an objectively existing boundary and changes (e.g., disappears) as the viewing angle changes.
(3) The computer device 101 respectively performs depth prediction processing on the plane images in the plane image set to obtain depth information of the object to be modeled. The depth information of the object to be modeled consists of depth information of pixel points of the object to be modeled, which are associated in each plane image. In one embodiment, the depth information of the object to be modeled may include first depth information and second depth information, the first depth information is obtained by the computer device 101 performing depth prediction processing on the first planar image, and the second depth information is obtained by the computer device 101 performing depth prediction processing on the second planar image.
In an embodiment, the computer device 101 may perform depth prediction processing on the planar images in the planar image set through the depth prediction model, to obtain depth information of the object to be modeled, which is output by the depth prediction model.
(4) The computer device 101 models the object to be modeled according to the boundary information of the object to be modeled and the depth information of the object to be modeled, to obtain a three-dimensional model of the object to be modeled. In one embodiment, the depth information of the object to be modeled includes the depth information of the pixel points associated with the object to be modeled (e.g., the pixel points representing the object in the planar image) in each planar image of the planar image set. The computer device 101 restores the positions of these associated pixel points in three-dimensional space according to their depth information in each planar image. After the positions of the associated pixel points in three-dimensional space are obtained, the computer device stitches the pixel points in three-dimensional space according to the first geometric boundary annotation information and the second geometric boundary annotation information, to obtain a three-dimensional boundary line of the object to be modeled. The three-dimensional boundary line indicates the actual boundary of the object to be modeled in three-dimensional space. After obtaining the three-dimensional boundary line, the computer device 101 may generate a three-dimensional model of the object to be modeled from the three-dimensional boundary line.
In the embodiments of the present application, a planar image set of an object to be modeled is acquired, the set comprising planar images of the object from different viewing angles; boundary information of the object to be modeled is acquired, the boundary information indicating the actual boundary of the object in the planar images; depth prediction processing is performed on each planar image in the set to obtain depth information of the object to be modeled; and the object is modeled according to its boundary information and depth information to obtain a three-dimensional model of the object to be modeled. By constraining the modeling process with both the depth information and the boundary information of the object to be modeled, the quality of the three-dimensional model can be improved.
Based on the three-dimensional modeling scheme, a more detailed three-dimensional modeling method is provided in the embodiments of the present application, and the three-dimensional modeling method provided in the embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 2, fig. 2 is a flowchart of a three-dimensional modeling method according to an embodiment of the present application, where the three-dimensional modeling method may be executed by a computer device, and the computer device may be a terminal device or a server. As shown in fig. 2, the three-dimensional modeling method may include the following steps S201 to S204:
s201, acquiring a plane image set of an object to be modeled.
The planar image set comprises a first planar image and a second planar image of the object to be modeled, the first planar image and the second planar image being planar images of the object to be modeled from different viewing angles. Optionally, the planar image set may further include a third planar image of the object to be modeled, whose viewing angle differs from those of the first planar image and the second planar image.
In one embodiment, the object to be modeled is composed of one object element; that is, the object element is the object to be modeled. In this case, the planar image set includes a first planar image and a second planar image of the object element.
In another embodiment, the object to be modeled is composed of M object elements, where M is an integer greater than 1; for example, the object to be modeled is composed of a shirt (first object element) and a jacket (second object element). In this case, the planar image set includes a first planar image and a second planar image of each of the M object elements; for example, assuming that M =2, the set of planar images of the object to be modeled includes first and second planar images of the first object element, and first and second planar images of the second object element.
In one embodiment, among the M object elements, the viewing angles corresponding to the first planar images of all object elements are the same, and the viewing angles corresponding to the second planar images are also the same; in this case, the first planar image and the second planar image of the object to be modeled may be obtained by stacking, in layer order, the first planar images and the second planar images of the M object elements respectively.
In another embodiment, among the M object elements, the viewing angle corresponding to the first planar image (or the second planar image) of at least one object element may differ from that of the other object elements. For example, suppose the M object elements are a shirt, a coat, pants, and shoes (i.e., M = 4): the first planar images of the shirt, the coat, and the pants are all front views, and their second planar images are all back views, while the first planar image of the shoes is a front view and its second planar image is a top view.
S202, boundary information of the object to be modeled is obtained.
The boundary information comprises first geometric boundary annotation information and second geometric boundary annotation information; the first geometric boundary annotation information indicates the actual boundary of the object to be modeled in the first planar image, and the second geometric boundary annotation information indicates the actual boundary of the object to be modeled in the second planar image.
In particular, the boundary (contour) of the object to be modeled in the planar image may be composed jointly of a visual boundary and an actual boundary. The actual boundary of the object to be modeled is a boundary in which an object to be modeled (or a three-dimensional model of the object to be modeled) is objectively present; the visual boundary of the object to be modeled refers to a boundary, except an actual boundary, of boundaries (outlines) of the object to be modeled in a plane image under a visual angle when the object to be modeled (or a three-dimensional model of the object to be modeled) is observed from the visual angle. Fig. 3 is a front view of an object to be modeled according to an embodiment of the present application. As shown in fig. 3, in the front view (plane image) of the short sleeve (object to be modeled), the boundary (contour) of the short sleeve is composed of both the visual boundary and the actual boundary; wherein the solid line part is a visual boundary and the dashed line part is an actual boundary.
In one embodiment, if the object to be modeled is composed of M object elements, where M is an integer greater than 1, the planar image set includes a first planar image and a second planar image of each object element among the M object elements; the boundary information of the object to be modeled further includes the geometric annotation information corresponding to each object element, where the geometric annotation information corresponding to each object element is used to indicate the actual boundary of the object element in the planar image under the first viewing angle (i.e. the first planar image of the object element), and indicate the actual boundary of the object element in the planar image under the second viewing angle (i.e. the second planar image of the object element).
In one implementation, the computer device performs contour detection on each planar image in the planar image set (e.g., by image binarization, the Laplacian operator, etc.) to obtain the boundary of the object to be modeled (or of each object element) in each planar image. Further, after obtaining these boundaries, the computer device may perform boundary optimization processing on them using a trajectory compression algorithm (such as the Douglas-Peucker algorithm) to obtain simplified boundaries. The computer device can display the boundary of the object to be modeled (or object element) in each planar image and generate the boundary information of the object to be modeled based on the boundary annotation operations of the modeler.
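As an illustration of this step, the following is a minimal OpenCV sketch of binarization-based contour detection followed by Douglas-Peucker simplification. The function name, the use of Otsu thresholding, and the epsilon ratio are illustrative assumptions, not details from the patent.

```python
import cv2
import numpy as np

def extract_boundary(plane_image: np.ndarray, epsilon_ratio: float = 0.002) -> np.ndarray:
    """Return a simplified outer contour of the object in a planar image."""
    gray = cv2.cvtColor(plane_image, cv2.COLOR_BGR2GRAY)
    # Binarize so the object stands out from the background (Otsu's method is an assumption).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)  # keep the largest outline
    # Douglas-Peucker boundary optimization, as mentioned above.
    epsilon = epsilon_ratio * cv2.arcLength(contour, True)
    return cv2.approxPolyDP(contour, epsilon, True)
```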
In another implementation, the computer device may perform boundary identification processing on the planar images in the planar image set using a boundary identification model, to obtain the boundary information of the object to be modeled; the boundary identification processing identifies the actual and visual boundaries of the object to be modeled (or of each object element) in each planar image. The training process of the boundary identification model includes: performing boundary identification processing on boundary training data using an initial model to obtain identification results corresponding to the boundary training data; and optimizing the relevant parameters of the initial model (e.g., adjusting the number of network layers and the size of the convolution kernels) based on the difference between the identification results and the calibration data corresponding to the boundary training data, to obtain the boundary identification model. It can be understood that determining the boundary information of the object to be modeled through the boundary identification model saves labor cost and further improves modeling efficiency.
And S203, respectively carrying out depth prediction processing on the plane images in the plane image set to obtain the depth information of the object to be modeled.
The depth information of the object to be modeled is composed of the depth information of the pixel points associated with the object to be modeled (e.g., the pixel points representing the object) in each planar image. The computer device can perform depth prediction processing on each planar image in the planar image set through a depth prediction model, to obtain the depth information of the object to be modeled.
In one embodiment, the training process of the depth prediction model includes: performing depth prediction processing on the target pixel points associated with the target object in a training image using the depth prediction model, to obtain depth prediction results corresponding to the target pixel points; and predicting the normal vector of each target pixel point according to its depth prediction result. Specifically, based on K (K is an integer greater than 2) neighboring pixel points of the target pixel point (pixel points whose distance from the target pixel point is less than a threshold), at least one candidate normal vector of the target pixel point is determined, and a weighted sum of the candidate normal vectors is computed to obtain the predicted normal vector of the target pixel point. After the normal vector of each target pixel point is predicted, the depth prediction model is jointly optimized based on the depth difference information and the normal vector difference information, to obtain the optimized depth prediction model. The depth difference information is obtained based on the difference between the depth prediction result of each target pixel point and the corresponding annotation result of the training image; the normal vector difference information is obtained based on the difference between the predicted normal vector of each target pixel point and its true normal vector.
It should be noted that, during training, jointly optimizing the depth prediction model through both the depth difference information and the normal vector difference information improves the accuracy of the optimized model compared with optimizing through the depth difference information alone.
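For illustration, the following PyTorch-style sketch shows one way such a joint objective could look. The finite-difference normal estimation and the L1/cosine loss forms are assumptions made for the sketch; the patent only states that normals are predicted from neighboring pixels and that depth and normal differences are jointly optimized.

```python
import torch
import torch.nn.functional as F

def normals_from_depth(depth: torch.Tensor) -> torch.Tensor:
    """depth: (B, 1, H, W) -> unit normals (B, 3, H, W), via finite differences
    (an assumed stand-in for the K-neighbor candidate-normal scheme above)."""
    dzdx = F.pad(depth[:, :, :, 1:] - depth[:, :, :, :-1], (0, 1, 0, 0))
    dzdy = F.pad(depth[:, :, 1:, :] - depth[:, :, :-1, :], (0, 0, 0, 1))
    n = torch.cat([-dzdx, -dzdy, torch.ones_like(depth)], dim=1)
    return F.normalize(n, dim=1)

def joint_loss(pred_depth, gt_depth, gt_normal, mask, w_normal: float = 0.5):
    """mask (B, 1, H, W) selects the target pixels associated with the object."""
    depth_loss = (mask * (pred_depth - gt_depth).abs()).sum() / mask.sum()
    pred_normal = normals_from_depth(pred_depth)
    cos = (pred_normal * gt_normal).sum(dim=1, keepdim=True)
    normal_loss = (mask * (1.0 - cos)).sum() / mask.sum()
    return depth_loss + w_normal * normal_loss  # joint optimization of both terms
```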
S204, modeling the object to be modeled according to the boundary information of the object to be modeled and the depth information of the object to be modeled to obtain a three-dimensional model of the object to be modeled.
In one embodiment, the depth information of the object to be modeled includes the depth information of the pixel points associated with the object to be modeled (e.g., the pixel points representing the object in the planar image) in each planar image of the planar image set. The computer device restores the positions of these associated pixel points in three-dimensional space according to their depth information in each planar image. After the positions of the associated pixel points in three-dimensional space are obtained, the computer device stitches them in three-dimensional space according to the first geometric boundary annotation information and the second geometric boundary annotation information, to obtain a three-dimensional boundary line of the object to be modeled. The three-dimensional boundary line indicates the actual boundary of the object to be modeled in three-dimensional space.
For example, assuming that the actual boundary indicated by the first geometric boundary annotation information in the first planar image consists of pixel points 1 to 10, and the actual boundary indicated by the second geometric boundary annotation information in the second planar image consists of pixel points 10 to 20, the computer device performs stitching processing according to the two pieces of annotation information (for example, connecting pixel points 1 to 20 in sequence in three-dimensional space), to obtain the three-dimensional boundary line of the object to be modeled.
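A minimal sketch of this lifting-and-stitching step follows. The orthographic back-projection (using pixel coordinates as x, y and the predicted depth as z) is a simplifying assumption for illustration; the patent does not specify the camera model.

```python
import numpy as np

def lift_to_3d(pixels: np.ndarray, depth_map: np.ndarray) -> np.ndarray:
    """pixels: (N, 2) integer (x, y) coordinates on the actual boundary;
    depth_map: (H, W) predicted depth. Returns (N, 3) points in 3D space."""
    z = depth_map[pixels[:, 1], pixels[:, 0]]
    return np.column_stack([pixels.astype(np.float64), z])

def stitch_boundary(front_pts: np.ndarray, back_pts: np.ndarray) -> np.ndarray:
    """Concatenate the actual-boundary segments recovered from the two views
    into one three-dimensional boundary line (e.g. points 1-10, then 10-20)."""
    return np.vstack([front_pts, back_pts])
```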
After the three-dimensional boundary line of the object to be modeled is obtained, the computer device can generate the three-dimensional model of the object to be modeled from the three-dimensional boundary line. Specifically, the computer device determines a mesh (Mesh) template corresponding to the object to be modeled according to the topological classification of the three-dimensional boundary line, where different topological classifications correspond to different mesh templates. Fig. 4 is a schematic diagram of common topology types provided in an embodiment of the present application. As shown in fig. 4, common topology types include: the T-shaped topology, the inverted-V-shaped topology, and the humanoid topology. The T-shaped topology can represent garments such as shirts, suits, coats, and skirts; the inverted-V-shaped topology can represent pant-like garments; and the humanoid topology can represent jumpsuits and the like. After determining the mesh template corresponding to the object to be modeled, the computer device performs cutting processing on the mesh template based on the three-dimensional boundary line (for example, removing the meshes in the template that lie outside the three-dimensional boundary line), to obtain the three-dimensional model of the object to be modeled.
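The sketch below illustrates one way template selection and cutting could be realized with the trimesh library. The loop-count classification rule, the template file names, and the xy-plane projection test are all illustrative assumptions, not the patent's actual criteria.

```python
import numpy as np
import trimesh
from matplotlib.path import Path

# Hypothetical template files for the three topology classes named above.
MESH_TEMPLATES = {"T": "t_base.obj", "inverted_V": "pants_base.obj", "humanoid": "jumpsuit_base.obj"}

def classify_topology(num_boundary_loops: int) -> str:
    # Assumed rule: pants have 3 openings (waist + two legs), a shirt has 4
    # (collar + hem + two cuffs), a jumpsuit has 5 (collar + two cuffs + two legs).
    return {3: "inverted_V", 4: "T"}.get(num_boundary_loops, "humanoid")

def cut_template(template: trimesh.Trimesh, boundary_xy: np.ndarray) -> trimesh.Trimesh:
    """Keep only the faces whose vertices project inside the boundary outline,
    i.e. remove meshes lying outside the three-dimensional boundary line."""
    inside = Path(boundary_xy).contains_points(template.vertices[:, :2])
    keep = inside[template.faces].all(axis=1)  # faces with all 3 vertices inside
    return template.submesh([np.where(keep)[0]], append=True)
```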
Optionally, the computer device may further acquire a smoothness constraint on the mesh template corresponding to the object to be modeled and a fidelity constraint on the object to be modeled; predict the mesh deformation parameters corresponding to the object to be modeled according to the position of the object to be modeled in three-dimensional space, the smoothness constraint, and the fidelity constraint; and perform model optimization processing on the three-dimensional model of the object to be modeled according to the mesh deformation parameters, to obtain a model-optimized three-dimensional model. It can be understood that performing this model optimization based on the position of the object in three-dimensional space, the smoothness constraint on the mesh template, and the fidelity constraint on the object improves how faithfully the three-dimensional model restores details (such as the wrinkles of a garment), and thus further improves the quality of the three-dimensional model of the object to be modeled.
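As a sketch of this joint-constraint idea, the energy below combines a fidelity (data) term pulling mesh vertices toward the positions recovered from the depth maps with a Laplacian smoothness term. The patent does not specify the exact energy, so the loss forms and the weight are assumptions.

```python
import torch

def deformation_energy(vertices: torch.Tensor, targets: torch.Tensor,
                       laplacian: torch.Tensor, w_smooth: float = 0.1) -> torch.Tensor:
    """vertices, targets: (V, 3); laplacian: sparse (V, V) mesh Laplacian.
    The fidelity term keeps vertices close to the recovered 3D positions;
    the smoothness term penalizes a rough, uneven mesh surface."""
    fidelity = ((vertices - targets) ** 2).sum(dim=1).mean()
    smoothness = (torch.sparse.mm(laplacian, vertices) ** 2).sum(dim=1).mean()
    return fidelity + w_smooth * smoothness
```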
In the embodiments of the present application, a planar image set of an object to be modeled is acquired, the set comprising planar images of the object from different viewing angles; boundary information of the object to be modeled is acquired, the boundary information indicating the actual boundary of the object in the planar images; depth prediction processing is performed on each planar image in the set to obtain depth information of the object to be modeled; and the object is modeled according to its boundary information and depth information to obtain a three-dimensional model of the object to be modeled. By constraining the modeling process with both the depth information and the boundary information of the object to be modeled, the quality of the three-dimensional model can be improved. In addition, jointly optimizing the depth prediction model through the depth difference information and the normal vector difference information during training improves the accuracy of the optimized depth prediction model.
Referring to fig. 5, fig. 5 is a flowchart of another three-dimensional modeling method provided in an embodiment of the present application, where the three-dimensional modeling method may be executed by a computer device, and the computer device may be a terminal device or a server. As shown in fig. 5, the three-dimensional modeling method may include the following steps S501 to S507:
s501, acquiring a plane image set of an object to be modeled.
And S502, obtaining boundary information of the object to be modeled.
S503, respectively carrying out depth prediction processing on the plane images in the plane image set to obtain the depth information of the object to be modeled.
The specific implementation of steps S501 to S503 can refer to the implementation of steps S201 to S203 in fig. 2, and is not described herein again.
S504, acquiring a matching relation between the planar image of the M object elements under the first view angle and the planar image of the M object elements under the second view angle.
In one embodiment, the first planar image is one of a front view and a back view of the object to be modeled, and the second planar image is the other of the two. The computer device performs view transformation processing on the planar images of the M object elements at the first viewing angle (i.e., the first planar images) according to the second viewing angle, to obtain M transformed views.
In one implementation, the computer device may determine the matching relationship between a transformed view and a planar image at the second viewing angle through the chamfer distance between the sampling points on the visual boundary in the transformed view and the sampling points on the visual boundary in the planar image at the second viewing angle, and thereby determine the matching relationship between the planar images at the first viewing angle and the planar images at the second viewing angle. Each visual boundary contains multiple (at least two) sampling points. For example, if the chamfer distance between the sampling points on the visual boundary in transformed view 1 and the sampling points on the visual boundary in planar image 2 at the second viewing angle is smaller than a distance threshold, the computer device determines that planar image 1 at the first viewing angle (corresponding to transformed view 1) matches planar image 2 at the second viewing angle.
In another implementation, after obtaining the M transformed views, the computer device determines the matching relationship between the planar images of the M object elements at the first viewing angle and those at the second viewing angle according to the similarity between the boundary of the object element in each transformed view and the boundary of the object element in each of the M planar images at the second viewing angle. The boundary similarity may be determined through the chamfer distance between the sampling points on the visual boundary in the transformed view and those in the planar image at the second viewing angle; the similarity is inversely proportional to the chamfer distance. For example, assume M = 3 and the second transformed view is obtained by transforming the second planar image at the first viewing angle. If the similarity between the boundary of the object element in the second transformed view and the boundary of the object element in the first planar image at the second viewing angle is 95%, the similarity with the second planar image at the second viewing angle is 25%, and the similarity with the third planar image at the second viewing angle is 13%, the computer device determines that the second planar image at the first viewing angle matches the first planar image at the second viewing angle (i.e., they belong to the same object element).
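A minimal sketch of this chamfer-distance matching follows. The symmetric mean-distance formulation and the greedy argmin pairing are assumptions made for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """a, b: (N, 2) and (M, 2) sampling points on two visual boundaries."""
    d_ab, _ = cKDTree(b).query(a)  # nearest point in b for each point of a
    d_ba, _ = cKDTree(a).query(b)
    return float(d_ab.mean() + d_ba.mean())

def match_views(transformed_boundaries, second_view_boundaries):
    """For each transformed first-view boundary, return the index of the
    second-view boundary with the smallest chamfer distance (highest similarity)."""
    return [int(np.argmin([chamfer_distance(t, s) for s in second_view_boundaries]))
            for t in transformed_boundaries]
```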
In another embodiment, each planar image in the set of planar images is associated with a layer identifier, and the computer device determines, based on the layer identifier associated with each planar image, a matching relationship between the planar image of the M object elements at the first view angle and the planar image of the M object elements at the second view angle; for example, if the layer identifier associated with the planar image 1 in the first view is the same as the layer identifier associated with the planar image 3 in the second view, it is determined that the planar image 1 in the first view matches the planar image 3 in the second view (i.e., planar images belonging to the same object element).
And S505, determining boundary information corresponding to each object element according to the matching relation between the planar image of the M object elements under the first view angle and the planar image of the M object elements under the second view angle.
The computer device can obtain the complete boundary information corresponding to a target object element from the geometric boundary annotation information of the target object element in the first planar image and the geometric boundary annotation information of the target object element in the second planar image; the target object element is any one of the M object elements.
S506, modeling is carried out on the object elements according to the boundary information of each object element and the depth information of the object elements, and three-dimensional models of the M object elements are obtained.
In one embodiment, the depth information of the target object element includes the depth information of the pixel points associated with the target object element (e.g., the pixel points representing the target object element) in the first planar image and the second planar image corresponding to the target object element in the planar image set. The computer device restores the positions of these associated pixel points in three-dimensional space according to their depth information in the first and second planar images. After obtaining these positions, the computer device stitches the associated pixel points in three-dimensional space according to the geometric boundary annotation information in the first planar image and the geometric boundary annotation information in the second planar image, to obtain a three-dimensional boundary line of the target object element. The three-dimensional boundary line of the target object element indicates the actual boundary of the target object element in three-dimensional space.
After obtaining the three-dimensional boundary line of the target object element, the computer device may generate a three-dimensional model of the target object element from it. Specifically, the computer device determines the mesh (Mesh) template corresponding to the target object element according to the topological classification of its three-dimensional boundary line, where different topological classifications correspond to different mesh templates. After determining the mesh template, the computer device performs cutting processing on it based on the three-dimensional boundary line of the target object element (e.g., removing the meshes in the template that lie outside the three-dimensional boundary line), to obtain the three-dimensional model of the target object element.
Optionally, the computer device may further acquire a smoothness constraint on the mesh template corresponding to the target object element and a fidelity constraint on the target object element; predict the mesh deformation parameters corresponding to the target object element according to the position of the target object element in three-dimensional space, the smoothness constraint, and the fidelity constraint; and perform model optimization processing on the three-dimensional model of the target object element according to the mesh deformation parameters, to obtain a model-optimized three-dimensional model. It can be understood that this model optimization improves how faithfully the three-dimensional model restores details (such as the wrinkles of a garment), and thus further improves the quality of the three-dimensional model of the target object element.
Following the above embodiment, the computer device may obtain the three-dimensional models of the M object elements included in the object to be modeled.
S507: stacking the three-dimensional models of the M object elements to obtain the three-dimensional model of the object to be modeled.
In one embodiment, each object element is associated with a layer identifier that indicates the display priority of the associated object element. If the three-dimensional models of at least two object elements have an overlapping region, the computer device determines the display priority of those three-dimensional models through the layer identifiers associated with the object elements, and displays in the overlapping region the three-dimensional model of the object element with the highest display priority. For example, suppose the value of the layer identifier is proportional to the display priority (i.e., the larger the value of the layer identifier associated with an object element, the higher the display priority of its three-dimensional model), the three-dimensional model of object element 1 and the three-dimensional model of object element 2 have an overlapping region A, the layer identifier associated with object element 1 has a value of 3, and the layer identifier associated with object element 2 has a value of 7; the computer device then displays the three-dimensional model of object element 2 in the overlapping region A.
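The display-priority rule reduces to a comparison of layer identifier values; a small sketch using the numbers from the example above (element names and the dictionary layout are illustrative):

```python
def visible_element(overlapping, layer_id):
    """Return the overlapping object element with the largest layer identifier,
    i.e. the highest display priority."""
    return max(overlapping, key=lambda name: layer_id[name])

# Object element 1 has layer identifier 3 and object element 2 has 7,
# so the three-dimensional model of element 2 is displayed in region A.
assert visible_element(["element_1", "element_2"],
                       {"element_1": 3, "element_2": 7}) == "element_2"
```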
In another embodiment, if it is detected that the three-dimensional models of at least two object elements in the three-dimensional model of the object to be modeled are interspersed with each other, the computer device performs mesh optimization on the meshes contained in the three-dimensional model of at least one of the at least two object elements according to the layer identifiers associated with those object elements (for example, adjusting the positions in three-dimensional space of some of the meshes of the three-dimensional model), obtaining a mesh-optimized three-dimensional model of the object to be modeled in which the three-dimensional models of any two object elements are no longer interspersed with each other. Interpenetration between the three-dimensional models of two object elements can be understood as follows: at least one face in the three-dimensional model of object element 1 passes through at least one face in the three-dimensional model of object element 2.
It can be understood that optimizing the three-dimensional model of the object to be modeled based on the layer identifier associated with the object element can further improve the quality of the three-dimensional model of the object to be modeled.
Fig. 6 is a three-dimensional modeling flowchart provided in an embodiment of the present application. As shown in fig. 6, the three-dimensional modeling method provided by the present application may be cooperatively implemented by a boundary extraction (Polygon) module, a boundary labeling (Boundary) module, a depth prediction (Depth Prediction) module, a boundary stitching (Stitch) module, a template cutting (Base Cut) module, a geometric deformation (Wrap) module, and a post-processing optimization (Post-processing) module. Specifically:
(1) The boundary extraction (Polygon) module performs boundary detection on each planar image in the planar image set to obtain the boundaries of the object elements in the planar image. Optionally, the boundaries of the object elements may be simplified by a trajectory compression algorithm (e.g., the Douglas-Peucker algorithm) to obtain simplified boundaries; a sketch of this simplification is given after this module list.
(2) The boundary labeling (Boundary) module further labels the boundaries of the object elements to indicate which of them are actual boundaries and which are visual boundaries. In one implementation, the actual boundaries in each planar image may be determined based on a modeler's marking operations on the boundaries of the object elements. In another implementation, the actual boundaries in each planar image may be obtained by processing the boundaries of the object elements with a boundary identification model.
(3) The depth prediction (Depth Prediction) module performs depth prediction processing on the planar images in the planar image set to obtain the depth information of each pixel point in the planar images. In one implementation, the depth prediction module may perform the depth prediction through a depth prediction model. To improve the accuracy of the depth prediction model, both the depths of the pixel points and their normal vectors can be supervised during training so that the depth prediction model is jointly optimized; a sketch of such a joint loss is also given after this module list.
(4) The boundary stitching (Stitch) module determines the three-dimensional boundary line of each object element. Specifically, the boundary stitching module may determine the matching relationship between the planar images in the planar image set and the object elements based on the boundary annotation information of the object elements passed by the boundary labeling module, and then determine the three-dimensional boundary line of each object element through the boundary annotation information in the planar images (the first planar image and the second planar image) corresponding to each object element and the depth information of each pixel point passed by the depth prediction module. For the specific implementation, refer to step S506 in fig. 5; details are not repeated here.
(5) The template cutting (Base Cut) module obtains a mesh model of each object element based on the three-dimensional boundary line passed by the boundary stitching module. Specifically, a topology type is determined from the three-dimensional boundary line of the object element, a corresponding mesh template is selected based on that topology type, and the mesh (Mesh) template is then cut along the three-dimensional boundary line of the object element to obtain the mesh model of the object element.
(6) The geometric deformation (Wrap) module recovers the detailed parts of the object elements (such as the folds of clothes) to improve the quality of the three-dimensional model of the object to be modeled. The geometric deformation module solves for the mesh deformation parameters under constraints built from the depth information of each pixel point passed by the depth prediction module, the mesh smoothness, and the reduction degree, and then performs mesh optimization on the mesh model of the object element based on those parameters, obtaining the mesh-optimized mesh model of the object element.
(7) The post-processing optimization (Post-processing) module combines the mesh models of the object elements to obtain the three-dimensional model of the object to be modeled. If mutually interspersed mesh models are detected during the combination, the meshes in the models are adjusted based on the layer identifiers corresponding to the object elements, so that no mutually interspersed mesh models remain in the three-dimensional model of the object to be modeled.
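For reference, a compact implementation of the Douglas-Peucker simplification mentioned for the boundary extraction (Polygon) module might look as follows; the tolerance epsilon, which controls how aggressively a boundary is thinned, is a free parameter of the sketch.

```python
import numpy as np

def douglas_peucker(points, epsilon):
    """Douglas-Peucker polyline simplification: find the point farthest from the
    chord between the endpoints and recurse while that distance exceeds epsilon."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    if norm == 0.0:
        dists = np.linalg.norm(points - start, axis=1)  # degenerate chord
    else:
        # Perpendicular distance of every point to the start-end chord.
        dists = np.abs(chord[0] * (points[:, 1] - start[1])
                       - chord[1] * (points[:, 0] - start[0])) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > epsilon:
        left = douglas_peucker(points[: idx + 1], epsilon)
        right = douglas_peucker(points[idx:], epsilon)
        return np.vstack([left[:-1], right])  # drop the duplicated split point
    return np.vstack([start, end])
```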
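The joint depth-and-normal supervision mentioned for the depth prediction module can take, for example, the following form (a PyTorch sketch): normals are derived from a depth map by finite differences, and the training loss combines an L1 depth term with a cosine term on the normals. Deriving the reference normals from the ground-truth depth map, the padding, and the weight w_normal are assumptions; the patent does not disclose the exact loss.

```python
import torch
import torch.nn.functional as F

def normals_from_depth(depth):
    """Per-pixel normals from a depth map via finite differences.
    depth: (B, 1, H, W); returns (B, 3, H, W) unit normals n ~ (-dz/dx, -dz/dy, 1)."""
    dzdx = F.pad(depth[:, :, :, 1:] - depth[:, :, :, :-1], (0, 1, 0, 0))
    dzdy = F.pad(depth[:, :, 1:, :] - depth[:, :, :-1, :], (0, 0, 0, 1))
    n = torch.cat([-dzdx, -dzdy, torch.ones_like(depth)], dim=1)
    return F.normalize(n, dim=1)

def joint_loss(pred_depth, gt_depth, w_normal=0.5):
    """L1 depth supervision plus cosine distance between the normals derived
    from the predicted and ground-truth depth maps (joint optimization)."""
    cos = F.cosine_similarity(normals_from_depth(pred_depth),
                              normals_from_depth(gt_depth), dim=1)
    return F.l1_loss(pred_depth, gt_depth) + w_normal * (1.0 - cos).mean()
```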
The three-dimensional modeling method described above can be applied to any field involving modeling from planar images, such as rapid modeling of game character clothing. Taking rapid modeling of game character clothing as an example, the three-dimensional modeling method provided by the present application can quickly model the planar images of clothing designed by a game developer, producing a high-quality three-dimensional clothing model and improving game development efficiency.
In practical applications, the three-dimensional modeling scheme provided by the present application can be integrated into three-dimensional modeling software as a plug-in; a modeler can invoke the plug-in to perform three-dimensional modeling on the planar image set of an object to be modeled and obtain its three-dimensional model. The specific process is as follows. The three-dimensional modeling software loads the planar image set and, when loading is complete, obtains the layer list and the layer state information. The modeler can open a target layer editing interface by selecting the target layer. The target layer editing interface includes a boundary extraction button; by triggering it, the modeler can automatically extract the boundaries of the object elements in the target layer. The extracted boundaries may be identified on the planar image in the target layer by line segments of a preset color. The modeler may then mark the boundaries of the object elements in the target layer to indicate which of them are actual boundaries. The three-dimensional modeling software models the object based on the loaded planar image set and the actual boundaries indicated by the modeler, obtaining a modeling result. During modeling, the modeler can choose to model one or more object elements; if multiple object elements are modeled, the resulting modeling result is an integral three-dimensional model composed of the three-dimensional models of the individual object elements.
Fig. 7 is a schematic diagram of a management page of a three-dimensional modeling plug-in provided in an embodiment of the present application. As shown in fig. 7, the management page of the three-dimensional modeling plug-in includes a path selection entry 701, a layer list display area 702, a modeling record viewing entry 703, a modeling result viewing entry 704, and a modeling result clearing control 705. The path selection entry 701 is used to select the path of the planar images to be imported. The layer list display area 702 is used to display the layer list, which may include at least one of the following: a layer selection box 7021 (used to select or deselect a layer), a layer identifier 7022 (used to indicate the order of the layer; a modeler can enter the editing page of the corresponding layer by triggering it), a state display column 7023 (used to display the modeling state of the layer), a layer name 7024, a view 7025 (used to indicate the observation view angle corresponding to the planar image in the layer), and a thumbnail 7026 (used to display a thumbnail of the object elements in the layer). The modeling record viewing entry 703 is used to view modeling records; the modeling result viewing entry 704 is used to view completed modeling results; and the modeling result clearing control 705 is used to clear completed modeling results.
On the basis of the embodiment shown in fig. 2, the embodiment of the present application performs model optimization on the three-dimensional model of the object to be modeled based on the position of the object to be modeled in three-dimensional space, the smoothness constraint condition of the mesh template corresponding to the object to be modeled, and the reduction degree constraint condition of the object to be modeled, which improves the reduction degree of the three-dimensional model's details (such as the folds of clothes) and thus further improves its quality. Optimizing the three-dimensional model of the object to be modeled based on the layer identifiers associated with the object elements further improves model quality. In addition, integrating the three-dimensional modeling scheme provided by the present application into three-dimensional modeling software as a plug-in simplifies the image modeling process and improves three-dimensional modeling efficiency.
While the method of the embodiments of the present application has been described in detail above, to facilitate better implementation of the above-described aspects of the embodiments of the present application, the apparatus of the embodiments of the present application is provided below accordingly.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a three-dimensional modeling apparatus provided in an embodiment of the present application, and the three-dimensional modeling apparatus shown in fig. 8 may be mounted in a computer device, where the computer device may specifically be a terminal device or a server. The three-dimensional modeling apparatus shown in fig. 8 may be used to perform some or all of the functions in the method embodiments described above with respect to fig. 2 and 5. Referring to fig. 8, the three-dimensional modeling apparatus includes:
an obtaining unit 801, configured to obtain a planar image set of an object to be modeled, where the planar image set includes a first planar image and a second planar image; the first plane image and the second plane image are plane images of the object to be modeled under different visual angles;
the obtaining unit 801 is further configured to obtain boundary information of the object to be modeled, where the boundary information comprises first geometric boundary annotation information and second geometric boundary annotation information; the first geometric boundary annotation information is used for indicating the actual boundary of the object to be modeled in the first planar image, and the second geometric boundary annotation information is used for indicating the actual boundary of the object to be modeled in the second planar image;
a processing unit 802, configured to respectively perform depth prediction processing on the planar images in the planar image set to obtain depth information of the object to be modeled;
and the processing unit 802 is further configured to model the object to be modeled according to the boundary information of the object to be modeled and the depth information of the object to be modeled, to obtain a three-dimensional model of the object to be modeled.
In one embodiment, the depth information of the object to be modeled comprises depth information of pixel points of the object to be modeled, which are associated in each plane image of the plane image set; the processing unit 802 is configured to model the object to be modeled according to the boundary information of the object to be modeled and the depth information of the object to be modeled, to obtain a three-dimensional model of the object to be modeled, and specifically configured to:
restoring the position of the pixel point associated with the object to be modeled in the three-dimensional space according to the depth information of the pixel point associated with the object to be modeled in each planar image;
stitching the actual boundaries of the object to be modeled indicated by the first geometric boundary annotation information and the second geometric boundary annotation information, to obtain a three-dimensional boundary line of the object to be modeled;
and generating a three-dimensional model of the object to be modeled through the three-dimensional boundary line of the object to be modeled.
In an embodiment, the processing unit 802 is configured to generate a three-dimensional model of an object to be modeled through a three-dimensional boundary line of the object to be modeled, and specifically to:
determining a grid template corresponding to the object to be modeled according to the topological classification of the three-dimensional boundary line of the object to be modeled;
and cutting the grid template corresponding to the object to be modeled based on the three-dimensional boundary line of the object to be modeled to obtain the three-dimensional model of the object to be modeled.
In one embodiment, the processing unit 802 is further configured to:
acquiring a smoothness constraint condition of a grid template corresponding to the object to be modeled and a reduction degree constraint condition of the object to be modeled;
predicting a grid deformation parameter corresponding to the object to be modeled according to the position of the object to be modeled in three-dimensional space, the smoothness constraint condition of the grid template corresponding to the object to be modeled, and the reduction degree constraint condition of the object to be modeled;
and performing model optimization processing on the three-dimensional model of the object to be modeled according to the grid deformation parameters corresponding to the object to be modeled to obtain the three-dimensional model after the model optimization processing.
In one embodiment, the object to be modeled is composed of M object elements, the planar image set includes a planar image of each object element at a first view angle and a planar image at a second view angle, M is an integer greater than 1; the boundary information of the object to be modeled comprises geometric boundary marking information corresponding to each object element; the geometric boundary marking information corresponding to each object element is used for indicating the actual boundary of the object element in the plane image under the first view angle and the plane image under the second view angle; the depth information of the object to be modeled comprises depth information of M object elements;
the processing unit 802 is configured to model the object to be modeled according to the boundary information of the object to be modeled and the depth information of the object to be modeled, to obtain a three-dimensional model of the object to be modeled, and specifically configured to:
acquiring a matching relation between a plane image of the M object elements under a first view angle and a plane image of the M object elements under a second view angle;
determining boundary information corresponding to each object element according to the matching relation between the planar image of the M object elements under the first view angle and the planar image of the M object elements under the second view angle;
modeling the object elements according to the boundary information of each object element and the depth information of the object elements to obtain three-dimensional models of M object elements;
and stacking the three-dimensional models of the M object elements to obtain the three-dimensional model of the object to be modeled.
In one embodiment, the first planar image is one of a front view and a back view of the object to be modeled, and the second planar image is the other of the two; the processing unit 802 is configured to obtain a matching relationship between the planar images of the M object elements at the first view angle and the planar images of the M object elements at the second view angle, and specifically configured to:
carrying out view transformation processing on the planar images of the M object elements at the first view angle according to the second view angle to obtain M transformed views;
and determining the matching relationship between the planar images of the M object elements at the first view angle and the planar images of the M object elements at the second view angle through the similarity between the boundary of the object element in each transformed view and the boundaries of the object elements in the M planar images at the second view angle.
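A simplified reading of this matching step, sketched below: the view transformation of a front-view silhouette is approximated by a horizontal mirror flip, and each transformed front-view element is then greedily paired with the unused back-view element whose silhouette it overlaps most (IoU). Both the flip and the greedy IoU matching are illustrative simplifications rather than the patent's prescribed procedure.

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two boolean silhouette masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def match_views(front_masks, back_masks):
    """Greedily match each front-view object element to the back-view element
    whose silhouette best overlaps its horizontally flipped silhouette."""
    matches, used = {}, set()
    for i, front in enumerate(front_masks):
        flipped = front[:, ::-1]  # mirror: the element as seen from behind
        score, j = max((mask_iou(flipped, back), j)
                       for j, back in enumerate(back_masks) if j not in used)
        matches[i] = j
        used.add(j)
    return matches
```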
In one embodiment, each object element is associated with a layer identifier, and the layer identifier is used for indicating the display priority of the associated object element; the processing unit 802 is further configured to:
if the three-dimensional models of the at least two object elements have the overlapped area, determining the display priority of the three-dimensional models of the at least two object elements through layer identifiers associated with the at least two object elements;
a three-dimensional model of an object element of which display priority is highest of three-dimensional models of at least two object elements is displayed in the overlapping area.
In one embodiment, each object element is associated with a layer identifier, and the layer identifier is used for indicating the display priority of the associated object element; if it is detected that there is a mutual interpenetration between the three-dimensional models of at least two object elements in the three-dimensional model of the object to be modeled, the processing unit 802 is further configured to:
according to the layer identifiers associated with the at least two object elements, carrying out grid optimization processing on a grid contained in a three-dimensional model of at least one object element in the at least two object elements to obtain a three-dimensional model of an object to be modeled after the grid optimization processing;
and the three-dimensional models of any two object elements in the three-dimensional model of the object to be modeled after the grid optimization processing are not mutually interspersed.
In one embodiment, the depth information of the object to be modeled is obtained by respectively performing depth prediction processing on the plane images in the plane image set by adopting a depth prediction model; the training process of the depth prediction model comprises the following steps:
performing depth prediction processing on target pixel points associated with target objects in the training images by adopting a depth prediction model to obtain depth prediction results corresponding to the target pixel points;
predicting the normal vector of each target pixel point according to the depth prediction result of each target pixel point;
performing joint optimization on the depth prediction model based on the depth difference information and the normal vector difference information to obtain an optimized depth prediction model;
the depth difference information is obtained based on the difference between the depth prediction result of each target pixel point and the corresponding labeling result of the training image; the normal vector difference information is obtained based on the difference between the prediction normal vector of each target pixel point and the true normal vector of the target pixel point.
In an embodiment, the processing unit 802 is configured to obtain boundary information of an object to be modeled, and specifically, to:
respectively carrying out boundary detection on the plane images in the plane image set of the object to be modeled to obtain the boundary of the object to be modeled in each plane image;
and respectively identifying the boundaries of the object to be modeled in each plane image by adopting a geometric boundary identification model to obtain geometric annotation information corresponding to each plane image.
According to an embodiment of the present application, some of the steps involved in the three-dimensional modeling methods shown in fig. 2 and 5 may be performed by the respective units of the three-dimensional modeling apparatus shown in fig. 8. For example, step S201 and step S202 shown in fig. 2 may be executed by the obtaining unit 801 shown in fig. 8, and step S203 and step S204 may be executed by the processing unit 802 shown in fig. 8; step S501, step S502, and step S504 shown in fig. 5 may be executed by the obtaining unit 801 shown in fig. 8, and step S503 and steps S505 to S507 may be executed by the processing unit 802 shown in fig. 8. The units of the three-dimensional modeling apparatus shown in fig. 8 may be combined, separately or entirely, into one or several other units, or one (or more) of them may be further split into multiple functionally smaller units, which can achieve the same operations without affecting the technical effects of the embodiments of the present application. The above units are divided based on logical functions; in practical applications, the function of one unit may be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the present application, the three-dimensional modeling apparatus may also include other units; in practical applications, these functions may likewise be realized with the assistance of other units, and may be realized by the cooperation of multiple units.
According to another embodiment of the present application, the three-dimensional modeling apparatus shown in fig. 8 may be constructed, and the three-dimensional modeling method of the embodiments of the present application may be implemented, by running a computer program (including program code) capable of executing the steps involved in the methods shown in fig. 2 and 5 on a general-purpose computing apparatus, such as a computer device that includes processing and storage elements such as a Central Processing Unit (CPU), a Random Access Memory (RAM), and a Read-Only Memory (ROM). The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and executed by the above-described computing apparatus via that medium.
Based on the same inventive concept, the principle by which the three-dimensional modeling apparatus provided in the embodiments of the present application solves problems, and its beneficial effects, are similar to those of the three-dimensional modeling method in the embodiments of the present application; for brevity, reference may be made to the implementation of the method, and details are not repeated here.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure; the computer device may be a terminal device or a server. As shown in fig. 9, the computer device includes at least a processor 901, a communication interface 902, and a memory 903, which may be connected by a bus or in other ways. The processor 901 (or Central Processing Unit, CPU) is the computing core and control core of the computer device, and can parse various instructions in the computer device and process various data of the computer device. For example, the CPU can parse a power-on/off instruction sent to the computer device and control the computer device to perform the power-on/off operation. As another example, the CPU may transmit various types of interactive data between the internal structures of the computer device. The communication interface 902 may optionally include a standard wired interface or a wireless interface (e.g., WI-FI or a mobile communication interface), and can be controlled by the processor 901 to transmit and receive data; the communication interface 902 may also be used for the transmission and interaction of data within the computer device. The memory 903 (Memory) is the storage device of the computer device, used to store programs and data. It will be appreciated that the memory 903 here can comprise both the internal memory of the computer device and, of course, extended memory supported by the computer device. The memory 903 provides storage space that stores the operating system of the computer device, which may include but is not limited to: an Android system, an iOS system, a Windows Phone system, and the like; this is not limited in the present application.
Embodiments of the present application also provide a computer-readable storage medium (Memory), which is a storage device in a computer device used for storing programs and data. It is understood that the computer-readable storage medium here can include both a built-in storage medium of the computer device and, of course, an extended storage medium supported by the computer device. The computer-readable storage medium provides storage space that stores the processing system of the computer device. A computer program adapted to be loaded and executed by the processor 901 is also stored in this storage space. It should be noted that the computer-readable storage medium may be a high-speed RAM or a non-volatile memory, such as at least one disk memory; optionally, it may also be at least one computer-readable storage medium located remotely from the aforementioned processor.
In one embodiment, the processor 901 performs the following operations by executing the computer program in the memory 903:
acquiring a planar image set of an object to be modeled, wherein the planar image set comprises a first planar image and a second planar image; the first planar image and the second planar image are planar images of the object to be modeled at different view angles;
acquiring boundary information of the object to be modeled, wherein the boundary information comprises first geometric boundary annotation information and second geometric boundary annotation information; the first geometric boundary annotation information is used for indicating the actual boundary of the object to be modeled in the first planar image, and the second geometric boundary annotation information is used for indicating the actual boundary of the object to be modeled in the second planar image;
respectively carrying out depth prediction processing on the plane images in the plane image set to obtain depth information of an object to be modeled;
and modeling the object to be modeled according to the boundary information of the object to be modeled and the depth information of the object to be modeled to obtain a three-dimensional model of the object to be modeled.
As an optional embodiment, the depth information of the object to be modeled includes depth information of pixel points of the object to be modeled, which are associated in each plane image of the plane image set; the specific embodiment of the processor 901, according to the boundary information of the object to be modeled and the depth information of the object to be modeled, modeling the object to be modeled to obtain a three-dimensional model of the object to be modeled is as follows:
restoring the position of the pixel point associated with the object to be modeled in the three-dimensional space according to the depth information of the pixel point associated with the object to be modeled in each planar image;
stitching the actual boundaries of the object to be modeled indicated by the first geometric boundary annotation information and the second geometric boundary annotation information, to obtain a three-dimensional boundary line of the object to be modeled;
and generating a three-dimensional model of the object to be modeled through the three-dimensional boundary line of the object to be modeled.
As an alternative embodiment, the specific embodiment of the processor 901 generating the three-dimensional model of the object to be modeled through the three-dimensional boundary line of the object to be modeled is as follows:
determining a grid template corresponding to the object to be modeled according to the topological classification of the three-dimensional boundary line of the object to be modeled;
and cutting the grid template corresponding to the object to be modeled based on the three-dimensional boundary line of the object to be modeled to obtain the three-dimensional model of the object to be modeled.
As an alternative embodiment, the processor 901 further performs the following operations by executing the computer program in the memory 903:
acquiring a smoothness constraint condition of a grid template corresponding to the object to be modeled and a reduction degree constraint condition of the object to be modeled;
predicting a grid deformation parameter corresponding to the object to be modeled according to the position of the object to be modeled in three-dimensional space, the smoothness constraint condition of the grid template corresponding to the object to be modeled, and the reduction degree constraint condition of the object to be modeled;
and according to the grid deformation parameters corresponding to the object to be modeled, performing model optimization processing on the three-dimensional model of the object to be modeled to obtain the three-dimensional model after the model optimization processing.
As an optional embodiment, the object to be modeled is composed of M object elements, the planar image set includes a planar image of each object element in a first view and a planar image in a second view, M is an integer greater than 1; the boundary information of the object to be modeled comprises geometric boundary marking information corresponding to each object element; the geometric boundary marking information corresponding to each object element is used for indicating the actual boundary of the object element in the plane image under the first view angle and the plane image under the second view angle; the depth information of the object to be modeled comprises depth information of M object elements;
the specific embodiment of the processor 901, according to the boundary information of the object to be modeled and the depth information of the object to be modeled, modeling the object to be modeled to obtain a three-dimensional model of the object to be modeled is as follows:
acquiring a matching relation between a planar image of the M object elements under a first view angle and a planar image of the M object elements under a second view angle;
determining boundary information corresponding to each object element according to the matching relation between the planar image of the M object elements under the first view angle and the planar image of the M object elements under the second view angle;
modeling the object elements according to the boundary information of each object element and the depth information of the object elements to obtain three-dimensional models of M object elements;
and stacking the three-dimensional models of the M object elements to obtain the three-dimensional model of the object to be modeled.
As an optional embodiment, the first planar image is one of a front view and a back view of the object to be modeled, and the second planar image is the other of the two; a specific embodiment in which the processor 901 obtains the matching relationship between the planar images of the M object elements at the first view angle and the planar images of the M object elements at the second view angle is as follows:
carrying out view transformation processing on the planar images of the M object elements at the first view angle according to the second view angle to obtain M transformed views;
and determining the matching relationship between the planar images of the M object elements at the first view angle and the planar images of the M object elements at the second view angle through the similarity between the boundary of the object element in each transformed view and the boundaries of the object elements in the M planar images at the second view angle.
As an optional embodiment, each object element is associated with an layer identifier, and the layer identifier is used to indicate a display priority of the associated object element; the processor 901, by executing the computer program in the memory 903, further performs the following operations:
if the three-dimensional models of the at least two object elements have the overlapped area, determining the display priority of the three-dimensional models of the at least two object elements through layer identifiers associated with the at least two object elements;
and displaying the three-dimensional model of the object element with the highest display priority of the three-dimensional models in the overlapping area.
As an optional embodiment, each object element is associated with an layer identifier, and the layer identifier is used to indicate a display priority of the associated object element; if it is detected that there is a mutual interpenetration between the three-dimensional models of at least two object elements in the three-dimensional model of the object to be modeled, the processor 901 further performs the following operations by executing the computer program in the memory 903:
according to the layer identifiers associated with the at least two object elements, carrying out grid optimization processing on a grid contained in a three-dimensional model of at least one object element in the at least two object elements to obtain a three-dimensional model of an object to be modeled after the grid optimization processing;
and the three-dimensional models of any two object elements in the three-dimensional model of the object to be modeled after the grid optimization processing are not mutually interspersed.
As an optional embodiment, the depth information of the object to be modeled is obtained by respectively performing depth prediction processing on the planar images in the planar image set by using a depth prediction model; the training process of the depth prediction model comprises the following steps:
performing depth prediction processing on target pixel points associated with target objects in the training images by adopting a depth prediction model to obtain depth prediction results corresponding to the target pixel points;
predicting the normal vector of each target pixel point according to the depth prediction result of each target pixel point;
performing joint optimization on the depth prediction model based on the depth difference information and the normal vector difference information to obtain an optimized depth prediction model;
the depth difference information is obtained based on the difference between the depth prediction result of each target pixel point and the corresponding labeling result of the training image; the normal vector difference information is obtained based on the difference between the prediction normal vector of each target pixel point and the true normal vector of the target pixel point.
As an alternative embodiment, a specific embodiment of the processor 901 obtaining the boundary information of the object to be modeled is as follows:
respectively carrying out boundary detection on the plane images in the plane image set of the object to be modeled to obtain the boundary of the object to be modeled in each plane image;
and respectively identifying the boundaries of the object to be modeled in each plane image by adopting a geometric boundary identification model to obtain geometric annotation information corresponding to each plane image.
Based on the same inventive concept, the principle by which the computer device provided in the embodiments of the present application solves problems, and its beneficial effects, are similar to those of the three-dimensional modeling method in the embodiments of the present application; for brevity, reference may be made to the implementation of the method, and details are not repeated here.
The embodiment of the application also provides a computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and the computer program is suitable for being loaded by a processor and executing the three-dimensional modeling method of the method embodiment.
Embodiments of the present application also provide a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the three-dimensional modeling method.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The modules in the device can be merged, divided and deleted according to actual needs.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by hardware under the instruction of a program, and the program may be stored in a computer-readable storage medium, which may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (13)

1. A method of three-dimensional modeling, the method comprising:
acquiring a plane image set of an object to be modeled, wherein the plane image set comprises a first plane image and a second plane image; the first plane image and the second plane image are plane images of the object to be modeled under different view angles;
acquiring boundary information of the object to be modeled, wherein the boundary information comprises first geometric boundary annotation information and second geometric boundary annotation information, the first geometric boundary annotation information is used for indicating an actual boundary of the object to be modeled in the first planar image, and the second geometric boundary annotation information is used for indicating an actual boundary of the object to be modeled in the second planar image;
respectively carrying out depth prediction processing on the plane images in the plane image set to obtain depth information of the object to be modeled;
and modeling the object to be modeled according to the boundary information of the object to be modeled and the depth information of the object to be modeled to obtain a three-dimensional model of the object to be modeled.
2. The method of claim 1, wherein the depth information of the object to be modeled comprises depth information of pixel points of the object to be modeled associated in each planar image of the set of planar images; the modeling the object to be modeled according to the boundary information of the object to be modeled and the depth information of the object to be modeled to obtain a three-dimensional model of the object to be modeled, including:
restoring the position of the pixel point associated with the object to be modeled in the three-dimensional space according to the depth information of the pixel point associated with the object to be modeled in each planar image;
stitching the actual boundaries of the object to be modeled indicated by the first geometric boundary annotation information and the second geometric boundary annotation information, to obtain a three-dimensional boundary line of the object to be modeled;
and generating a three-dimensional model of the object to be modeled through the three-dimensional boundary line of the object to be modeled.
3. The method of claim 2, wherein generating the three-dimensional model of the object to be modeled by a three-dimensional boundary line of the object to be modeled comprises:
determining a grid template corresponding to the object to be modeled according to the topological classification of the three-dimensional boundary line of the object to be modeled;
and cutting the grid template corresponding to the object to be modeled based on the three-dimensional boundary line of the object to be modeled to obtain a three-dimensional model of the object to be modeled.
4. The method of claim 3, wherein the method further comprises:
acquiring a smoothness constraint condition of a grid template corresponding to the object to be modeled and a reduction degree constraint condition of the object to be modeled;
predicting a grid deformation parameter corresponding to the object to be modeled according to the position of the object to be modeled in three-dimensional space, the smoothness constraint condition of the grid template corresponding to the object to be modeled, and the reduction degree constraint condition of the object to be modeled;
and performing model optimization processing on the three-dimensional model of the object to be modeled according to the grid deformation parameters corresponding to the object to be modeled to obtain the three-dimensional model after model optimization processing.
5. The method of claim 1, wherein the object to be modeled is composed of M object elements, the set of planar images includes a planar image at a first perspective and a planar image at a second perspective for each object element, M being an integer greater than 1; the boundary information of the object to be modeled comprises geometric boundary marking information corresponding to each object element; the geometric boundary marking information corresponding to each object element is used for indicating the actual boundary of the object element in the plane image under the first view angle and the plane image under the second view angle; the depth information of the object to be modeled comprises the depth information of the M object elements;
the modeling of the object to be modeled according to the boundary information of the object to be modeled and the depth information of the object to be modeled to obtain the three-dimensional model of the object to be modeled comprises the following steps:
acquiring a matching relation between the planar image of the M object elements under a first view angle and the planar image of the M object elements under a second view angle;
determining boundary information corresponding to each object element according to the matching relation between the planar image of the M object elements under the first view angle and the planar image of the M object elements under the second view angle;
modeling the object elements according to the boundary information of each object element and the depth information of the object elements to obtain three-dimensional models of the M object elements;
and stacking the three-dimensional models of the M object elements to obtain the three-dimensional model of the object to be modeled.
6. The method of claim 5, wherein the first planar image is one of a front view and a back view of the object to be modeled, and the second planar image is the other of the two; the obtaining of the matching relationship between the planar images of the M object elements at the first view angle and the planar images of the M object elements at the second view angle comprises:
carrying out view transformation processing on the planar images of the M object elements at the first view angle according to the second view angle to obtain M transformed views;
and determining the matching relationship between the planar images of the M object elements at the first view angle and the planar images of the M object elements at the second view angle through the similarity between the boundary of the object element in each transformed view and the boundaries of the object elements in the M planar images at the second view angle.
7. The method of claim 5, wherein each object element is associated with a layer identifier, and the layer identifier is used for indicating the display priority of the associated object element; the method further comprises the following steps:
if the three-dimensional models of at least two object elements have an overlapped area, determining the display priority of the three-dimensional models of at least two object elements through layer identifiers associated with the at least two object elements;
and displaying the three-dimensional model of the object element with the highest display priority of the three-dimensional models in the at least two object elements in the overlapping area.
8. The method according to claim 5, wherein each object element is associated with a layer identifier for indicating a display priority of the associated object element; if it is detected that there is a mutual interpenetration between the three-dimensional models of at least two object elements in the three-dimensional model of the object to be modeled, the method further includes:
according to the layer identifiers associated with the at least two object elements, carrying out grid optimization processing on a grid contained in the three-dimensional model of at least one object element in the at least two object elements to obtain a three-dimensional model of the object to be modeled after the grid optimization processing;
and the three-dimensional models of any two object elements in the three-dimensional model of the object to be modeled after the grid optimization processing are not mutually interspersed.
9. The method of claim 1, wherein the depth information of the object to be modeled is obtained by performing depth prediction processing on the planar images in the planar image set respectively by using a depth prediction model; the training process of the depth prediction model comprises the following steps:
carrying out depth prediction processing on target pixel points associated with a target object in a training image by adopting a depth prediction model to obtain a depth prediction result corresponding to the target pixel points;
predicting the normal vector of each target pixel point according to the depth prediction result of each target pixel point;
performing joint optimization on the depth prediction model based on the depth difference information and normal vector difference information to obtain an optimized depth prediction model;
the depth difference information is obtained based on the difference between the depth prediction result of each target pixel point and the labeling result corresponding to the training image; the normal vector difference information is obtained based on the difference between the prediction normal vector of each target pixel point and the real normal vector of the target pixel point.
10. The method of claim 1, wherein the obtaining boundary information of the object to be modeled comprises:
respectively carrying out boundary detection on the plane images in the plane image set of the object to be modeled to obtain the boundary of the object to be modeled in each plane image;
and respectively identifying the boundaries of the object to be modeled in each plane image by adopting a geometric boundary identification model to obtain geometric annotation information corresponding to each plane image.
11. A three-dimensional modeling apparatus, characterized in that the three-dimensional modeling apparatus comprises:
an acquisition unit, configured to acquire a plane image set of an object to be modeled, wherein the plane image set comprises a first plane image and a second plane image; the first plane image and the second plane image are plane images of the object to be modeled under different view angles;
the acquisition unit is further configured to acquire boundary information of the object to be modeled, wherein the boundary information comprises first geometric boundary annotation information and second geometric boundary annotation information; the first geometric boundary annotation information is used for indicating an actual boundary of the object to be modeled in the first planar image, and the second geometric boundary annotation information is used for indicating an actual boundary of the object to be modeled in the second planar image;
a processing unit, configured to respectively perform depth prediction processing on the plane images in the plane image set to obtain the depth information of the object to be modeled;
and the processing unit is further configured to model the object to be modeled according to the boundary information of the object to be modeled and the depth information of the object to be modeled, to obtain a three-dimensional model of the object to be modeled.
12. A computer device, comprising: a memory and a processor;
a memory having a computer program stored therein;
a processor for loading the computer program to implement the three-dimensional modeling method of any of claims 1-10.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program adapted to be loaded by a processor and to perform the three-dimensional modeling method according to any one of claims 1-10.
CN202310161488.5A 2023-02-24 2023-02-24 Three-dimensional modeling method, device, equipment and storage medium Active CN115861572B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310161488.5A CN115861572B (en) 2023-02-24 2023-02-24 Three-dimensional modeling method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115861572A true CN115861572A (en) 2023-03-28
CN115861572B CN115861572B (en) 2023-05-23

Family

ID=85658863

Country Status (1)

Country Link
CN (1) CN115861572B (en)


Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130124148A1 (en) * 2009-08-21 2013-05-16 Hailin Jin System and Method for Generating Editable Constraints for Image-based Models
US20190026956A1 (en) * 2012-02-24 2019-01-24 Matterport, Inc. Employing three-dimensional (3d) data predicted from two-dimensional (2d) images using neural networks for 3d modeling applications and other applications
US20140072170A1 (en) * 2012-09-12 2014-03-13 Objectvideo, Inc. 3d human pose and shape modeling
US20140229143A1 (en) * 2013-02-11 2014-08-14 Ramot At Tel-Aviv University Ltd. Three-dimensional modeling from single photographs
CN104134234A (en) * 2014-07-16 2014-11-05 中国科学技术大学 Full-automatic three-dimensional scene construction method based on single image
US20160071318A1 (en) * 2014-09-10 2016-03-10 Vangogh Imaging, Inc. Real-Time Dynamic Three-Dimensional Adaptive Object Recognition and Model Reconstruction
US20180218535A1 (en) * 2017-02-02 2018-08-02 Adobe Systems Incorporated Generating a three-dimensional model from a scanned object
GB201706499D0 (en) * 2017-04-25 2017-06-07 Nokia Technologies Oy Three-dimensional scene reconstruction
WO2019043734A1 (en) * 2017-09-02 2019-03-07 Fr Tech Innovations Private Limited System and method for generating 360 virtual view of a garment
CN112771539A (en) * 2018-09-25 2021-05-07 马特波特公司 Using three-dimensional data predicted from two-dimensional images using neural networks for 3D modeling applications
WO2020069049A1 (en) * 2018-09-25 2020-04-02 Matterport, Inc. Employing three-dimensional data predicted from two-dimensional images using neural networks for 3d modeling applications
US20220198750A1 (en) * 2019-04-12 2022-06-23 Beijing Chengshi Wanglin Information Technology Co., Ltd. Three-dimensional object modeling method, image processing method, image processing device
CN110288642A (en) * 2019-05-25 2019-09-27 西南电子技术研究所(中国电子科技集团公司第十研究所) Three-dimension object fast reconstructing method based on camera array
CN110930503A (en) * 2019-12-05 2020-03-27 武汉纺织大学 Method and system for establishing three-dimensional model of clothing, storage medium and electronic equipment
CN112818733A (en) * 2020-08-24 2021-05-18 腾讯科技(深圳)有限公司 Information processing method, device, storage medium and terminal
CN111970503A (en) * 2020-08-24 2020-11-20 腾讯科技(深圳)有限公司 Method, device and equipment for three-dimensionalizing two-dimensional image and computer readable storage medium
CN114758044A (en) * 2020-12-28 2022-07-15 北京陌陌信息技术有限公司 Three-dimensional clothing model manufacturing quality control method, equipment and storage medium
CN115147527A (en) * 2021-03-31 2022-10-04 阿里巴巴新加坡控股有限公司 Three-dimensional grid generation model construction method, three-dimensional grid generation method and device
WO2022227875A1 (en) * 2021-04-29 2022-11-03 中兴通讯股份有限公司 Three-dimensional imaging method, apparatus, and device, and storage medium
CN114494381A (en) * 2022-01-21 2022-05-13 北京三快在线科技有限公司 Model training and depth estimation method and device, storage medium and electronic equipment
CN115222917A (en) * 2022-07-19 2022-10-21 腾讯科技(深圳)有限公司 Training method, device and equipment for three-dimensional reconstruction model and storage medium
CN115115805A (en) * 2022-07-21 2022-09-27 深圳市腾讯计算机系统有限公司 Training method, device and equipment for three-dimensional reconstruction model and storage medium
CN115601511A (en) * 2022-12-14 2023-01-13 深圳思谋信息科技有限公司(Cn) Three-dimensional reconstruction method and device, computer equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN115861572B (en) 2023-05-23

Similar Documents

Publication Title
CN111754541B (en) Target tracking method, device, equipment and readable storage medium
CN111553267B (en) Image processing method, image processing model training method and device
CN111626218A (en) Image generation method, device and equipment based on artificial intelligence and storage medium
CN111784845B (en) Virtual try-on method and device based on artificial intelligence, server and storage medium
CN111754396B (en) Face image processing method, device, computer equipment and storage medium
CN112232293A (en) Image processing model training method, image processing method and related equipment
CN109299658B (en) Face detection method, face image rendering device and storage medium
TW201443807A (en) Visual clothing retrieval
CN111914775B (en) Living body detection method, living body detection device, electronic equipment and storage medium
CN113870401B (en) Expression generation method, device, equipment, medium and computer program product
CN111652974A (en) Method, device and equipment for constructing three-dimensional face model and storage medium
CN114758362B (en) Clothing changing pedestrian re-identification method based on semantic perception attention and visual shielding
CN113570684A (en) Image processing method, image processing device, computer equipment and storage medium
CN112419295A (en) Medical image processing method, apparatus, computer device and storage medium
CN109670517A (en) Object detection method, device, electronic equipment and target detection model
CN111950321A (en) Gait recognition method and device, computer equipment and storage medium
CN111507285A (en) Face attribute recognition method and device, computer equipment and storage medium
CN113593001A (en) Target object three-dimensional reconstruction method and device, computer equipment and storage medium
CN114937293A (en) Agricultural service management method and system based on GIS
CN115035367A (en) Picture identification method and device and electronic equipment
CN114764870A (en) Object positioning model processing method, object positioning device and computer equipment
CN113570615A (en) Image processing method based on deep learning, electronic equipment and storage medium
CN112712051A (en) Object tracking method and device, computer equipment and storage medium
CN113537187A (en) Text recognition method and device, electronic equipment and readable storage medium
US20230048147A1 (en) Method and apparatus for processing image signal, electronic device, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40083141
Country of ref document: HK