CN111612880B - Three-dimensional model construction method based on two-dimensional drawing, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111612880B
Authority
CN
China
Prior art keywords
dimensional
image
element object
dimensional element
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010470963.3A
Other languages
Chinese (zh)
Other versions
CN111612880A (en)
Inventor
宋伟菖
熊友谊
熊爱武
熊四明
王勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Yuwan Creative Culture Co ltd
Guangzhou Okay Information Technology Co ltd
Original Assignee
Guangzhou Yuwan Creative Culture Co ltd
Guangzhou Okay Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Yuwan Creative Culture Co ltd, Guangzhou Okay Information Technology Co ltd filed Critical Guangzhou Yuwan Creative Culture Co ltd
Priority to CN202010470963.3A priority Critical patent/CN111612880B/en
Publication of CN111612880A publication Critical patent/CN111612880A/en
Application granted granted Critical
Publication of CN111612880B publication Critical patent/CN111612880B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/02 Non-photorealistic rendering
    • G06T 15/04 Texture mapping
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Abstract

The embodiment of the application discloses a three-dimensional model construction method based on two-dimensional painting, electronic equipment and a storage medium. In the technical scheme provided by the embodiment of the application, the two-dimensional element objects extracted from the two-dimensional drawing image are input into a renderer, which performs graphic extension on all the two-dimensional element objects to obtain corresponding three-dimensional element objects, and the image textures of the two-dimensional element objects are attached to the three-dimensional element objects for final graphic rendering. With this method, modeling of a painting can be carried out rapidly and effectively: automatic modeling and element alignment of the three-dimensional model are achieved through a preset renderer, the modeling efficiency of the painting is improved, and the lead time of a painting modeling project is greatly reduced. The display content of the painting can be better presented through the three-dimensional model, providing the user with stronger artistic immersion, improving the user experience and achieving better artistic propagation.

Description

Three-dimensional model construction method based on two-dimensional drawing, electronic equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a three-dimensional model construction method based on two-dimensional drawing, electronic equipment and a storage medium.
Background
At present, in the process of building a large-scale painting model, constructors usually draw separate models for the different objects in the painting. This is time-consuming, the drawing must be matched one-to-one against the original pattern, the positional correspondence of multiple objects may deviate so that the required accuracy cannot be guaranteed, and aligning the spatial positions of many objects consumes a great deal of time, which inevitably lengthens the project construction period and affects delivery credibility. Therefore, designing a method capable of quickly constructing a three-dimensional model from a two-dimensional painting has become a technical problem to be urgently solved by those skilled in the art.
Disclosure of Invention
The embodiment of the application provides a three-dimensional model construction method and device based on two-dimensional drawing, which extract each element object in a two-dimensional drawing image and perform three-dimensional construction and image rendering on the extracted element objects to obtain a final three-dimensional model of the two-dimensional drawing.
In a first aspect, an embodiment of the present application provides a method for constructing a three-dimensional model based on two-dimensional drawing, including:
acquiring an input two-dimensional drawing image;
extracting each two-dimensional element object in the two-dimensional drawing image, and recording the coordinate position and image texture of the two-dimensional element object;
inputting the two-dimensional element object into a preset renderer for three-dimensional rendering to obtain a corresponding three-dimensional element object;
mapping the image texture to the three-dimensional element object to form a texture model;
and moving the texture model to a corresponding position in a preset space coordinate system according to the coordinate position, and constructing to obtain a three-dimensional space model corresponding to the two-dimensional drawing image.
Further, the extracting each two-dimensional element object in the two-dimensional drawing image includes:
acquiring a background color value of the two-dimensional drawing image, and setting a background floating range according to the background color value;
acquiring pixel color values corresponding to all pixel points in the two-dimensional drawing image, and setting a pixel floating range according to the pixel color values;
and determining each two-dimensional element object according to the background floating range and the pixel floating range.
Further, after the extracting each two-dimensional element object in the two-dimensional drawing image, the method further includes:
converting the two-dimensional element object into a gray element object;
filtering the gray element object;
and obtaining the shape characteristics corresponding to the two-dimensional element object according to the gray element image after the filtering treatment.
Further, the inputting the two-dimensional element object into a preset renderer to perform three-dimensional rendering to obtain a corresponding three-dimensional element object includes:
acquiring shape characteristics of the two-dimensional element object;
performing feature matching on the shape features to obtain type features of the two-dimensional element objects;
and inputting the shape characteristics of the two-dimensional element object to a renderer according to the type characteristics to perform space expansion to obtain a corresponding three-dimensional element object.
Further, after the shape feature of the two-dimensional element object is input to a renderer according to the type feature to perform spatial expansion to obtain a corresponding three-dimensional element object, the method further includes:
and receiving adjustment operation information input by a user, and adjusting the shape of the three-dimensional element object according to the adjustment operation information.
Further, after the extracting each two-dimensional element object in the two-dimensional drawing image, the method further includes:
acquiring area parameters of the two-dimensional drawing image, determining whether the area parameters exceed a preset area value, and executing the next step if the area parameters exceed the preset area value;
performing segmentation operation on the two-dimensional drawing image to obtain a segmented image, wherein the segmented image comprises a two-dimensional element object;
matching the corresponding segmentation serial numbers with the segmentation images;
and inputting the split images into a sequencing window for display according to the split sequence numbers.
Further, after the input two-dimensional drawing image is acquired, the method further includes:
preprocessing the two-dimensional painting image, wherein the preprocessing comprises illumination correction processing, noise reduction processing, brightness contrast adjustment processing and saturation adjustment processing;
the illumination correction processing is processed by an illumination correction equation: ī(x, y) = [i(x, y) − μ]/(cσ), where ī(x, y) represents the image after the illumination correction process, i(x, y) represents the original image, μ represents the mean value of the image, σ represents the standard deviation of the image, and c represents a constant.
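For concreteness, a minimal numpy sketch of this equation follows, assuming a single-channel (or already-flattened) image array; the function name illumination_correct is an illustrative placeholder and does not come from the patent.

    import numpy as np

    def illumination_correct(image: np.ndarray, c: float = 1.0) -> np.ndarray:
        """Apply the illumination correction equation i_bar = (i - mu) / (c * sigma)."""
        i = image.astype(np.float64)
        mu = i.mean()       # mean value of the image
        sigma = i.std()     # standard deviation of the image
        return (i - mu) / (c * sigma)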
In a second aspect, an embodiment of the present application provides a three-dimensional model building apparatus based on two-dimensional drawing, including:
The acquisition module is used for: the method comprises the steps of acquiring an input two-dimensional drawing image;
and an extraction module: the method comprises the steps of extracting each two-dimensional element object in the two-dimensional drawing image, and recording the coordinate position and the image texture of the two-dimensional element object;
and a three-dimensional rendering module: the method comprises the steps of inputting the two-dimensional element object into a preset renderer for three-dimensional rendering to obtain a corresponding three-dimensional element object;
and a texture rendering module: the image texture mapping module is used for mapping the image texture onto the three-dimensional element object to form a texture model;
the construction module comprises: and the texture model is used for moving to the corresponding position in a preset space coordinate system according to the coordinate position, and a three-dimensional space model corresponding to the two-dimensional drawing image is constructed.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a memory and one or more processors;
the memory is used for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the two-dimensional sketch-based three-dimensional model building method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are for performing the two-dimensional sketch-based three-dimensional model building method according to the first aspect.
According to the method, the two-dimensional element objects extracted from the two-dimensional drawing image are input into a renderer, which performs graphic extension on all the two-dimensional element objects to obtain corresponding three-dimensional element objects, and the image textures of the two-dimensional element objects are attached to the three-dimensional element objects for final graphic rendering. In this way, modeling of the painting can be carried out rapidly and effectively: automatic modeling and element alignment of the three-dimensional model are realized through the preset renderer, the modeling efficiency of the painting is improved, and the delivery cycle of the painting modeling project is greatly reduced. In the application embodiment, because the texture information of the two-dimensional painting is collected directly and the collected image textures are attached to the corresponding three-dimensional element objects, the three-dimensional model construction result fits the two-dimensional painting more closely, the display content of the painting can be better presented through the three-dimensional model, better artistic immersion is provided for the user, the user experience is improved, and better artistic propagation is achieved.
Drawings
FIG. 1 is a flow chart of a three-dimensional model construction method based on two-dimensional painting provided in an embodiment of the present application;
FIG. 2 is a schematic flow chart of extraction of a two-dimensional element object provided by an embodiment of the present application;
FIG. 3 is a schematic view showing a background color selection area of a two-dimensional drawing image according to an embodiment of the present application;
FIG. 4 is a flow chart of extraction of shape features provided by an embodiment of the present application;
FIG. 5 is a schematic flow chart of two-dimensional drawing image segmentation according to an embodiment of the present application;
FIG. 6 is a schematic view of a segmented ranking window display provided in an embodiment of the present application;
FIG. 7 is a schematic flow chart of an extension to a three-dimensional elemental object provided by an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating the effect of three-dimensional spatial extension provided by embodiments of the present application;
FIG. 9 is a schematic display diagram of a three-dimensional model building environment window provided by an embodiment of the present application;
fig. 10 is a schematic structural diagram of a three-dimensional model building device based on two-dimensional drawing according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the following detailed description of specific embodiments thereof is given with reference to the accompanying drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the application and not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the matters related to the present application are shown in the accompanying drawings. Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently, or at the same time. Furthermore, the order of the operations may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
In the existing large-scale picture model construction process, independent model drawing is often carried out by a constructor aiming at different objects in a picture, the time consumption is long, the drawing is carried out according to the original pattern in a one-to-one matching correspondence, the position correspondence of a plurality of objects possibly has differences, and the accuracy cannot be guaranteed to reach the standard. Based on the above, the embodiment of the application provides a three-dimensional model construction method based on two-dimensional painting, which is characterized in that a basic two-dimensional element object is formed by acquiring original high-definition two-dimensional painting image data, the coordinate position and image texture of the two-dimensional element object are recorded, then the two-dimensional element object is subjected to three-dimensional extension to obtain a corresponding three-dimensional extension model, the three-dimensional extension model is subjected to model rendering by combining with texture attachment to obtain a three-dimensional image corresponding to the original two-dimensional painting image, and finally three-dimensional model space data of the whole two-dimensional painting image is formed, and the three-dimensional model space data is loaded into a corresponding virtual reality engine to realize virtual reality tour operation. By the method, quick and effective two-dimensional drawing modeling can be realized, the delivery time of two-dimensional drawing modeling projects is greatly reduced, and customer satisfaction is improved.
In this embodiment, the two-dimensional painting refers to an ancient painting image, the ancient painting refers to an ancient painting work, and the ancient painting is a treasure of the Chinese artistic culture and is an important component of the Chinese civilization. In the field of computer technology, ancient paintings are displayed on each display screen in a two-dimensional drawing mode, so that people can appreciate various ancient painting works.
Three-dimensional modeling is a mature technology. An ancient painting is a special kind of artwork that often carries the emotions of its painter; presenting it through three-dimensional modeling gives the public a better environment for expressing and appreciating those emotions, realizes a stereoscopic effect of the ancient painting, improves the user experience, and further serves the artistic spread of ancient painting.
Fig. 1 shows a flowchart of a three-dimensional model building method based on two-dimensional drawing according to an embodiment of the present application, where the three-dimensional model building method based on two-dimensional drawing provided in the embodiment may be executed by a three-dimensional model building device based on two-dimensional drawing, where the three-dimensional model building device based on two-dimensional drawing may be implemented by software and/or hardware, and the three-dimensional model building device based on two-dimensional drawing may be formed by two or more physical entities or may be formed by one physical entity. In general, the three-dimensional model building device based on two-dimensional drawing can be a computer, a mobile phone, a tablet or a background server.
The following description will be made taking a background server as an example of an apparatus that performs a three-dimensional model construction method based on two-dimensional painting. Referring to fig. 1, the method for constructing a three-dimensional model based on two-dimensional painting specifically includes:
s101: an input two-dimensional drawing image is acquired.
The two-dimensional drawing image may be obtained by capturing an image with a camera, or may be a stored two-dimensional drawing image, or may be a two-dimensional drawing image created on a computer. The two-dimensional painting in this embodiment refers to an ancient painting image. In the following description, the ancient painting image is mainly used as a description object, but the scheme protection is not limited to the ancient painting image, and other two-dimensional painting images can be included.
Firstly, carrying out detail scanning on an ancient painting image by a high-definition camera to obtain the content of the ancient painting image for data processing. In the data acquisition process, the influence of illumination and the like or the influence of display may exist, so that further processing is required on the ancient painting image to obtain the ancient painting image which meets the data processing requirement better.
Further, after the captured ancient painting image is acquired, the method further includes:
And preprocessing the ancient painting image, wherein the preprocessing comprises illumination correction processing, noise reduction processing, brightness contrast adjustment processing and saturation adjustment processing. The illumination correction processing is processed by an illumination correction equation: ī(x, y) = [i(x, y) − μ]/(cσ); where ī(x, y) represents the image after the illumination correction process, i(x, y) represents the original image, μ represents the mean value of the image, σ represents the standard deviation of the image, and c represents a constant.
In this embodiment, illumination correction refers to a series of standard processing transformations that convert the image into a fixed standard form, i.e. standardizing the image with respect to characteristics such as translation, rotation and scaling. Illumination correction is needed mainly because the collected ancient painting images exhibit different characteristics under different illumination conditions, so a series of standard transformations is required to obtain images that are more favorable for processing.
The original ancient painting image shot by the camera is first subjected to illumination correction, and the corrected image is then subjected to noise reduction, brightness adjustment, contrast adjustment and saturation adjustment. In this way, the details of the resulting image are clearer, which facilitates constructing an image display effect that better meets the actual requirements.
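The paragraph above can be read as a four-stage preprocessing chain. A sketch under the assumption that OpenCV is used as the image toolkit is given below; the patent names no library, and all parameter values are illustrative only.

    import cv2
    import numpy as np

    def preprocess_painting(image: np.ndarray) -> np.ndarray:
        # Illumination correction (standardization as in the equation above),
        # rescaled back to the 0-255 range so the later steps can work on 8-bit data.
        img = (image.astype(np.float64) - image.mean()) / (1.0 * image.std())
        img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

        # Noise reduction on the color image.
        img = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)

        # Brightness / contrast adjustment (alpha = contrast, beta = brightness).
        img = cv2.convertScaleAbs(img, alpha=1.1, beta=5)

        # Saturation adjustment in HSV space.
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
        hsv[:, :, 1] = np.clip(hsv[:, :, 1] * 1.1, 0, 255)
        return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)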
S102: and extracting each two-dimensional element object in the two-dimensional drawing image, and recording the coordinate position and the image texture of the two-dimensional element object.
In actual processing, a unified coordinate system is first constructed for the ancient painting image, and the image is then input into an element detection station to identify each element in it. In this embodiment, the element detection station is an independent module that can extract features from any image, identify each independent element, and finally output an element comparison table, i.e. all independent two-dimensional element objects. The element detection station in this embodiment is an independent detection module built in advance; it contains element feature models such as pavilion, river and character, and the corresponding element content in the ancient painting image is identified by matching the element features in the image against these models.
Specifically, as shown in fig. 2, fig. 2 is a schematic flow chart of extraction of two-dimensional element objects provided in an embodiment of the present application, where the extraction of each two-dimensional element object in the two-dimensional drawing image includes:
s102a: and obtaining a background color value of the ancient painting image, and setting a background floating range according to the background color value.
The background color of the ancient painting image is determined by a module that works like a color-picking pen. Because the color differences in most ancient paintings are small, the background color is difficult to distinguish by ordinary means, so a color-picking-pen-like function with a reduced tolerance is adopted to locate the background color reliably. The color value at the picking point yields the corresponding color code, which is taken as the background color. In implementation, the background color can be identified automatically, and a floating range of the background RGB color value can be set. The background color selection range is shown in fig. 3, which is a schematic view of the background color selection area of the ancient painting image according to the embodiment of the present application.
S102b: and acquiring pixel color values corresponding to all pixel points in the two-dimensional drawing image, and setting a pixel floating range according to the pixel color values.
The method mainly comprises the steps of obtaining color codes of corresponding pixel points in all ancient painting images, obtaining RGB color values of corresponding positions through all coordinate positions in the ancient painting, and setting an RGB color value floating range of an element object.
S102c: and determining each two-dimensional element object according to the background floating range and the pixel floating range.
Specifically, the selection of all two-dimensional element objects is determined according to the background floating range and the element object floating range. During element object selection, elements can be delimited through the color value differences between element objects and the spacing of element coordinate positions within the color value range, so that each element object is obtained independently. This step is an automatic identification process; the corresponding parameters can be preset during the preprocessing in the first step.
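A sketch of steps S102a to S102c, assuming that the "floating range" is a plus/minus tolerance around an RGB color value; the OpenCV calls, the color-picking point and the tolerance values are illustrative assumptions.

    import cv2
    import numpy as np

    def extract_elements(image: np.ndarray, bg_point=(0, 0), bg_tol=12, min_area=50):
        """Return (bounding box, texture patch) for each two-dimensional element object."""
        # S102a: take the background color value at a color-picking point and set the
        # background floating range as a +/- tolerance around it.
        bg_color = image[bg_point[1], bg_point[0]].astype(np.int16)
        lower = np.clip(bg_color - bg_tol, 0, 255).astype(np.uint8)
        upper = np.clip(bg_color + bg_tol, 0, 255).astype(np.uint8)

        # S102b/S102c: pixels whose color values fall outside the background floating
        # range are treated as belonging to element objects.
        background_mask = cv2.inRange(image, lower, upper)
        element_mask = cv2.bitwise_not(background_mask)

        # Separate the mask into independent element objects.
        contours, _ = cv2.findContours(element_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        elements = []
        for contour in contours:
            x, y, w, h = cv2.boundingRect(contour)
            if w * h < min_area:
                continue  # skip tiny specks (illustrative area threshold)
            elements.append(((x, y, w, h), image[y:y + h, x:x + w].copy()))
        return elements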
Further, fig. 4 is a schematic flow chart of extraction of shape features according to an embodiment of the present application, as shown in fig. 4, after the extracting each two-dimensional element object in the two-dimensional drawing image, the method further includes:
s102d: converting the two-dimensional element object into a gray element object;
s102e: filtering the gray element object;
s102f: and obtaining the shape characteristics corresponding to the two-dimensional element object according to the gray element image after the filtering treatment.
This step mainly serves to acquire the shape features of the corresponding two-dimensional element object. When the shape features are extracted, the acquired RGB-format image is first converted to grayscale, and XY-direction filter factors are then extracted for filtering, so that the corresponding shape features are obtained.
The extraction of features of the selected region in this embodiment includes color feature extraction, shape feature extraction, image information retrieval and the like. Color feature extraction includes converting the img image format of the element object into the double data type and extracting the r, g and b components of the two-dimensional element object, expressed as: rmatrix = sourceimg(:,:,1); gmatrix = sourceimg(:,:,2); bmatrix = sourceimg(:,:,3). The color values in the corresponding selection area are extracted through these functions.
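A sketch of the grayscale conversion, filtering and r/g/b component extraction described above; the Sobel operator is assumed here as one possible XY-direction filter factor, and the helper name element_features is illustrative.

    import cv2
    import numpy as np

    def element_features(element_patch: np.ndarray):
        """Return (shape_edges, r, g, b) for one two-dimensional element object."""
        # Convert the element object into a gray element object and filter it.
        gray = cv2.cvtColor(element_patch, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5, 5), 0)

        # XY-direction filter factors (Sobel gradients assumed here) give the shape feature.
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
        shape_edges = np.hypot(gx, gy)

        # Color features: convert to double and extract the r, g and b components,
        # mirroring rmatrix/gmatrix/bmatrix in the description (OpenCV stores BGR).
        img = element_patch.astype(np.float64) / 255.0
        b, g, r = img[:, :, 0], img[:, :, 1], img[:, :, 2]
        return shape_edges, r, g, b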
And extracting each element characteristic in the ancient painting according to the elements in the element comparison table, representing the characteristic, recording the coordinate positions of each element object in the same coordinate system, and carrying out one-to-one element analysis on the element objects according to the identification codes.
Further, fig. 5 is a schematic flow chart of two-dimensional drawing image segmentation provided in the embodiment of the present application, as shown in fig. 5, after the extracting each two-dimensional element object in the two-dimensional drawing image, the method further includes:
acquiring area parameters of the two-dimensional drawing image, determining whether the area parameters exceed a preset area value, and executing the next step if the area parameters exceed the preset area value;
performing segmentation operation on the two-dimensional drawing image to obtain a segmented image, wherein the segmented image comprises a two-dimensional element object;
Matching the corresponding segmentation serial numbers with the segmentation images;
and inputting the split images into a sequencing window for display according to the split sequence numbers.
Each of the divided images contains complete two-dimensional element objects. During segmentation, a single element object must not be split into two parts, for example the same person or the same boat being cut in half, as this would make the subsequent texture attachment and three-dimensional expansion operations inconvenient.
Fig. 6 is a schematic view of the segmented ranking window according to an embodiment of the present application. The segmented ancient painting pieces are placed in corresponding display boxes for display, and each piece is marked; when a piece is selected for operation, the operation tools in the operation panel of fig. 6 can be used for further processing.
Specifically, before element analysis, whether the ancient painting has a segmentation requirement is judged, namely, the area of the two-dimensional graph of the plane of the ancient painting is calculated through the coordinate system, if the area of the ancient painting exceeds a certain range, the ancient painting is segmented according to each independent element object to form each independent ancient painting piece, and a segmentation sequence number is formed according to the segmentation position. The sorting and position searching of the divided ancient painting pieces can be facilitated for a user through the division sequence numbers.
The segmentation standard is that an element object must not be cut apart when the ancient painting is divided. If the image does not exceed the area range, it belongs to the range that can be analyzed and operated on directly. The segmented result (or the unsegmented image) is input into a type analysis module for element object type analysis; the type analysis module is connected to an ancient-painting element object type database and performs element object feature matching to obtain the type of each element object, including characters, trees, pavilions, boats and the like. The type analysis module can thus identify the type of each specific object in the ancient painting.
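A sketch of the segmentation rule that no element object may be cut apart: vertical cut positions are chosen only in gaps between element bounding boxes, and each resulting piece receives a segmentation sequence number. The gap-based cutting strategy is an assumption; the patent only states the rule, not the algorithm.

    def split_without_cutting(image_width, element_boxes, max_piece_width):
        """Return [(sequence_number, x_start, x_end), ...] so that no element box is cut.

        element_boxes: (x, y, w, h) bounding boxes of the two-dimensional element objects.
        """
        # x positions that would pass through some element object are forbidden cuts.
        forbidden = set()
        for x, _, w, _ in element_boxes:
            forbidden.update(range(x + 1, x + w))

        pieces, start = [], 0
        while start < image_width:
            cut = min(start + max_piece_width, image_width)
            # Move the cut left until it no longer passes through an element object
            # (falls back to a narrow piece if an element is wider than max_piece_width).
            while cut > start + 1 and cut in forbidden:
                cut -= 1
            pieces.append((len(pieces) + 1, start, cut))  # segmentation sequence number
            start = cut
        return pieces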
S103: and inputting the two-dimensional element object into a preset renderer for three-dimensional rendering to obtain a corresponding three-dimensional element object.
As shown in fig. 7, fig. 7 is a schematic flow chart of expanding a two-dimensional element object into a three-dimensional element object. The inputting of the two-dimensional element object into a renderer for three-dimensional rendering to obtain the corresponding three-dimensional element object includes:
s103a: acquiring shape characteristics of the two-dimensional element object;
s103b: performing feature matching on the shape features to obtain type features of the two-dimensional element objects;
S103c: and inputting the shape characteristics of the two-dimensional element object to a renderer according to the type characteristics to perform space expansion to obtain a corresponding three-dimensional element object.
The renderer is the core of the 3D engine and enables the three-dimensional expansion of two-dimensional element objects. In this embodiment, the renderer is built on a model library formed from a plurality of ancient painting element models; the rendering process interfaces with this model library and extracts data from it, and the renderer then performs three-dimensional expansion of the two-dimensional elements in the ancient painting image based on the image features constructed in advance.
This step mainly constructs the basic rendering models: before shape recognition is carried out, models are built for all the image features that appear in ancient painting images, such as houses, trees, figures, roads, flowers and birds, and shops, so that a basic framework exists for each case. When the recognized shape is a house, it is further determined whether the house is an official residence, a folk house or a shop; based on the type it belongs to, the house is expanded in space according to the acquired two-dimensional image to obtain a corresponding three-dimensional image, forming an adjustable space model. As shown in fig. 8, fig. 8 is a schematic diagram of the effect of three-dimensional expansion provided in the embodiment of the present application, i.e. the change from two dimensions to three dimensions.
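A sketch of the space-expansion idea: the type feature selects a base height from an assumed element model library, and the two-dimensional silhouette is extruded into an adjustable prism. The renderer internals are not disclosed in the patent, so the mesh representation (vertex list plus side faces) and the height table are purely illustrative.

    import numpy as np

    # Illustrative base heights from an assumed ancient-painting element model library.
    BASE_HEIGHTS = {"house": 3.0, "tree": 4.0, "person": 1.7, "boat": 1.5}

    def expand_to_3d(silhouette_xy: np.ndarray, element_type: str):
        """Extrude a 2D silhouette polygon (N x 2 array) into a simple prism mesh."""
        height = BASE_HEIGHTS.get(element_type, 2.0)
        n = len(silhouette_xy)

        # Bottom ring (z = 0) followed by top ring (z = height).
        bottom = np.column_stack([silhouette_xy, np.zeros(n)])
        top = np.column_stack([silhouette_xy, np.full(n, height)])
        vertices = np.vstack([bottom, top])

        # Side faces as quads (bottom_i, bottom_i+1, top_i+1, top_i); caps, roofs and
        # other details would come from the matched base model and later adjustment.
        faces = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
        return vertices, faces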
Model construction alone cannot completely meet actual requirements: although many pictures contain a small house, the forms of those houses differ and there are certain differences in height and shape, so the three-dimensional image needs further adjustment. Therefore, what is constructed when the model is generated is not the final space model but an adjustable space model. Further, after the shape feature is input to the renderer and matched with a pre-constructed shape model for spatial expansion to obtain a corresponding three-dimensional element object, the method further comprises:
s1031: and receiving adjustment operation information input by a user, and adjusting the shape of the three-dimensional element object according to the adjustment operation information.
According to the size of a single element object, a three-dimensional drawing environment is built at that size and the operation jumps to an independent window adapted to the element object, without changing the size of the element object itself; a three-dimensional model is then built through independent multi-element rendering by the renderer, and the corresponding model is resized by a stretching operation. As shown in fig. 9, fig. 9 is a schematic display diagram of a three-dimensional model building environment window provided in the embodiment of the present application. Multiple operations on the three-dimensional model can be carried out in this window, and by fine-tuning the model in the three-dimensional environment window, more accurate three-dimensional features that better match the ancient painting are obtained. By three-dimensionally expanding all elements appearing in the ancient painting, a three-dimensional model of every image in the ancient painting can be obtained, which facilitates the later construction of the complete ancient painting model.
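A sketch of applying a user's adjustment operation as per-axis stretch factors about the model centroid; the format of the adjustment operation information is an assumption.

    import numpy as np

    def apply_adjustment(vertices: np.ndarray, stretch=(1.0, 1.0, 1.2)) -> np.ndarray:
        """Stretch an N x 3 vertex array about its centroid by per-axis factors."""
        centroid = vertices.mean(axis=0)
        return (vertices - centroid) * np.asarray(stretch) + centroid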
S104: mapping the image texture to the three-dimensional element object to form a texture model; .
S105: and moving the texture model to a corresponding position in a preset space coordinate system according to the coordinate position, and constructing and obtaining an ancient painting three-dimensional space model corresponding to the two-dimensional painting image. .
In this embodiment, the image texture is actually a two-dimensional array whose elements are color values. The individual color values are referred to as texture elements or texels. Each texel has a unique address in the texture, which can be regarded as a column and row value, denoted by U and V respectively. Texture coordinates are located in texture space, i.e. relative to the (0, 0) position of the texture. When a texture is applied to a primitive, its texel addresses must be mapped into the object coordinate system and then translated to the screen coordinate system or pixel locations.
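A sketch of the texel addressing just described: texture coordinates (u, v) in [0, 1] are mapped to column and row addresses in the two-dimensional texel array and the color value is read back; nearest-neighbour sampling is assumed.

    import numpy as np

    def sample_texel(texture: np.ndarray, u: float, v: float):
        """Map (u, v) texture coordinates to a texel address and return its color value."""
        rows, cols = texture.shape[:2]
        col = min(int(u * cols), cols - 1)   # U addresses the column
        row = min(int(v * rows), rows - 1)   # V addresses the row
        return texture[row, col]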
In steps S104 and S105 of this embodiment, two operations are performed: attaching the two-dimensional textures and building the three-dimensional space coordinate system. For two-dimensional texture attachment, since all ancient painting pieces were segmented earlier, the two-dimensional textures corresponding to the pieces are attached to the corresponding three-dimensional models according to the segmentation sequence numbers to form the final three-dimensional models. The three-dimensional model obtained by this attachment is already identical in color and shape to the display in the two-dimensional image, i.e. the two-dimensional to three-dimensional transformation is achieved. The segmentation sequence numbers guarantee the logical order of texture attachment and thereby improve attachment efficiency; without them, precise alignment could not be achieved when attaching the two-dimensional textures. Automatic texture attachment is realized by setting the segmentation sequence numbers: each texture and each model corresponds to a specific sequence number, so automatic one-to-one attachment can be achieved.
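A sketch of sequence-number-driven automatic attachment: textures and three-dimensional models keyed by the same segmentation sequence number are paired one-to-one; the dictionary data structures are an assumption.

    def attach_textures_by_serial(textures_by_serial: dict, models_by_serial: dict):
        """Pair each texture with the 3D model carrying the same segmentation number."""
        textured_models = {}
        for serial, model in models_by_serial.items():
            texture = textures_by_serial[serial]   # one-to-one match by sequence number
            textured_models[serial] = (model, texture)
        return textured_models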
The above concerns texture attachment. This step further comprises constructing the space coordinate system and realizing the final spatial display by mapping the obtained three-dimensional models one-to-one to their corresponding coordinate positions. If the two-dimensional ancient painting has been divided, then after the three-dimensional model formed from a piece is moved to the corresponding position of the two-dimensional painting, the segmented piece has two coordinates: the first coordinate is the position coordinate information in the current coordinate system of the piece, and the second coordinate is the position coordinate information in the coordinate system of the whole ancient painting. An association relation between the two coordinates is established for each segmented piece to facilitate conversion between them. After the three-dimensional model is moved to the corresponding position of the two-dimensional ancient painting, a first z-axis coordinate is generated, and a corresponding second z-axis coordinate is generated according to the association between the first and second coordinates in the two-dimensional coordinate system, thereby obtaining the spatial coordinate position of the corresponding three-dimensional model in the space coordinate system. The spatial position information of the element objects obtained in this way is the basis of the data display; during data transmission, all information content can be displayed at its exact position through this spatial position information. In this embodiment, the space coordinate system is built based on the basic size of the ancient painting and the size of the texture models. Assuming the area of the ancient painting is 5, the combined area of the independent elements in the painting is 2 and the volume after the texture models are built is 8, the space volume of the ancient painting should be more than 125 cubic units.
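A sketch of the association between the two coordinates of a segmented piece, assuming the association is a simple translation by the piece's offset in the whole-painting coordinate system; the z value generated after the model is moved is carried over unchanged.

    def local_to_global(local_xy, piece_offset, z_local):
        """Convert a piece-local (first) coordinate into a whole-painting (second) coordinate.

        local_xy:     (x, y) within the segmented ancient-painting piece
        piece_offset: (x, y) origin of the piece in the whole-painting coordinate system
        z_local:      z coordinate generated after the 3D model is moved into place
        """
        x, y = local_xy
        ox, oy = piece_offset
        # The association between the two coordinate systems is a simple translation here;
        # the z axis is shared, so the second z coordinate equals the first.
        return (x + ox, y + oy, z_local)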
The construction of the two-dimensional texture attachment and the spatial coordinate system can be flexibly processed, and the sequence of the two can be adjusted according to the actual situation. For the ancient painting with large size, texture attachment can be carried out on the basis of segmentation, and the attachment is associated to a space coordinate system. Aiming at the small size of the ancient painting, the texture attachment can be carried out after a space coordinate system is built.
After all the two-dimensional element objects are transformed into three-dimensional element objects through texture attachment and the construction of the space coordinate system, the three-dimensional element objects are moved to their corresponding positions in the space coordinate system to obtain a complete three-dimensional model of the ancient painting, and three-dimensional information display is realized through this model.
S106: and obtaining model space data of the ancient painting three-dimensional model, and loading the model space data to a virtual reality engine for display.
After the model space data of the three-dimensional model of the ancient painting is obtained, the information is displayed. The display carrier can take many forms: three-dimensional display can be performed directly on a screen, or, more preferably, the three-dimensional data can be displayed through virtual reality glasses, with all the model space data loaded into the corresponding virtual reality engine so that a viewer wearing the glasses has an immersive feeling. For example, when the obtained three-dimensional model data of "Along the River During the Qingming Festival" is input into the corresponding virtual reality engine, the user can browse all the real content of the painting through the virtual reality glasses. The color textures and model sizes built through the three-dimensional model are essentially the same as those in the image, but browsing in virtual reality achieves a rather different effect: the user can stroll through streets and over bridges, watch all the commodity transactions of the Song dynasty and the various scenes taking place at that moment, which greatly increases the fun of the tour and its sense of reality.
According to the method, the two-dimensional element objects in the extracted ancient painting image are input to a renderer to conduct graphic extension on all the two-dimensional element objects to obtain corresponding three-dimensional element objects, and the image textures of the obtained two-dimensional element objects are attached to the three-dimensional element objects to conduct final graphic rendering. By the method, the ancient painting modeling can be rapidly and effectively performed, the delivery time of the ancient painting modeling project is greatly reduced, and the customer satisfaction is improved.
On the basis of the above embodiment, fig. 10 is a schematic structural diagram of a three-dimensional model building device based on two-dimensional drawing according to the embodiment of the present application. Referring to fig. 10, the three-dimensional model building apparatus based on two-dimensional drawing provided in this embodiment specifically includes:
the acquisition module 21: the method comprises the steps of acquiring an input two-dimensional drawing image;
extraction module 22: the method comprises the steps of extracting each two-dimensional element object in the two-dimensional drawing image, and recording the coordinate position and the image texture of the two-dimensional element object;
three-dimensional rendering module 23: the method comprises the steps of inputting the two-dimensional element object into a preset renderer for three-dimensional rendering to obtain a corresponding three-dimensional element object;
texture rendering module 24: the image texture mapping module is used for mapping the image texture onto the three-dimensional element object to form a texture model;
Building block 25: and the texture model is used for moving to the corresponding position in a preset space coordinate system according to the coordinate position, and an ancient painting three-dimensional space model corresponding to the two-dimensional painting image is constructed.
Further, the extracting each two-dimensional element object in the two-dimensional drawing image includes:
a first color acquisition module: the background color value is used for acquiring the two-dimensional drawing image, and a background floating range is set according to the background color value;
a second color acquisition module: the method comprises the steps of obtaining pixel color values corresponding to all pixel points in the two-dimensional drawing image, and setting a pixel floating range according to the pixel color values;
and a determination module: for determining respective two-dimensional elemental objects from the background floating range and the pixel floating range.
Further, after the extracting each two-dimensional element object in the two-dimensional drawing image, the method further includes:
area acquisition module: the area parameter is used for acquiring the ancient painting image, determining whether the area parameter exceeds a preset area value, and executing the next module if the area parameter exceeds the preset area value;
and a segmentation module: the method comprises the steps of performing segmentation operation on the ancient painting image to obtain a segmented image, wherein the segmented image comprises a two-dimensional element object;
And a matching module: the segmentation sequence numbers are used for matching the corresponding segmentation images;
and a display module: and the segmentation images are input into a sequencing window for display according to the segmentation sequence numbers.
According to the method, the two-dimensional element objects extracted from the two-dimensional drawing image are input into a renderer, which performs graphic extension on all the two-dimensional element objects to obtain corresponding three-dimensional element objects, and the image textures of the two-dimensional element objects are attached to the three-dimensional element objects for final graphic rendering. In this way, modeling of painting works can be carried out rapidly and effectively: automatic modeling and element alignment of the three-dimensional model are achieved through the preset renderer, the modeling efficiency of the painting is improved, and the lead time of the painting modeling project is greatly reduced. In the application embodiment, the collected image textures are attached to the corresponding three-dimensional element objects, so the three-dimensional model construction result fits the two-dimensional painting more closely, the display content of the painting can be better presented through the three-dimensional model, better artistic immersion is provided for the user, the user experience is improved, and better artistic propagation is achieved.
The three-dimensional model construction device based on the two-dimensional drawing provided by the embodiment of the application can be used for executing the three-dimensional model construction method based on the two-dimensional drawing provided by the embodiment of the application, and has corresponding functions and beneficial effects.
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and referring to fig. 11, the electronic device includes: processor 31, memory 32, communication module 33, input device 34 and output device 35. The number of processors 31 in the electronic device may be one or more and the number of memories 32 in the electronic device may be one or more. The processor 31, memory 32, communication module 33, input device 34 and output device 35 of the electronic device may be connected by a bus or other means.
The memory 32 is a computer readable storage medium, and may be used to store a software program, a computer executable program, and a module corresponding to the three-dimensional model construction method based on two-dimensional painting according to any embodiment of the present application (for example, the acquisition module 21, the extraction module 22, the three-dimensional rendering module 23, the texture rendering module 24, and the construction module 25 in the three-dimensional model construction device based on two-dimensional painting). The memory 32 may mainly include a storage program area that may store an operating system, at least one application program required for functions, and a storage data area; the storage data area may store data created according to the use of the device, etc. In addition, memory 32 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory may further include memory remotely located with respect to the processor, the remote memory being connectable to the device through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The communication module 33 is used for data transmission.
The processor 31 executes various functional applications of the apparatus and data processing by executing software programs, instructions and modules stored in the memory 32, i.e., implements the above-described two-dimensional drawing-based three-dimensional model construction method.
The input means 34 may be used to receive entered numeric or character information and to generate key signal inputs related to user settings and function control of the device. The output means 35 may comprise a display device such as a display screen.
The electronic equipment provided by the embodiment can be used for executing the three-dimensional model construction method based on the two-dimensional drawing, and has corresponding functions and beneficial effects.
The present embodiments also provide a storage medium containing computer executable instructions, which when executed by the computer processor 31, are for performing a two-dimensional painting-based three-dimensional model construction method comprising:
acquiring an input two-dimensional drawing image; extracting each two-dimensional element object in the two-dimensional drawing image, and recording the coordinate position and image texture of the two-dimensional element object;
Inputting the two-dimensional element object into a preset renderer for three-dimensional rendering to obtain a corresponding three-dimensional element object; mapping the image texture to the three-dimensional element object to form a texture model;
and moving the texture model to a corresponding position in a preset space coordinate system according to the coordinate position, and constructing to obtain a two-dimensional drawing three-dimensional space model corresponding to the two-dimensional drawing image.
Storage media: any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROM, floppy disks or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory, magnetic media (e.g., a hard disk) or optical storage; registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or may be located in a second, different computer system connected to the first computer system through a network such as the internet. The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media residing in different locations (e.g., in different computer systems connected by a network). The storage medium may store program instructions (e.g., embodied as a computer program) executable by the one or more processors 31.
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present application is not limited to the three-dimensional model building method based on two-dimensional painting as described above, and may also perform the relevant operations in the three-dimensional model building method based on two-dimensional painting provided in any embodiment of the present application.
The three-dimensional model construction device based on two-dimensional drawing, the storage medium and the electronic device provided in the foregoing embodiments may execute the three-dimensional model construction method based on two-dimensional drawing provided in any embodiment of the present application, and technical details not described in detail in the foregoing embodiments may be referred to the three-dimensional model construction method based on two-dimensional drawing provided in any embodiment of the present application.
The foregoing description is only of the preferred embodiments of the present application and the technical principles employed. The present application is not limited to the specific embodiments described herein, but is capable of numerous obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the present application. Therefore, while the present application has been described in connection with the above embodiments, the present application is not limited to the above embodiments, but may include many other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the claims.

Claims (8)

1. A three-dimensional model construction method based on two-dimensional painting is characterized by comprising the following steps:
acquiring an input two-dimensional drawing image; extracting each two-dimensional element object in the two-dimensional drawing image, and recording the coordinate position and image texture of the two-dimensional element object; wherein the extracting each two-dimensional element object in the two-dimensional drawing image comprises: acquiring a background color value of the two-dimensional drawing image, setting a background floating range according to the background color value, acquiring a pixel color value corresponding to each pixel point in the two-dimensional drawing image, setting a pixel floating range according to the pixel color value, and determining each two-dimensional element object according to the background floating range and the pixel floating range;
inputting the two-dimensional element object into a preset renderer for three-dimensional rendering to obtain a corresponding three-dimensional element object; wherein, include: acquiring shape characteristics of the two-dimensional element object, performing characteristic matching on the shape characteristics to obtain type characteristics of the two-dimensional element object, and inputting the shape characteristics of the two-dimensional element object to a renderer according to the type characteristics to perform space expansion to obtain a corresponding three-dimensional element object; mapping the image texture to the three-dimensional element object to form a texture model;
And moving the texture model to a corresponding position in a preset space coordinate system according to the coordinate position, and constructing to obtain a three-dimensional space model corresponding to the two-dimensional drawing image.
2. The two-dimensional sketch-based three-dimensional model construction method according to claim 1, further comprising, after the extracting each two-dimensional element object in the two-dimensional sketch image:
converting the two-dimensional element object into a gray element object;
filtering the gray element object;
and obtaining the shape features corresponding to the two-dimensional element object from the filtered gray element object.
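Purely as an illustrative sketch of claim 2 (not part of the claims), assuming OpenCV is available, with Gaussian blurring standing in for the filtering processing and Hu moments as one possible shape descriptor, since the claim names neither a specific filter nor a specific feature:

```python
import cv2

def shape_features_from_element(element_texture_bgr):
    """Grayscale conversion, filtering, and shape-feature extraction (claim 2 sketch).
    Gaussian blur and Hu moments are assumptions for illustration only."""
    gray = cv2.cvtColor(element_texture_bgr, cv2.COLOR_BGR2GRAY)  # gray element object
    filtered = cv2.GaussianBlur(gray, (5, 5), 0)                  # filtering processing
    _, binary = cv2.threshold(filtered, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # OpenCV 4.x signature: returns (contours, hierarchy).
    contours, _hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                            cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(largest)).flatten()            # shape features
    return {"contour": largest, "hu_moments": hu}
```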
3. The two-dimensional drawing-based three-dimensional model construction method according to claim 1, further comprising, after the shape features of the two-dimensional element object are input to the renderer according to the type features for spatial expansion to obtain the corresponding three-dimensional element object:
and receiving adjustment operation information input by a user, and adjusting the shape of the three-dimensional element object according to the adjustment operation information.
4. The two-dimensional drawing-based three-dimensional model construction method according to claim 1, further comprising, after extracting each two-dimensional element object in the two-dimensional drawing image:
acquiring area parameters of the two-dimensional drawing image, determining whether the area parameters exceed a preset area value, and executing the next step if the area parameters exceed the preset area value;
performing a segmentation operation on the two-dimensional drawing image to obtain segmented images, wherein each segmented image comprises a two-dimensional element object;
matching a corresponding segmentation sequence number with each segmented image;
and inputting the segmented images into a sequencing window for display according to the segmentation sequence numbers.
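An illustrative sketch of the area check and segmentation of claim 4 (not part of the claims); the tile-based split, the preset area value and the helper names are assumptions made only for this example, since the claim does not prescribe how the image is divided:

```python
def segment_if_large(image_bgr, max_area=2_000_000, tile=1024):
    """If the drawing's area parameters exceed a preset area value, split it into
    segmented images and pair each with a segmentation sequence number, which can
    then drive display order in a sequencing window (claim 4 sketch)."""
    h, w = image_bgr.shape[:2]
    if h * w <= max_area:                  # area does not exceed the preset value
        return [(0, image_bgr)]
    segments = []
    seq = 0
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            segments.append((seq, image_bgr[y:y + tile, x:x + tile]))
            seq += 1                       # matched segmentation sequence number
    return segments                        # display in sequence-number order
```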
5. The two-dimensional drawing-based three-dimensional model construction method according to any one of claims 1 to 4, further comprising, after the acquiring the input two-dimensional drawing image:
preprocessing the two-dimensional drawing image, wherein the preprocessing comprises illumination correction processing, noise reduction processing, brightness and contrast adjustment processing, and saturation adjustment processing;
wherein the illumination correction processing is performed by an illumination correction equation: Ī(x, y) = [I(x, y) - μ]/(cσ); where Ī(x, y) represents the image after the illumination correction processing, I(x, y) represents the original image, μ represents the mean value of the image, σ represents the standard deviation of the image, and c represents a constant.
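For reference only, the illumination correction equation of claim 5 transcribes directly into NumPy; the default value of the constant c below is an assumption, as the claim only requires that c be a constant:

```python
import numpy as np

def illumination_correction(image, c=1.0):
    """Ī(x, y) = [I(x, y) - μ] / (c·σ); c = 1.0 is an illustrative default."""
    i = image.astype(np.float64)
    mu = i.mean()      # μ: mean value of the image
    sigma = i.std()    # σ: standard deviation of the image
    return (i - mu) / (c * sigma)
```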
6. A three-dimensional model construction device based on two-dimensional drawing, characterized by comprising:
an acquisition module, configured to acquire an input two-dimensional drawing image;
an extraction module, configured to extract each two-dimensional element object in the two-dimensional drawing image and record the coordinate position and the image texture of the two-dimensional element object; wherein the extraction module comprises: a first color acquisition module, configured to acquire a background color value of the two-dimensional drawing image and set a background floating range according to the background color value; a second color acquisition module, configured to acquire pixel color values corresponding to all pixel points in the two-dimensional drawing image and set a pixel floating range according to the pixel color values; and a determination module, configured to determine each two-dimensional element object according to the background floating range and the pixel floating range;
a three-dimensional rendering module, configured to input the two-dimensional element object into a preset renderer for three-dimensional rendering to obtain a corresponding three-dimensional element object; wherein the extraction module is specifically configured to: acquire shape features of the two-dimensional element object, perform feature matching on the shape features to obtain type features of the two-dimensional element object, and input the shape features of the two-dimensional element object to the renderer according to the type features for spatial expansion to obtain the corresponding three-dimensional element object;
a texture rendering module, configured to map the image texture onto the three-dimensional element object to form a texture model;
and a construction module, configured to move the texture model to the corresponding position in a preset space coordinate system according to the coordinate position, so as to construct a three-dimensional space model corresponding to the two-dimensional drawing image.
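As a non-authoritative sketch of how the modules of claim 6 might be wired together, reusing the hypothetical helpers from the earlier sketches; the renderer interface (expand, apply_texture, move_to) is an assumption for illustration, not an API defined by this application:

```python
class TwoDimensionalDrawingModelBuilder:
    """Claim 6 sketch: modules composed as plain callables.
    extract_elements / shape_features_from_element are the hypothetical
    helpers sketched above; the renderer object is assumed to exist."""

    def __init__(self, renderer):
        self.renderer = renderer  # preset renderer performing spatial expansion

    def build(self, image_bgr):
        elements = extract_elements(image_bgr)               # acquisition + extraction modules
        scene = []
        for element in elements:
            features = shape_features_from_element(element["image_texture"])
            mesh = self.renderer.expand(features)             # three-dimensional rendering module
            mesh.apply_texture(element["image_texture"])      # texture rendering module
            mesh.move_to(element["coordinate_position"])      # construction module
            scene.append(mesh)
        return scene                                          # three-dimensional space model
```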
7. An electronic device, comprising:
a memory and one or more processors;
the memory is used for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the two-dimensional drawing-based three-dimensional model construction method according to any one of claims 1-5.
8. A storage medium containing computer-executable instructions which, when executed by a computer processor, are configured to perform the two-dimensional drawing-based three-dimensional model construction method according to any one of claims 1-5.
CN202010470963.3A 2020-05-28 2020-05-28 Three-dimensional model construction method based on two-dimensional drawing, electronic equipment and storage medium Active CN111612880B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010470963.3A CN111612880B (en) 2020-05-28 2020-05-28 Three-dimensional model construction method based on two-dimensional drawing, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010470963.3A CN111612880B (en) 2020-05-28 2020-05-28 Three-dimensional model construction method based on two-dimensional drawing, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111612880A CN111612880A (en) 2020-09-01
CN111612880B true CN111612880B (en) 2023-05-09

Family

ID=72200290

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010470963.3A Active CN111612880B (en) 2020-05-28 2020-05-28 Three-dimensional model construction method based on two-dimensional drawing, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111612880B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112102155A (en) * 2020-09-09 2020-12-18 青岛黄海学院 System and method for converting planar design into non-planar design
CN112184884A (en) * 2020-09-23 2021-01-05 上海眼控科技股份有限公司 Three-dimensional model construction method and device, computer equipment and storage medium
CN112560158A (en) * 2020-12-23 2021-03-26 杭州群核信息技术有限公司 Table preview body generation method and table design system in home decoration design
CN115129191B (en) * 2021-03-26 2023-08-15 北京新氧科技有限公司 Three-dimensional object pickup method, device, equipment and storage medium
CN113139217B (en) * 2021-04-30 2023-08-29 深圳市行识未来科技有限公司 Conversion system for planar design and three-dimensional space design
CN114612606A (en) * 2022-02-11 2022-06-10 广东时谛智能科技有限公司 Shoe body exclusive customization method and device based on graphic elements and color matching data
CN114723601B (en) * 2022-04-08 2023-05-09 山东翰林科技有限公司 Model structured modeling and rapid rendering method under virtual scene

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018195485A1 (en) * 2017-04-21 2018-10-25 Mug Life, LLC Systems and methods for automatically creating and animating a photorealistic three-dimensional character from a two-dimensional image
CN109718554A (en) * 2018-12-29 2019-05-07 深圳市创梦天地科技有限公司 A kind of real-time rendering method, apparatus and terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of a Real-Time Texture Rendering System for Three-Dimensional Animation Images; Kong Suran; Yin Junping; Modern Electronics Technique (05); full text *

Also Published As

Publication number Publication date
CN111612880A (en) 2020-09-01

Similar Documents

Publication Publication Date Title
CN111612880B (en) Three-dimensional model construction method based on two-dimensional drawing, electronic equipment and storage medium
CN106778928B (en) Image processing method and device
Zheng et al. Non-local scan consolidation for 3D urban scenes
Pan et al. Rapid scene reconstruction on mobile phones from panoramic images
WO2014071060A2 (en) Scale-invariant superpixel region edges
CN111401266B (en) Method, equipment, computer equipment and readable storage medium for positioning picture corner points
Xiaokang et al. Research on augmented reality method based on improved ORB algorithm
CN114972646B (en) Method and system for extracting and modifying independent ground objects of live-action three-dimensional model
CN109118576A (en) Large scene three-dimensional reconstruction system and method for reconstructing based on BDS location-based service
Tingdahl et al. Arc3d: A public web service that turns photos into 3d models
Fernández-Palacios et al. Augmented reality for archaeological finds
Rasheed et al. 3D face creation via 2D images within blender virtual environment
CN114529689A (en) Ceramic cup defect sample amplification method and system based on antagonistic neural network
CN114330708A (en) Neural network training method, system, medium and device based on point cloud data
Leung et al. Tileable btf
CN113486941A (en) Live image training sample generation method, model training method and electronic equipment
Han et al. The application of augmented reality technology on museum exhibition—a museum display project in Mawangdui Han dynasty tombs
Pan et al. Salient structural elements based texture synthesis
Tomalini et al. Real-Time Identification of Artifacts: Synthetic Data for AI Model
Lee et al. Using an LCD Monitor and a Robotic Arm to Quickly Establish Image Datasets for Object Detection
CN113192204B (en) Three-dimensional reconstruction method for building in single inclined remote sensing image
Kang et al. Lightweight method with controllable appearance details for 3D reconstructed building models
CN115619985A (en) Augmented reality content display method and device, electronic equipment and storage medium
Phursule et al. Augmented Reality Snipping Tool
Oriti et al. A single RGB image based 3D object reconstruction system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant