CN117454495A - CAD vector model generation method and device based on building sketch outline sequence - Google Patents

CAD vector model generation method and device based on building sketch outline sequence

Info

Publication number
CN117454495A
CN117454495A CN202311786349.8A CN202311786349A CN117454495A CN 117454495 A CN117454495 A CN 117454495A CN 202311786349 A CN202311786349 A CN 202311786349A CN 117454495 A CN117454495 A CN 117454495A
Authority
CN
China
Prior art keywords
building
model based
profile
model
cad
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311786349.8A
Other languages
Chinese (zh)
Other versions
CN117454495B (en
Inventor
朱旭平
宋彬
何文武
张宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Feidu Technology Co ltd
Original Assignee
Beijing Feidu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Feidu Technology Co ltd filed Critical Beijing Feidu Technology Co ltd
Priority to CN202311786349.8A priority Critical patent/CN117454495B/en
Publication of CN117454495A publication Critical patent/CN117454495A/en
Application granted granted Critical
Publication of CN117454495B publication Critical patent/CN117454495B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/13Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Civil Engineering (AREA)
  • Architecture (AREA)
  • Medical Informatics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Structural Engineering (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of building model generation, and discloses a CAD vector model generation method and device based on a building sketch outline sequence. The CAD vector model generation method based on the building sketch outline sequence comprises the following steps: acquiring an individual building image; slicing the individual building image so as to obtain a plurality of contour maps; obtaining a trained model based on a Transformer algorithm; inputting each contour map into the model based on the Transformer algorithm, so as to obtain a corrected contour map corresponding to each contour map; and reconstructing the model from the corrected contour maps through a lifting and stretching method. According to the CAD vector model generation method based on the building sketch contour sequence, the corrected contour maps are obtained through the trained model based on the Transformer algorithm, and the end-to-end execution of the neural network and the big-data-driven training greatly improve the robustness of the algorithm. The method can generate high-quality, CAD-grade building models with editable parameters.

Description

CAD vector model generation method and device based on building sketch outline sequence
Technical Field
The present disclosure relates to the field of building model generation technologies, and in particular, to a method and a device for generating a CAD vector model based on a building sketch outline sequence.
Background
In many fields such as digital twinning and game or film production, high-quality modeling of large-scale urban architecture is required. With the development of data acquisition technology, computing power and three-dimensional reconstruction algorithms, the surface three-dimensional structure of an observed object can be reconstructed from images or laser scanning sequences and expressed as dense point clouds or triangular meshes. Extracting structural and topological information from models represented by discrete, dense point clouds and triangular meshes and presenting them in a more compact form is a core problem of three-dimensional vectorization modeling. In essence, it means regressing a high-quality three-dimensional model expressed as a combination of CAD-level parametric commands. At present, CAD vectorization modeling of image-based three-dimensional reconstruction models requires a great amount of manual operation, has huge modeling cost and low efficiency, and can hardly meet the requirements of vectorization modeling of city-scale scenes.
It is therefore desirable to have a solution that solves or at least alleviates the above-mentioned drawbacks of the prior art.
Disclosure of Invention
The invention aims to provide a CAD vector model generating method based on a building sketch outline sequence, which solves at least one of the technical problems described above.
The invention provides the following scheme:
according to one aspect of the present invention, there is provided a CAD vector model generating method based on a building sketch contour sequence, the CAD vector model generating method based on the building sketch contour sequence comprising:
acquiring an individual building image;
slicing the individual building image so as to obtain a plurality of contour maps;
obtaining a trained model based on a Transformer algorithm;
inputting each contour map into the model based on the Transformer algorithm, so as to obtain a corrected contour map corresponding to each contour map;
and reconstructing the model from the corrected contour maps through a lifting and stretching method.
Optionally, the acquiring the individual building image includes:
acquiring oblique photographic image information;
and acquiring the individual building image according to the oblique photographic image information.
Optionally, the inputting each contour map into the model based on the Transformer algorithm, so as to obtain a corrected contour map corresponding to each contour map includes:
performing the following operations for each contour map:
acquiring fusion coding features of the contour map input to the model based on the Transformer algorithm;
inputting the fusion coding features into an encoder for feature conversion, so as to obtain conversion features;
inputting the conversion features into a decoder so as to obtain command type prediction information and command parameter information corresponding to the conversion features;
and generating a corrected contour map according to the command type prediction information and the command parameter information.
Optionally, the model based on the Transformer algorithm adopts the following loss function when predicting the command type:

L_{type} = -\sum_{i=1}^{n} y_i \log(\hat{y}_i)

wherein y_i represents the true label for category i, \hat{y}_i represents the predicted probability for category i, and n is the total category number.
Optionally, the model based on the Transformer algorithm adopts the following loss functions when predicting command parameters: an L1 loss

L_{param} = \sum_{i} |y_i - \hat{y}_i|

wherein y_i represents the true label of sample i and \hat{y}_i represents the label predicted by the model for sample i; and a fitting loss based on the chamfer distance

d_{CD}(S_1, S_2) = \sum_{x \in S_1} \min_{y \in S_2} \|x - y\|_2 + \sum_{y \in S_2} \min_{x \in S_1} \|y - x\|_2

wherein S_1 and S_2 represent two sets of 3D point clouds, the first term represents the sum of the minimum distances from any point x in S_1 to S_2, and the second term represents the sum of the minimum distances from any point y in S_2 to S_1.
Optionally, the acquiring the fusion coding features of the contour map input to the model based on the Transformer algorithm includes:
dividing the contour map into 64 patches of size 16×16, and then performing feature coding on each patch to obtain a first feature;
coding the relative spatial position information of each patch in the image to obtain a second feature;
and fusing the first feature and the second feature, thereby obtaining the fusion coding features.
Optionally, the command type includes a straight line type, a curve type, a circle type, and a point type.
The application also provides a CAD vector model generating device based on the building sketch outline sequence, which comprises:
an individual building image acquisition module, wherein the individual building image acquisition module is used for acquiring an individual building image;
a contour map acquisition module, which is used for slicing the individual building image so as to acquire a plurality of contour maps;
a model acquisition module, which is used for acquiring a trained model based on a Transformer algorithm;
a corrected contour map acquisition module, which is used for respectively inputting each contour map into the model based on the Transformer algorithm so as to acquire a corrected contour map corresponding to each contour map;
and a model reconstruction module, which is used for reconstructing the model from the corrected contour maps through the lifting and stretching method.
The application also provides an electronic device, which comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus; the memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of the CAD vector model generation method based on the building sketch outline sequence as described above.
The present application also provides a computer readable storage medium storing a computer program executable by an electronic device, which when run on the electronic device is capable of implementing the steps of a CAD vector model generation method based on a sequence of building sketch contours as described above.
According to the CAD vector model generation method based on the building sketch contour sequence, the corrected contour maps are obtained through the trained model based on the Transformer algorithm, and the end-to-end execution of the neural network and the big-data-driven training greatly improve the robustness of the algorithm. The method can generate high-quality, CAD-grade building models with editable parameters.
Drawings
FIG. 1 is a flow diagram of a CAD vector model generating method based on a building sketch contour sequence in an embodiment of the present application;
fig. 2 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the accompanying drawings; the described embodiments are some, but not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without making any inventive effort fall within the scope of the invention.
Fig. 1 is a flow diagram of a CAD vector model generation method based on a sequence of architectural sketch contours in an embodiment of the present application.
The CAD vector model generating method based on the building sketch outline sequence shown in fig. 1 comprises the following steps:
step 1: acquiring an individual building image;
step 2: slicing the individual building image so as to obtain a plurality of contour maps;
step 3: obtaining a trained model based on a Transformer algorithm;
step 4: inputting each contour map into the model based on the Transformer algorithm, so as to obtain a corrected contour map corresponding to each contour map;
step 5: and reconstructing the model from the corrected contour maps through a lifting and stretching method.
According to the CAD vector model generation method based on the building sketch contour sequence, the corrected contour maps are obtained through the trained model based on the Transformer algorithm, and the end-to-end execution of the neural network and the big-data-driven training greatly improve the robustness of the algorithm. The method can generate high-quality, CAD-grade building models with editable parameters.
In this embodiment, the lifting and stretching method is prior art and may be performed by an extrude function, for example the one provided by pyvista.
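As an illustration only (not the patented implementation), the following sketch shows how such an extrusion could be performed with pyvista, assuming each corrected contour is available as a closed 2D polyline and the slice height is known; the function name and example footprint are hypothetical.

```python
# A minimal sketch: extrude a corrected 2D contour into a 3D slab with pyvista.
import numpy as np
import pyvista as pv

def extrude_contour(contour_xy: np.ndarray, z0: float, height: float) -> pv.PolyData:
    """contour_xy: (N, 2) closed building contour; z0: slice elevation."""
    # Lift the 2D contour to 3D at the slice elevation.
    points = np.column_stack([contour_xy, np.full(len(contour_xy), z0)])
    polyline = pv.lines_from_points(points, close=True)
    # Extrude the closed polyline upward by the slice height and cap the ends.
    return polyline.extrude((0.0, 0.0, height), capping=True)

# Example: a rectangular footprint extruded into one storey of 3 m.
footprint = np.array([[0, 0], [10, 0], [10, 6], [0, 6]], dtype=float)
slab = extrude_contour(footprint, z0=0.0, height=3.0)
# slab.plot()  # visual check
```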
In this embodiment, the acquiring the individual building image includes:
acquiring oblique photographic image information;
and acquiring the individual building image according to the oblique photographic image information.
In this embodiment, the inputting each contour map into the model based on the Transformer algorithm, so as to obtain a corrected contour map corresponding to each contour map includes:
performing the following operations for each contour map:
acquiring fusion coding features of the contour map input to the model based on the Transformer algorithm;
inputting the fusion coding features into an encoder for feature conversion, so as to obtain conversion features;
inputting the conversion features into a decoder so as to obtain command type prediction information and command parameter information corresponding to the conversion features;
and generating a corrected contour map according to the command type prediction information and the command parameter information.
In this embodiment, the model based on the Transformer algorithm adopts the following loss function when predicting the command type:

L_{type} = -\sum_{i=1}^{n} y_i \log(\hat{y}_i)

wherein y_i represents the true label for category i, \hat{y}_i represents the predicted probability for category i, and n is the total category number.
In this embodiment, the model based on the Transformer algorithm adopts the following loss functions when predicting command parameters: an L1 loss

L_{param} = \sum_{i} |y_i - \hat{y}_i|

wherein y_i represents the true label of sample i and \hat{y}_i represents the label predicted by the model for sample i; and a fitting loss based on the chamfer distance

d_{CD}(S_1, S_2) = \sum_{x \in S_1} \min_{y \in S_2} \|x - y\|_2 + \sum_{y \in S_2} \min_{x \in S_1} \|y - x\|_2

wherein S_1 and S_2 represent two sets of 3D point clouds, the first term represents the sum of the minimum distances from any point x in S_1 to S_2, and the second term represents the sum of the minimum distances from any point y in S_2 to S_1.
In this embodiment, the acquiring the fusion coding features of the contour map input to the model based on the Transformer algorithm includes:
dividing the contour map into 64 patches of size 16×16, and then performing feature coding on each patch to obtain a first feature;
coding the relative spatial position information of each patch in the image to obtain a second feature;
and fusing the first feature and the second feature, thereby obtaining the fusion coding features.
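For illustration, the following is a minimal sketch (in PyTorch, which the patent does not specify) of this fusion encoding, assuming a 128×128 single-channel contour image so that exactly 64 patches of 16×16 result; fusion by addition and a learnable positional encoding are assumptions, and all names are hypothetical.

```python
import torch
import torch.nn as nn

class PatchFusionEncoder(nn.Module):
    def __init__(self, img_size=128, patch=16, dim=256, in_ch=1):
        super().__init__()
        n_patches = (img_size // patch) ** 2            # 64 patches
        # First feature: per-patch content encoding via a strided convolution.
        self.patch_embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        # Second feature: learnable encoding of each patch's relative position.
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, dim))

    def forward(self, x):                               # x: (B, 1, 128, 128)
        feat = self.patch_embed(x)                      # (B, 256, 8, 8)
        feat = feat.flatten(2).transpose(1, 2)          # (B, 64, 256)
        return feat + self.pos_embed                    # fused coding features

tokens = PatchFusionEncoder()(torch.rand(2, 1, 128, 128))
print(tokens.shape)  # torch.Size([2, 64, 256])
```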
In this embodiment, the command types include a straight line type, a curve type, a circle type, and a point type.
The present application is described in further detail below by way of examples, which are not to be construed as limiting the present application in any way.
Acquiring an individual building image;
slicing the individual building image so as to obtain a plurality of contour maps;
obtaining a trained model based on a Transformer algorithm;
inputting each contour map into the model based on the Transformer algorithm, so as to obtain a corrected contour map corresponding to each contour map;
and reconstructing the model from the corrected contour maps through a lifting and stretching method.
In this embodiment, slicing the individual building image so as to obtain a plurality of contour maps specifically includes: cutting the building PLY model with the pyvista slice command and outputting a plurality of contour maps.
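A minimal sketch of this slicing step with pyvista, assuming the individual building mesh is stored as a PLY file; the file name and the number of slices are illustrative assumptions.

```python
import pyvista as pv

mesh = pv.read("building.ply")                  # individual building mesh
# Cut the model with a stack of horizontal planes along the z axis.
slices = mesh.slice_along_axis(n=20, axis="z")  # MultiBlock of cross-sections
for i, cross_section in enumerate(slices):
    # Each cross-section is a PolyData of line segments; it can be projected
    # to the XY plane or rasterized to produce one contour map of the model.
    print(i, cross_section.n_points, cross_section.bounds)
```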
In this embodiment, a trained model based on the Transformer algorithm is used for command type prediction and command parameter inference on each contour map.
In this embodiment, the specific usage of the model based on the Transformer algorithm is as follows:
1, firstly, each contour map is divided into 64 patches of size 16×16, and each patch is then feature-coded into a 256-dimensional feature;
2, simultaneously, the relative spatial position information of each patch in the image is coded into a 256-dimensional feature;
3, the two coding features are added and input into a standard Transformer encoder;
4, the self-attention mechanism in the Transformer is used to perform feature transformation, so as to obtain the conversion features;
5, we propose a learnable region CAD command query feature coding mechanism, which can effectively learn to divide the whole input image into different regions (position, region size, etc.). The number of queries, 200, is a hyperparameter set by us, assuming that at most 200 CAD commands exist in each slice. Essentially, 200 regions of the image are queried to determine whether a valid CAD command exists in each region; if one exists, what the type of the CAD command is (straight line, curve, circle, point); and what the parameters of this CAD command are.
6, the two kinds of features (the image features and the region query features) are input into a decoder which, for each region to be queried and based on the overall feature information of the input image, analyzes and judges whether a CAD command exists in that query region, and if so, what the type of the CAD command is and what the corresponding CAD command parameters are.
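For illustration, the following PyTorch sketch shows one way such a query-based decoder could look, in the spirit of DETR-style set prediction: 200 learnable region queries attend to the 64 encoded image tokens, and two heads predict a command type (including a "no command" class) and its parameters. The dimensions, head layouts and class/parameter counts are assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class CADCommandDecoder(nn.Module):
    def __init__(self, dim=256, n_queries=200, n_types=5, n_params=8):
        super().__init__()
        self.queries = nn.Embedding(n_queries, dim)      # region query features
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.type_head = nn.Linear(dim, n_types)         # line/curve/circle/point/none
        self.param_head = nn.Linear(dim, n_params)       # command parameters

    def forward(self, memory):                           # memory: (B, 64, 256)
        q = self.queries.weight.unsqueeze(0).expand(memory.size(0), -1, -1)
        hs = self.decoder(q, memory)                     # (B, 200, 256)
        return self.type_head(hs), self.param_head(hs)

types, params = CADCommandDecoder()(torch.rand(2, 64, 256))
print(types.shape, params.shape)  # (2, 200, 5) (2, 200, 8)
```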
7, for the above task, the loss functions are designed as follows.
7.1, in essence, not every query region contains a CAD command (straight line/curve/circle/point); assuming that K CAD commands actually exist in the input slice while the queries predict 200 CAD commands,
the Hungarian matching algorithm is therefore used to match the two sets (the ground-truth (GT) CAD command set and the predicted CAD command set): for the GT CAD command set, the subset of predicted CAD commands with the minimum total matching loss is selected from the predicted command set;
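A minimal sketch of this matching step, assuming pairwise classification and parameter costs have already been computed between the K GT commands and the 200 predictions; scipy's linear_sum_assignment implements the Hungarian algorithm, and the cost weights here are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_commands(cls_cost: np.ndarray, param_cost: np.ndarray,
                   w_cls: float = 1.0, w_param: float = 5.0):
    """cls_cost, param_cost: (K, 200) pairwise GT-vs-prediction costs."""
    cost = w_cls * cls_cost + w_param * param_cost       # total matching loss
    gt_idx, pred_idx = linear_sum_assignment(cost)       # Hungarian matching
    # pred_idx[k] is the prediction assigned to GT command gt_idx[k]; only these
    # K predictions receive category/parameter supervision.
    return list(zip(gt_idx, pred_idx))

pairs = match_commands(np.random.rand(3, 200), np.random.rand(3, 200))
print(pairs)
```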
7.2, then, the K predicted CAD commands thus selected from the 200 predictions are trained using the category and parameter supervision data of the K GT CAD commands;
7.3, the categories are trained with a cross-entropy loss.
For a single sample, assume the true distribution is y, the network output distribution is \hat{y}, and the total class number is n. The cross-entropy loss function is calculated as:

L_{type} = -\sum_{i=1}^{n} y_i \log(\hat{y}_i)

wherein y_i represents the true label for category i, \hat{y}_i represents the predicted probability for category i, and n is the total category number.
The command parameters are trained with an L1 loss:

L_{param} = \sum_{i} |y_i - \hat{y}_i|

wherein y_i represents the true label of sample i and \hat{y}_i represents the label predicted by the model for sample i.
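For illustration, a short PyTorch sketch of these two supervision terms applied to the K matched predictions; the tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def command_losses(pred_logits, pred_params, gt_types, gt_params):
    """pred_logits: (K, n_types), pred_params: (K, n_params); gt_* are matched GT."""
    # Category training, i.e. cross-entropy loss over command types.
    loss_cls = F.cross_entropy(pred_logits, gt_types)
    # Parameter training, i.e. L1 loss between predicted and true parameters.
    loss_l1 = F.l1_loss(pred_params, gt_params)
    return loss_cls, loss_l1

logits, params = torch.randn(3, 5), torch.randn(3, 8)
print(command_losses(logits, params, torch.tensor([0, 1, 2]), torch.randn(3, 8)))
```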
Meanwhile, in view of the specific nature of CAD commands, an additional loss, called the "fitting loss", is designed. The motivation is that each CAD command prediction outputs a category and the corresponding parameters, but under plain supervised training the parameters are supervised in isolation: a straight line, for example, is defined by two points, i.e. four numbers, each supervised separately.
However, once even one of these numbers is predicted poorly, the whole command is ruined, mainly because the relations between the parameters are not taken into account.
Therefore, we additionally design the following loss function:
for the regression prediction parameters, we combine the GT type of this command itself to sample it;
meanwhile, the GT command corresponding to the command can be combined with the corresponding type and parameter (a complete straight line segment, a circle or the like is constructed) for sampling;
then the chamfer distance between the two sampled point sets is used to measure how well they fit each other, giving the fitting-degree loss:

d_{CD}(S_1, S_2) = \sum_{x \in S_1} \min_{y \in S_2} \|x - y\|_2 + \sum_{y \in S_2} \min_{x \in S_1} \|y - x\|_2

wherein S_1 and S_2 represent the two sets of 3D point clouds, the first term represents the sum of the minimum distances from any point x in S_1 to S_2, and the second term represents the sum of the minimum distances from any point y in S_2 to S_1.
A larger value indicates that the two point clouds differ significantly; a smaller value indicates a better reconstruction.
Alternatively, the ratio of the number of points in the predicted point cloud whose shortest distance to the original point cloud is smaller than 0.3 to the total number of points may be used as the score.
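A minimal NumPy sketch of the two fitting measures described above: the chamfer distance between the two sampled point sets and the alternative inlier-ratio score with the 0.3 threshold; brute-force pairwise distances are used for clarity.

```python
import numpy as np

def chamfer_distance(s1: np.ndarray, s2: np.ndarray) -> float:
    """s1: (N, 3), s2: (M, 3). Sum of nearest-neighbour distances in both directions."""
    d = np.linalg.norm(s1[:, None, :] - s2[None, :, :], axis=-1)  # (N, M)
    return d.min(axis=1).sum() + d.min(axis=0).sum()

def inlier_score(pred: np.ndarray, ref: np.ndarray, thresh: float = 0.3) -> float:
    """Fraction of predicted points whose nearest reference point is closer than thresh."""
    d = np.linalg.norm(pred[:, None, :] - ref[None, :, :], axis=-1)
    return float((d.min(axis=1) < thresh).mean())

a = np.random.rand(100, 3)
b = a + 0.01 * np.random.randn(100, 3)
print(chamfer_distance(a, b), inlier_score(b, a))
```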
In this way, all parameters of each command are estimated jointly, instead of regression learning being carried out on each parameter independently and in isolation.
The advantage of this method is that joint learning is performed effectively, which suppresses the adverse situation, seen with independent learning, where a single badly predicted parameter spoils the whole command.
Data set: tens of millions of data pairs are generated by a parametric modeling system.
This scheme converts building vectorization modeling into a CAD command set prediction task. Through end-to-end execution of the neural network and training in a big-data-driven manner, the robustness of the algorithm is greatly improved. The method can generate high-quality, CAD-grade building models with editable parameters.
The application also provides a CAD vector model generating device based on the building sketch outline sequence, which comprises an individual building image acquisition module, a contour map acquisition module, a model acquisition module, a corrected contour map acquisition module and a model reconstruction module, wherein
the individual building image acquisition module is used for acquiring an individual building image;
the contour map acquisition module is used for slicing the individual building image so as to acquire a plurality of contour maps;
the model acquisition module is used for acquiring a trained model based on a Transformer algorithm;
the corrected contour map acquisition module is used for respectively inputting each contour map into the model based on the Transformer algorithm so as to acquire a corrected contour map corresponding to each contour map;
and the model reconstruction module is used for reconstructing the model from the corrected contour maps through the lifting and stretching method.
Fig. 2 is a block diagram of an electronic device provided by one or more embodiments of the invention.
As shown in fig. 2, the present application further discloses an electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus; the memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of the CAD vector model generating method based on a building sketch outline sequence.
The present application also provides a computer-readable storage medium storing a computer program executable by an electronic device, which, when run on the electronic device, is capable of implementing the steps of the CAD vector model generation method based on a building sketch outline sequence.
The communication bus of the above electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc. The communication bus may be classified as an address bus, a data bus, a control bus, and the like. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The electronic device includes a hardware layer, an operating system layer running on top of the hardware layer, and an application layer running on top of the operating system. The hardware layer includes hardware such as a central processing unit (CPU, Central Processing Unit), a memory management unit (MMU, Memory Management Unit), and a memory. The operating system may be any one or more computer operating systems that implement electronic device control via processes, such as a Linux operating system, a Unix operating system, an Android operating system, an iOS operating system, or a Windows operating system, etc. In addition, in the embodiment of the present invention, the electronic device may be a handheld device such as a smart phone or a tablet computer, or an electronic device such as a desktop computer or a portable computer, which is not particularly limited in the embodiment of the present invention.
The execution body controlled by the electronic device in the embodiment of the invention can be the electronic device or a functional module in the electronic device, which can call a program and execute the program. The electronic device may obtain firmware corresponding to the storage medium, where the firmware corresponding to the storage medium is provided by the vendor, and the firmware corresponding to different storage media may be the same or different, which is not limited herein. After the electronic device obtains the firmware corresponding to the storage medium, the firmware corresponding to the storage medium can be written into the storage medium, specifically, the firmware corresponding to the storage medium is burned into the storage medium. The process of burning the firmware into the storage medium may be implemented by using the prior art, and will not be described in detail in the embodiment of the present invention.
The electronic device may further obtain a reset command corresponding to the storage medium, where the reset command corresponding to the storage medium is provided by the provider, and the reset commands corresponding to different storage media may be the same or different, which is not limited herein.
At this time, the storage medium of the electronic device is a storage medium in which the corresponding firmware is written, and the electronic device may respond to a reset command corresponding to the storage medium in which the corresponding firmware is written, so that the electronic device resets the storage medium in which the corresponding firmware is written according to the reset command corresponding to the storage medium. The process of resetting the storage medium according to the reset command may be implemented in the prior art, and will not be described in detail in the embodiments of the present invention.
For convenience of description, the above devices are described as being functionally divided into various units and modules. Of course, the functions of each unit, module, etc. may be implemented in one or more pieces of software and/or hardware when implementing the present application.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
For simplicity of explanation, the methodologies are shown and described as a series of acts; it is to be understood and appreciated by one of ordinary skill in the art that the methodologies are not limited by the order of the acts, as some acts may take place in another order or concurrently. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts involved are not necessarily required by the embodiments of the invention.
From the above description of embodiments, it will be apparent to those skilled in the art that the present application may be implemented in software plus a necessary general purpose hardware platform. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions to cause a computer device (which may be a personal computer, a server or a network device, etc.) to perform the methods described in the embodiments or some parts of the embodiments of the present application.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (8)

1. A CAD vector model generation method based on a building sketch outline sequence, characterized by comprising the following steps:
acquiring an individual building image;
slicing the individual building image so as to obtain a plurality of contour maps;
obtaining a trained model based on a Transformer algorithm;
inputting each contour map into the model based on the Transformer algorithm, so as to obtain a corrected contour map corresponding to each contour map;
and reconstructing the model from the corrected contour maps through a lifting and stretching method.
2. The method for generating a CAD vector model based on a building sketch contour sequence according to claim 1, wherein said acquiring an individual building image comprises:
acquiring oblique photographic image information;
and acquiring the individual building image according to the oblique photographic image information.
3. The method for generating a CAD vector model based on a building sketch contour sequence according to claim 2, wherein said respectively inputting each contour map into the model based on the Transformer algorithm, thereby obtaining a corrected contour map corresponding to each contour map, comprises:
performing the following operations for each contour map:
acquiring fusion coding features of the contour map input to the model based on the Transformer algorithm;
inputting the fusion coding features into an encoder for feature conversion, so as to obtain conversion features;
inputting the conversion features into a decoder so as to obtain command type prediction information and command parameter information corresponding to the conversion features;
and generating a corrected contour map according to the command type prediction information and the command parameter information.
4. The method for generating a CAD vector model based on a building sketch contour sequence according to claim 3, wherein the model based on the Transformer algorithm uses the following loss function when predicting the command type:

L_{type} = -\sum_{i=1}^{n} y_i \log(\hat{y}_i)

wherein y_i represents the true label for category i, \hat{y}_i represents the predicted probability for category i, and n is the total category number.
5. The method for generating a CAD vector model based on a building sketch contour sequence according to claim 4, wherein the model based on the Transformer algorithm uses the following loss functions when predicting command parameters: an L1 loss

L_{param} = \sum_{i} |y_i - \hat{y}_i|

wherein y_i represents the true label of sample i and \hat{y}_i represents the label predicted by the model for sample i; and a fitting loss based on the chamfer distance

d_{CD}(S_1, S_2) = \sum_{x \in S_1} \min_{y \in S_2} \|x - y\|_2 + \sum_{y \in S_2} \min_{x \in S_1} \|y - x\|_2

wherein S_1 and S_2 represent two sets of 3D point clouds, the first term represents the sum of the minimum distances from any point x in S_1 to S_2, and the second term represents the sum of the minimum distances from any point y in S_2 to S_1.
6. The method for generating a CAD vector model based on a building sketch contour sequence according to claim 3, wherein said acquiring fusion coding features of the contour map input to said model based on the Transformer algorithm comprises:
dividing the contour map into 64 patches of size 16×16, and then performing feature coding on each patch to obtain a first feature;
coding the relative spatial position information of each patch in the image to obtain a second feature;
and fusing the first feature and the second feature, thereby obtaining the fusion coding features.
7. The method for generating a CAD vector model based on a building sketch contour sequence according to claim 5, wherein the command types include a straight line type, a curve type, a circle type, and a point type.
8. A CAD vector model generation device based on a building sketch outline sequence, characterized by comprising:
an individual building image acquisition module, wherein the individual building image acquisition module is used for acquiring an individual building image;
a contour map acquisition module, which is used for slicing the individual building image so as to acquire a plurality of contour maps;
a model acquisition module, which is used for acquiring a trained model based on a Transformer algorithm;
a corrected contour map acquisition module, which is used for respectively inputting each contour map into the model based on the Transformer algorithm so as to acquire a corrected contour map corresponding to each contour map;
and a model reconstruction module, which is used for reconstructing the model from the corrected contour maps through the lifting and stretching method.
CN202311786349.8A 2023-12-25 2023-12-25 CAD vector model generation method and device based on building sketch outline sequence Active CN117454495B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311786349.8A CN117454495B (en) 2023-12-25 2023-12-25 CAD vector model generation method and device based on building sketch outline sequence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311786349.8A CN117454495B (en) 2023-12-25 2023-12-25 CAD vector model generation method and device based on building sketch outline sequence

Publications (2)

Publication Number Publication Date
CN117454495A true CN117454495A (en) 2024-01-26
CN117454495B CN117454495B (en) 2024-03-15

Family

ID=89593278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311786349.8A Active CN117454495B (en) 2023-12-25 2023-12-25 CAD vector model generation method and device based on building sketch outline sequence

Country Status (1)

Country Link
CN (1) CN117454495B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117725966A (en) * 2024-02-18 2024-03-19 粤港澳大湾区数字经济研究院(福田) Training method of sketch sequence reconstruction model, geometric model reconstruction method and equipment
CN117935291A (en) * 2024-03-22 2024-04-26 粤港澳大湾区数字经济研究院(福田) Training method, sketch generation method, terminal and medium for sketch generation model

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100995400B1 (en) * 2010-06-29 2010-11-19 한진정보통신(주) System for extracting building outline using terrestrial lidar
CN111652892A (en) * 2020-05-02 2020-09-11 王磊 Remote sensing image building vector extraction and optimization method based on deep learning
CN113963177A (en) * 2021-11-11 2022-01-21 电子科技大学 CNN-based building mask contour vectorization method
CN114219819A (en) * 2021-11-19 2022-03-22 上海建工四建集团有限公司 Oblique photography model unitization method based on orthoscopic image boundary detection
CN114417489A (en) * 2022-03-30 2022-04-29 宝略科技(浙江)有限公司 Building base contour refinement extraction method based on real-scene three-dimensional model
CN116628834A (en) * 2023-07-26 2023-08-22 北京飞渡科技股份有限公司 Contour segmentation correction method and device based on neural network
CN116958454A (en) * 2023-09-21 2023-10-27 北京飞渡科技股份有限公司 Construction contour construction method and module based on graph network

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100995400B1 (en) * 2010-06-29 2010-11-19 한진정보통신(주) System for extracting building outline using terrestrial lidar
CN111652892A (en) * 2020-05-02 2020-09-11 王磊 Remote sensing image building vector extraction and optimization method based on deep learning
CN113963177A (en) * 2021-11-11 2022-01-21 电子科技大学 CNN-based building mask contour vectorization method
CN114219819A (en) * 2021-11-19 2022-03-22 上海建工四建集团有限公司 Oblique photography model unitization method based on orthoscopic image boundary detection
CN114417489A (en) * 2022-03-30 2022-04-29 宝略科技(浙江)有限公司 Building base contour refinement extraction method based on real-scene three-dimensional model
CN116628834A (en) * 2023-07-26 2023-08-22 北京飞渡科技股份有限公司 Contour segmentation correction method and device based on neural network
CN116958454A (en) * 2023-09-21 2023-10-27 北京飞渡科技股份有限公司 Construction contour construction method and module based on graph network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘虎 (Research on Rapid Three-Dimensional Modeling of Buildings Based on Construction Drawings), 《中国优秀硕士学位论文全文数据库》 (China Master's Theses Full-text Database), no. 7, 15 July 2015 (2015-07-15) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117725966A (en) * 2024-02-18 2024-03-19 粤港澳大湾区数字经济研究院(福田) Training method of sketch sequence reconstruction model, geometric model reconstruction method and equipment
CN117725966B (en) * 2024-02-18 2024-06-11 粤港澳大湾区数字经济研究院(福田) Training method of sketch sequence reconstruction model, geometric model reconstruction method and equipment
CN117935291A (en) * 2024-03-22 2024-04-26 粤港澳大湾区数字经济研究院(福田) Training method, sketch generation method, terminal and medium for sketch generation model
CN117935291B (en) * 2024-03-22 2024-06-11 粤港澳大湾区数字经济研究院(福田) Training method, sketch generation method, terminal and medium for sketch generation model

Also Published As

Publication number Publication date
CN117454495B (en) 2024-03-15

Similar Documents

Publication Publication Date Title
CN117454495B (en) CAD vector model generation method and device based on building sketch outline sequence
CN110390638B (en) High-resolution three-dimensional voxel model reconstruction method
CN103020935B (en) The image super-resolution method of the online dictionary learning of a kind of self-adaptation
US11276218B2 (en) Method for skinning character model, device for skinning character model, storage medium and electronic device
CN111386536A (en) Semantically consistent image style conversion
WO2022105117A1 (en) Method and device for image quality assessment, computer device, and storage medium
CN111524216B (en) Method and device for generating three-dimensional face data
CN113516133B (en) Multi-modal image classification method and system
US20230267686A1 (en) Subdividing a three-dimensional mesh utilizing a neural network
CN113194493B (en) Wireless network data missing attribute recovery method and device based on graph neural network
CN115205488A (en) 3D human body mesh completion method based on implicit nerve field representation
Zheng et al. Design of a quantum convolutional neural network on quantum circuits
Zhang et al. Hybrid feature CNN model for point cloud classification and segmentation
CN117218300A (en) Three-dimensional model construction method, three-dimensional model construction training method and device
CN117292007A (en) Image generation method and device
CN116821113A (en) Time sequence data missing value processing method and device, computer equipment and storage medium
CN111723186A (en) Knowledge graph generation method based on artificial intelligence for dialog system and electronic equipment
CN116431827A (en) Information processing method, information processing device, storage medium and computer equipment
CN115661340A (en) Three-dimensional point cloud up-sampling method and system based on source information fusion
Hu et al. IMMAT: Mesh reconstruction from single view images by medial axis transform prediction
CN112837420B (en) Shape complement method and system for terracotta soldiers and horses point cloud based on multi-scale and folding structure
CN114596203A (en) Method and apparatus for generating images and for training image generation models
CN117078867B (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, storage medium and electronic equipment
CN117058668B (en) Three-dimensional model face reduction evaluation method and device
CN117011650B (en) Method and related device for determining image encoder

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant