CN113724357B - Image processing method, system, terminal and storage medium - Google Patents
Image processing method, system, terminal and storage medium
- Publication number: CN113724357B
- Application number: CN202111295161.4A
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
- Classification: G06T11/40, Filling a planar surface by adding surface attributes, e.g. colour or texture (G: Physics; G06: Computing; G06T: Image data processing or generation; G06T11/00: 2D image generation)
Abstract
The invention discloses an image processing method, system, terminal and storage medium. The method comprises: acquiring a cross-sectional view and creating an operation surface corresponding to it, wherein the operation surface and the cross-sectional view correspond to the same plane but have different data types; acquiring an integral component entity corresponding to the cross-sectional view, the integral component entity being the complete entity corresponding to all components in the view; and filling the cross-sectional view according to the integral component entity and the operation surface to obtain a target cross-sectional view. By filling the image, the invention automatically covers the line segments at the lap joints in the cross-sectional view, solving the prior-art problem that, after drawing a cross-sectional view with Revit, a designer must manually make the lines at the lap joints between components invisible, which consumes a large amount of time and cost.
Description
Technical Field
The present invention relates to the field of image processing, and in particular, to an image processing method, system, terminal, and storage medium.
Background
Revit is a software suite from Autodesk built primarily to support Building Information Modeling (BIM). It helps structural engineers use intelligent models to understand building projects in depth through simulation and analysis, and to predict performance before construction. At present, after drawing a cross-sectional view with Revit, a designer must manually make the lines at the lap joints between components in the drawing invisible in order to meet drawing requirements. Since a cross-sectional view in a real project contains a large number of components, this consumes a great deal of the designer's time and cost.
Thus, there is still a need for improvement and development of the prior art.
Disclosure of Invention
The present invention provides an image processing method, system, terminal and storage medium, aiming to solve the problem that, after drawing a cross-sectional view with Revit, designers must manually make the lines at the lap joints between components in the drawing invisible, which consumes a large amount of time and cost.
The technical scheme adopted by the invention for solving the problems is as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, where the method includes:
acquiring a profile view, and creating an operation surface corresponding to the profile view, wherein planes corresponding to the operation surface and the profile view are the same, and data types corresponding to the operation surface and the profile view are different;
acquiring an integral component entity corresponding to the section view, wherein the integral component entity is a complete entity corresponding to all components in the section view;
and filling the section view according to the integral component entity and the operation surface to obtain a target section view.
In one embodiment, the creating an operation surface corresponding to the cross-sectional view includes:
determining an origin and a normal vector corresponding to the section view, wherein the origin is one vertex of a graph corresponding to the section view;
and creating the operation surface according to the origin and the normal vector.
In one embodiment, the obtaining the integral component entity corresponding to the cross-sectional view includes:
determining all the components displayed in the section view, and acquiring entities respectively corresponding to all the components to obtain a plurality of entities;
and combining a plurality of entities to obtain the integral member entity.
In one embodiment, the filling the cross-sectional view according to the integral component entity and the operation surface includes:
determining a filling area on the section view according to the integral component entity and the operation surface;
and filling the filling area.
In one embodiment, the determining a filling area on the cross-sectional view according to the integral component entity and the operation surface includes:
performing intersection operation on the integral component entity and the operation surface to obtain an intersection surface of the integral component entity and the operation surface;
and determining the filling area according to the intersecting surface.
In one embodiment, the determining the filling area according to the intersecting surface includes:
determining a plurality of target components which have an intersection relation with the operation surface from all the components according to the intersection surface;
and taking the integral closed area corresponding to a plurality of target members in the section view as the filling area.
In one embodiment, the filling processing of the filling area includes:
acquiring user operation data, and determining a filling type according to the user operation data;
and filling the filling area according to the filling type.
In a second aspect, an embodiment of the present invention further provides an image processing system, where the system includes:
the creating module is used for acquiring a profile view and creating an operation surface corresponding to the profile view, wherein the planes corresponding to the operation surface and the profile view are the same, and the data types corresponding to them are different;
the acquisition module is used for acquiring an integral component entity corresponding to the cross-sectional view, wherein the integral component entity is a complete entity corresponding to all components in the cross-sectional view;
and the filling module is used for filling the section view according to the integral component entity and the operation surface to obtain a target section view.
In a third aspect, an embodiment of the present invention further provides a terminal, where the terminal includes a memory and one or more processors; the memory stores one or more programs; the program comprises instructions for performing the image processing method as described in any of the above; the processor is configured to execute the program.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a plurality of instructions are stored, wherein the instructions are adapted to be loaded and executed by a processor to implement any of the steps of the image processing method described above.
The invention has the beneficial effects that: according to the embodiment of the invention, a cross-sectional view is acquired and an operation surface corresponding to it is created, wherein the operation surface and the cross-sectional view correspond to the same plane but have different data types; an integral component entity corresponding to the cross-sectional view is acquired, the integral component entity being the complete entity corresponding to all components in the view; and the cross-sectional view is filled according to the integral component entity and the operation surface to obtain a target cross-sectional view. By filling the image, the invention automatically covers the line segments at the lap joints in the cross-sectional view, solving the prior-art problem that a designer must manually make the lines at the lap joints between components invisible after drawing a cross-sectional view with Revit, which consumes a large amount of time and cost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of an original cross-sectional view provided by an embodiment of the present invention.
Fig. 3 is a schematic diagram of a target cross-sectional view provided by an embodiment of the invention.
Fig. 4 is a schematic diagram of merging multiple entities according to an embodiment of the present invention.
Fig. 5 is a connection diagram of internal modules of the image processing system according to the embodiment of the present invention.
Fig. 6 is a functional block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that, if directional indications (such as up, down, left, right, front, back, etc.) are involved in the embodiments of the present invention, the directional indications are only used to explain the relative positional relationship, movement and the like of the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly.
Revit is a software suite from Autodesk built primarily to support Building Information Modeling (BIM). It helps structural engineers use intelligent models to understand building projects in depth through simulation and analysis, and to predict performance before construction. At present, after drawing a cross-sectional view with Revit, a designer must manually make the lines at the lap joints between components in the drawing invisible in order to meet drawing requirements. Since a cross-sectional view in a real project contains a large number of components, this consumes a great deal of the designer's time and cost.
In view of the above-mentioned drawbacks of the prior art, the present invention provides an image processing method which acquires a cross-sectional view and creates an operation surface corresponding to it, wherein the operation surface and the cross-sectional view correspond to the same plane but have different data types; acquires an integral component entity corresponding to the cross-sectional view, the integral component entity being the complete entity corresponding to all components in the view; and fills the cross-sectional view according to the integral component entity and the operation surface to obtain a target cross-sectional view. By filling the image, the invention automatically covers the line segments at the lap joints in the cross-sectional view, solving the prior-art problem that a designer must manually make the lines at the lap joints between components invisible after drawing a cross-sectional view with Revit, which consumes a large amount of time and cost.
As shown in fig. 1, the method comprises the steps of:
step S100, a profile view is obtained, and an operation surface corresponding to the profile view is created, wherein planes corresponding to the operation surface and the profile view are the same, and data types corresponding to the operation surface and the profile view are different.
Specifically, the cross-sectional view in this embodiment may be any cross-sectional view acquired through the Revit API. To satisfy drawing requirements, this embodiment needs to hide the lines at the lap joints between members in the cross-sectional view. Because the data type of the cross-sectional view cannot be used directly in image operations, an operation surface must be created from it. The operation surface corresponds to the same plane as the cross-sectional view, but the two have different data types; that is, they correspond to different classes in the program, and the data type of the operation surface can be used in image operations.
For example, the cross-sectional view may be an instance of a view class (in the Revit API, a section-view type), and the operation surface corresponding to it an instance of a planar-face class; both correspond to the same plane, but their classes in the program are different.
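The "same plane, different data type" relationship can be pictured with a small Python sketch; the class names below are hypothetical stand-ins for illustration only, not actual Revit API types.

```python
from dataclasses import dataclass

Vec = tuple[float, float, float]

@dataclass
class SectionView:
    """View-type object: carries display data, not usable for geometric operations."""
    name: str
    origin: Vec
    normal: Vec

@dataclass
class OperationPlane:
    """Plane-type object: the same geometry, but in a class suited to image operations."""
    origin: Vec
    normal: Vec

    @classmethod
    def from_view(cls, view: SectionView) -> "OperationPlane":
        # Same plane (same origin and normal), different data type.
        return cls(view.origin, view.normal)

view = SectionView("Section A-A", (0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
plane = OperationPlane.from_view(view)
```

The two objects describe one plane, yet only the plane-typed one would participate in the intersection and filling steps described below in the embodiment.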
In one implementation, the step S100 specifically includes the following steps:
step S101, determining an origin and a normal vector corresponding to the profile view, wherein the origin is one vertex of a graph corresponding to the profile view;
and S102, creating the operation surface according to the origin and the normal vector.
In particular, the cross-sectional view typically has a certain shape; for example, it may be rectangular, triangular or pentagonal. Since it has a certain shape, it usually has several vertices, and one of these vertices is used as its origin. For example, if the cross-sectional view is rectangular, it has 4 vertices, and one of the 4 is taken as the origin. In addition, each cross-sectional view is given a normal vector by the designer during drawing, and this normal vector reflects the sectioning direction of the view. The operation surface corresponding to the cross-sectional view can then be created from the origin and the normal vector; it can be understood that the operation surface contains the origin and is perpendicular to the normal vector.
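The origin-plus-normal construction of steps S101 to S102 can be sketched in plain Python (illustrative geometry only; a real implementation would use the plane type provided by the Revit API):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def make_plane(origin, normal):
    """Plane through `origin` with normal `normal`, as a signed-distance function.

    A point lies on the plane exactly when its signed distance is zero, so the
    plane contains the origin and is perpendicular to the normal vector.
    """
    length = math.sqrt(dot(normal, normal))
    unit = tuple(c / length for c in normal)
    def signed_distance(point):
        return dot(tuple(p - o for p, o in zip(point, origin)), unit)
    return signed_distance

# Rectangle vertices of a section view; any one vertex can serve as the origin.
vertices = [(0, 0, 0), (4, 0, 0), (4, 0, 3), (0, 0, 3)]
plane = make_plane(vertices[0], (0, 1, 0))  # normal reflects the sectioning direction
```

All four vertices of the rectangle lie on the constructed plane, while a point displaced along the normal does not.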
As shown in fig. 1, the method further comprises the steps of:
step S200, obtaining an integral component entity corresponding to the cross-sectional view, wherein the integral component entity is a complete entity corresponding to all components in the cross-sectional view.
Specifically, as shown in fig. 2, a plurality of components are typically visible in the original cross-sectional view, and the visible components may or may not have an overlapping relationship. The original cross-sectional view is presented in two-dimensional form, while the building model is usually a three-dimensional structure, so each component in the two-dimensional view actually has a three-dimensional entity. In this embodiment, the complete entities corresponding to all components in the cross-sectional view are obtained so as to form an integral component entity. In other words, the integral component entity corresponds to the sum of the individual entities of the individual components.
In an implementation manner, the step S200 specifically includes the following steps:
step S201, determining all the components displayed in the section view, and acquiring entities respectively corresponding to all the components to obtain a plurality of entities;
step S202, combining a plurality of entities to obtain the integral member entity.
Specifically, in order to obtain the integral component entity, this embodiment first determines all members visible in the cross-sectional view and then obtains their respective individual entities, resulting in a plurality of entities. In short, regardless of whether there is an overlapping relationship between the components, as long as a component is visible in the cross-sectional view, its corresponding entity needs to be obtained. These entities are then combined to yield the integral component entity. Merging entities is equivalent to the union in mathematics: a plurality of entities that intersect or are tangent become one whole entity.
For example, as shown in fig. 4, after the three entities A, B and C are combined, their overlapping areas are merged, and the result is a single entity D.
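The merge described above behaves like a mathematical union. A toy voxel-based sketch (not the actual Revit boolean engine) of combining entities A, B and C into a single entity D:

```python
def box(x0, x1, y0, y1, z0, z1):
    """Axis-aligned solid represented as a set of unit voxels."""
    return {(x, y, z)
            for x in range(x0, x1)
            for y in range(y0, y1)
            for z in range(z0, z1)}

# Three overlapping component entities, as in fig. 4.
a = box(0, 4, 0, 4, 0, 2)
b = box(2, 6, 0, 4, 0, 2)
c = box(4, 8, 0, 4, 0, 2)

# Merging is a set union: overlapping regions are counted once,
# producing a single integral component entity D.
d = a | b | c
```

Because A overlaps B and B overlaps C, the union contains fewer voxels than the three solids summed separately, and the result is one connected whole.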
As shown in fig. 1, the method further comprises the steps of:
and step S300, filling the section view according to the whole component entity and the operation surface to obtain a target section view.
Specifically, since the integral component entity reflects the positions and shapes of all components of the cross-sectional view in three-dimensional space, and the operation surface reflects the sectioning plane of the view in three-dimensional space, the components located on the sectioning plane can be determined from the integral component entity and the operation surface. These components have a lapping relationship in the cross-sectional view, and the lap-joint line segments between them need to be hidden by filling processing to obtain a target cross-sectional view that meets drawing requirements.
In an implementation manner, the step S300 specifically includes the following steps:
step S301, determining a filling area on the section view according to the integral component entity and the operation surface;
and step S302, filling the filling area.
Specifically, the components located on the sectioning plane can be determined from the integral component entity and the operation surface. Since these components have an overlapping relationship in the cross-sectional view, the areas corresponding to them in the view form the filling area; it can be understood that the filling area is a closed whole area. In order to conceal the line segments at the joints between these members, the filling area must be subjected to filling processing.
In one implementation, the step S301 specifically includes the following steps:
step S3011, performing intersection operation on the integral component entity and the operation surface to obtain an intersection surface of the integral component entity and the operation surface;
and step S3012, determining the filling area according to the intersection surface.
Specifically, in order to determine which components are located on the sectioning plane, an intersection operation is performed between the integral component entity and the operation surface, yielding an intersection surface. The intersection surface reflects which components of the integral component entity have an intersecting relationship with the operation surface. Since the members that intersect the operation surface overlap one another in the cross-sectional view, the filling area can be determined from the intersection surface.
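The intersection test of step S3011 can be sketched by checking, for each component, whether its solid extent crosses the plane of the operation surface (toy data; the field names and values are hypothetical):

```python
def intersects_plane(component, plane_y):
    """True if the component's solid crosses the section plane y = plane_y."""
    ymin, ymax = component["y_extent"]
    return ymin <= plane_y <= ymax

components = [
    {"name": "beam",   "y_extent": (0.0, 2.0)},
    {"name": "column", "y_extent": (1.5, 5.0)},
    {"name": "slab",   "y_extent": (6.0, 7.0)},  # visible in the view, but behind the plane
]

plane_y = 1.8  # position of the operation surface along its normal
targets = [c["name"] for c in components if intersects_plane(c, plane_y)]
```

Only the beam and the column cross the plane, so only they become target components; the slab merely appears in the view and is excluded from the filling step.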
In an implementation manner, the step S3012 specifically includes the following steps:
step S30121, according to the said intersecting surface, confirm several target components that have intersecting relation with the said operation surface from all the said components;
step S30122, regarding an entire closed region corresponding to a plurality of target members in the cross-sectional view as the filling region.
In short, since the cross-sectional view is two-dimensional, members that appear to overlap in the cross-sectional view do not necessarily overlap in three-dimensional space; it is therefore necessary to screen out, from all the members, the members that actually overlap, that is, the target members. Specifically, the members that have an intersecting relationship with the operation surface overlap not only on the two-dimensional cross-sectional view but also in three-dimensional space, and are therefore taken as the target members. The whole closed area corresponding to the target members in the cross-sectional view is used as the filling area; it can be understood that after the filling area is filled, the line segments at the lap joints inside it are automatically covered.
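On a toy 2D grid, the effect of filling the whole closed area can be demonstrated: the lap-joint edge between two overlapping members lies on the boundary of a single member but in the interior of their union, so filling the union covers it (illustrative sketch, not Revit geometry):

```python
def rect_cells(x0, x1, y0, y1):
    """Rectangular member footprint in the section view, as a set of grid cells."""
    return {(x, y) for x in range(x0, x1) for y in range(y0, y1)}

# Two target members that overlap in the section view.
member_a = rect_cells(0, 5, 0, 3)
member_b = rect_cells(3, 8, 0, 3)

# The filling area is their whole closed area, i.e. the union.
fill_region = member_a | member_b

def boundary(cells):
    """Cells of a region that touch the outside: the region's visible outline."""
    nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    return {c for c in cells
            if any((c[0] + dx, c[1] + dy) not in cells for dx, dy in nbrs)}
```

The cell (4, 1) sits on member A's right edge inside the overlap, so it belongs to A's outline; in the filled union it is an interior cell, which is exactly the lap-joint line being covered.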
In an implementation manner, the step S302 specifically includes the following steps:
step S3021, acquiring user operation data, and determining a filling type according to the user operation data;
and step S3022, performing filling processing on the filling area according to the filling type.
Specifically, Revit provides various manners of filling; for example, the filling may be performed with a dashed-line pattern or with a colour. The user can select one of the filling types according to his or her requirements and preferences. After the terminal calls the filling method provided by the Revit API to fill the filling area with the type selected by the user, the line segments at the lap joints in the filling area are automatically hidden.
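The selection of a filling type from user operation data, steps S3021 to S3022, can be modelled as a simple lookup before invoking the fill routine. The type names and structure below are hypothetical; a real implementation would call the filling facilities of the Revit API.

```python
# Hypothetical catalogue of filling types the user may choose from.
FILL_TYPES = {
    "dashed": {"style": "dashed-line hatch"},
    "solid":  {"style": "solid colour"},
}

def fill(region_cells, user_choice):
    """Apply the user-selected fill type to every cell of the filling area."""
    fill_type = FILL_TYPES.get(user_choice)
    if fill_type is None:
        raise ValueError(f"unknown fill type: {user_choice!r}")
    # Each filled cell records the style that now covers it (and any lap-joint line on it).
    return {cell: fill_type["style"] for cell in region_cells}

filled = fill({(0, 0), (1, 0)}, "solid")
```

An unknown choice is rejected up front, so only filling types the catalogue actually defines ever reach the fill call.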
Based on the above embodiment, the present invention further provides an image processing system, as shown in fig. 5, the system including:
the creating module 01 is configured to acquire a profile view and create an operation surface corresponding to the profile view, where planes corresponding to the operation surface and the profile view are the same, and data types corresponding to the operation surface and the profile view are different;
an obtaining module 02, configured to obtain an integral component entity corresponding to the cross-sectional view, where the integral component entity is a complete entity corresponding to all components in the cross-sectional view;
and the filling module 03 is configured to perform filling processing on the cross-sectional view according to the integral component entity and the operation plane to obtain a target cross-sectional view.
In an implementation manner, the creating module 01 specifically includes:
the parameter determining unit is used for determining an origin and a normal vector corresponding to the section view, wherein the origin is one vertex of a graph corresponding to the section view;
and the surface creating unit is used for creating the operation surface according to the origin and the normal vector.
In an implementation manner, the obtaining module 02 specifically includes:
the component determining unit is used for determining all components displayed in the section view, and acquiring entities respectively corresponding to all the components to obtain a plurality of entities;
and the entity merging unit is used for merging the entities to obtain the integral member entity.
In an implementation manner, the filling module 03 specifically includes:
the region determining unit is used for determining a filling area on the section view according to the integral component entity and the operation surface;
and the processing unit is used for filling the filling area.
In one implementation, the region determining unit includes:
the intersection operation unit is used for performing intersection operation on the integral component entity and the operation surface to obtain an intersection surface of the integral component entity and the operation surface;
and the area determining unit is used for determining the filling area according to the intersecting surface.
In an implementation manner, the area determining unit specifically includes:
the component screening unit is used for determining a plurality of target components which have an intersection relation with the operation surface from all the components according to the intersection surface;
and the area selection unit is used for taking the integral closed area corresponding to a plurality of target components in the cross-sectional view as the filling area.
In one implementation, the processing unit specifically includes:
the user selection unit is used for acquiring user operation data and determining a filling type according to the user operation data;
and the filling processing unit is used for filling the filling area according to the filling type.
Based on the above embodiments, the present invention further provides a terminal, a schematic block diagram of which may be as shown in fig. 6. The terminal comprises a processor, a memory, a network interface and a display screen connected through a system bus. The processor of the terminal provides computing and control capabilities. The memory of the terminal comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the nonvolatile storage medium. The network interface of the terminal is used for connecting and communicating with an external terminal through a network. The computer program is executed by the processor to implement the image processing method. The display screen of the terminal may be a liquid crystal display screen or an electronic ink display screen.
It will be appreciated by those skilled in the art that the block diagram of fig. 6 is only a block diagram of a portion of the structure associated with the inventive arrangements and does not constitute a limitation of the terminal to which the inventive arrangements are applied, and that a particular terminal may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one implementation, one or more programs are stored in a memory of the terminal and configured to be executed by one or more processors, the one or more programs including instructions for performing the image processing method.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
In summary, the present invention discloses an image processing method, system, terminal and storage medium. The method acquires a cross-sectional view and creates an operation surface corresponding to it, wherein the operation surface and the cross-sectional view correspond to the same plane but have different data types; acquires an integral component entity corresponding to the cross-sectional view, the integral component entity being the complete entity corresponding to all components in the view; and fills the cross-sectional view according to the integral component entity and the operation surface to obtain a target cross-sectional view. By filling the image, the invention automatically covers the line segments at the lap joints in the cross-sectional view, solving the prior-art problem that a designer must manually make the lines at the lap joints between components invisible after drawing a cross-sectional view with Revit, which consumes a large amount of time and cost.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.
Claims (7)
1. An image processing method, characterized in that the method comprises:
acquiring a profile view, and creating an operation surface corresponding to the profile view, wherein planes corresponding to the operation surface and the profile view are the same, and data types corresponding to the operation surface and the profile view are different;
acquiring an integral component entity corresponding to the section view, wherein the integral component entity is a complete entity corresponding to all components in the section view;
performing intersection operation on the integral component entity and the operation surface to obtain an intersection surface of the integral component entity and the operation surface;
determining a plurality of target components which have an intersection relation with the operation surface from all the components according to the intersection surface;
taking the integral closed area corresponding to a plurality of target components in the cross-sectional view as a filling area;
and filling the filling area.
2. The image processing method according to claim 1, wherein the creating of the operation surface corresponding to the cross-sectional view includes:
determining an origin and a normal vector corresponding to the section view, wherein the origin is one vertex of a graph corresponding to the section view;
and creating the operation surface according to the origin and the normal vector.
3. The image processing method according to claim 1 or 2, wherein the acquiring the monolithic entity corresponding to the cross-sectional view comprises:
determining all the components displayed in the section view, and acquiring entities respectively corresponding to all the components to obtain a plurality of entities;
and combining a plurality of entities to obtain the integral member entity.
4. The image processing method according to claim 1, wherein the filling processing of the filling area comprises:
acquiring user operation data, and determining a filling type according to the user operation data;
and filling the filling area according to the filling type.
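Claim 4 resolves a fill type from user operation data before filling. A minimal sketch, assuming a hypothetical dictionary of named fill patterns and a fallback default; the pattern strings and key names are illustrative only.

```python
# Hypothetical catalogue of fill patterns keyed by user-facing names.
FILL_TYPES = {"solid": "####", "hatch": "////", "dots": "...."}

def resolve_fill_type(user_operation_data: dict, default: str = "solid") -> str:
    # Map the user's selection onto a known fill type, falling back to the
    # default pattern when the selection is missing or unrecognised.
    choice = user_operation_data.get("fill_type", default)
    return FILL_TYPES.get(choice, FILL_TYPES[default])

print(resolve_fill_type({"fill_type": "hatch"}))  # ////
print(resolve_fill_type({}))                      # ####
```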
5. An image processing system, characterized in that the system comprises:
a creating module, configured to acquire a sectional view and create an operation surface corresponding to the sectional view, wherein the operation surface and the sectional view correspond to the same plane but to different data types;
an acquisition module, configured to acquire an integral component entity corresponding to the sectional view, wherein the integral component entity is a complete entity corresponding to all components in the sectional view;
a filling module, configured to perform an intersection operation on the integral component entity and the operation surface to obtain an intersection surface of the integral component entity and the operation surface; determine, according to the intersection surface, a plurality of target components that intersect the operation surface from among all the components; take the integral closed area corresponding to the plurality of target components in the sectional view as a filling area; and fill the filling area.
6. A terminal, comprising a memory and one or more processors, wherein the memory stores one or more programs comprising instructions for performing the image processing method according to any one of claims 1 to 4, and the processors are configured to execute the programs.
7. A computer-readable storage medium having stored thereon a plurality of instructions adapted to be loaded and executed by a processor to perform the steps of the image processing method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111295161.4A CN113724357B (en) | 2021-11-03 | 2021-11-03 | Image processing method, system, terminal and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113724357A (en) | 2021-11-30
CN113724357B (en) | 2022-02-18
Family
ID=78686592
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111295161.4A Active CN113724357B (en) | 2021-11-03 | 2021-11-03 | Image processing method, system, terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113724357B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111145356A (en) * | 2019-12-31 | 2020-05-12 | 威创集团股份有限公司 | Cutting method based on Unity3D model |
CN112802169A (en) * | 2021-01-29 | 2021-05-14 | 中国地质大学(武汉) | Real-time sectioning and rendering method for three-dimensional geological model |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104809266B (en) * | 2015-02-11 | 2017-10-31 | 山东科技大学 | A kind of accurate Forecasting Methodology of working face occurrence of coal seam situation based on SPL
US10949805B2 (en) * | 2015-11-06 | 2021-03-16 | Anguleris Technologies, Llc | Method and system for native object collaboration, revision and analytics for BIM and other design platforms |
CN111582005B (en) * | 2019-02-18 | 2023-08-15 | Oppo广东移动通信有限公司 | Image processing method, device, computer readable medium and electronic equipment |
CN110441823B (en) * | 2019-08-09 | 2021-06-01 | 浙江财经大学 | Stratum contrast uncertainty visualization method based on multi-source data fusion |
CN110807214B (en) * | 2019-09-16 | 2022-08-05 | 重庆长厦安基建筑设计有限公司 | Application method of multi-axis curved curtain wall based on BIM |
CN110728086B (en) * | 2019-09-20 | 2020-12-08 | 中建一局集团建设发展有限公司 | Electromechanical BIM two-three-dimensional integrated assembly type automatic universal family manufacturing method |
CN111428295B (en) * | 2020-03-17 | 2023-02-28 | 中色科技股份有限公司 | Heating map making method for large workshop for BIM forward design |
CN111462023B (en) * | 2020-03-31 | 2023-05-23 | 上海大学 | Image texture line vectorization system and method |
CN111597606A (en) * | 2020-04-08 | 2020-08-28 | 中交二公局第三工程有限公司 | High-efficiency design method for digital large-scale engineering construction station |
CN111488664B (en) * | 2020-04-17 | 2023-06-30 | 同瀚建筑科技(杭州)有限公司 | Automatic connection system and method for spray pipeline based on revit |
CN112070890B (en) * | 2020-07-21 | 2024-03-01 | 郑州大学 | Rapid and refined three-dimensional geological modeling method based on multi-source data |
CN112016146A (en) * | 2020-08-27 | 2020-12-01 | 上海交通建设总承包有限公司 | Hydraulic model modeling system based on Revit |
CN112233050B (en) * | 2020-12-17 | 2021-03-09 | 之江实验室 | Light ray self-adaptive green field extraction method |
Also Published As
Publication number | Publication date |
---|---|
CN113724357A (en) | 2021-11-30 |
Similar Documents
Publication | Title
---|---
EP3753614B1 (en) | Map rendering method and apparatus, computer device and storage medium
CN112182886B (en) | Engineering model generation method and device, computer equipment and storage medium
CN113724357B (en) | Image processing method, system, terminal and storage medium
Barladian et al. | Integration of realistic computer graphics into computer-aided design and product lifecycle management systems
US6571146B1 | Method for updating feature recipes in a history-based solid model
CN112364421A (en) | Rendering method and device of building information model, computer equipment and storage medium
CN110688798A (en) | Deformation prediction method, device, equipment and storage medium for shell structural part
CN112836265B (en) | Interior wall generation method and device, computer equipment and storage medium
CN111563955A (en) | Building model monomer visualization method, terminal and storage medium
CN112632679B (en) | Building information model scene rendering method and device and computer equipment
CN110727981B (en) | Method and device for generating column splicing node of light steel structure and storage medium
CN112836263B (en) | Axle network generation method and device, computer equipment and storage medium
CN110737934B (en) | Tiger window roof truss node generation method, device and storage medium
CN115238338A (en) | Component model updating method and device and electronic equipment
CN111159796B (en) | Method and device for generating beam of building, computer equipment and storage medium
CN114116620A (en) | Building data processing method and device, terminal and storage medium
CN113158280B (en) | Roof main body structure generation method, device, computer equipment and storage medium
CN112395783B (en) | Method and device for moving out grids in batches based on CAE
CN112464327A (en) | Door and window model arrangement method and device, computer equipment and storage medium
CN113987652A (en) | Wall model generation method and device
CN110704897A (en) | Method for placing connecting node between wall keel model and bottom guide beam model and product
CN110750817A (en) | Connecting piece placing method, adjacent relation determining method, device, equipment and medium
CN110765509A (en) | Method and device for generating main roof hole-opening reinforcing node
CN112148358A (en) | Family file processing method and device, computer equipment and storage medium
CN112836266B (en) | Floor generation method and device, computer equipment and storage medium
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant