CN112613097A - BIM rapid modeling method based on computer vision - Google Patents

BIM rapid modeling method based on computer vision

Info

Publication number
CN112613097A
CN112613097A (application CN202011479172.3A)
Authority
CN
China
Prior art keywords
component
image
bim
construction
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011479172.3A
Other languages
Chinese (zh)
Inventor
王飞球
金顺利
谢以顺
李超男
王春峰
倪有豪
温学华
茅建校
王浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Jiangsu Engineering Co Ltd of China Railway 24th Bureau Group Co Ltd
Original Assignee
Southeast University
Jiangsu Engineering Co Ltd of China Railway 24th Bureau Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University and Jiangsu Engineering Co Ltd of China Railway 24th Bureau Group Co Ltd
Priority to CN202011479172.3A
Publication of CN112613097A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/10 Geometric CAD
    • G06F30/13 Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention discloses a BIM rapid modeling method based on computer vision, which comprises the following steps: 1) photographing the construction blueprint to obtain construction drawing images; 2) preprocessing the construction drawing images to obtain a complete training set; 3) building a deep convolutional neural network model for multi-class target recognition; 4) acquiring component types and their positions with the deep convolutional neural network model; 5) generating a building information model in BIM engine software. The method improves the clarity of the construction drawing images by image pixel equalization, optimizes component boundaries with the Laplacian-of-Gaussian operator, and enlarges the data set through data enhancement, thereby improving the accuracy with which the model identifies components; based on the multi-class target recognition network model, component features are recognized quickly and their position information is returned; the spatial position and geometric dimension information of the components is converted into an importable format, so that the BIM model can be built rapidly in BIM engine software.

Description

BIM rapid modeling method based on computer vision
Technical Field
The invention relates to a BIM rapid modeling method based on computer vision, and belongs to the field of civil engineering structure building information modeling.
Background Art
With the development of computer technology, the application of computers in the civil engineering industry has matured. BIM (Building Information Modeling), proposed by Dr. Chuck Eastman in 1975, has been widely adopted in China's building industry. BIM technology brings improvements in building design, energy consumption analysis, three-dimensional construction simulation, cost analysis and other aspects. When building information modeling tools represented by Autodesk Revit are applied, the three-view drawings, component dimension details and door and window materials need to be merged into the same BIM database. The parametric modification engine of Revit builds the model from drawing views and schedules, enabling coordinated three-dimensional design. At present, however, building information modeling in Revit involves repetitive, mechanical labour that reduces engineers' productivity to a certain extent, especially in projects with regular structural layouts. In recent years, with the rapid development of computer image recognition and deep learning, the accuracy of detecting and classifying multiple objects in images has improved markedly, making it possible to reduce such mechanical modeling steps and increase modeling speed.
At present, BIM rapid modeling based on deep neural networks and computer vision has been studied only a little in China. Conventional BIM modeling methods usually take two-dimensional CAD (computer-aided design) drawings as the data source: different component types are mapped to corresponding layers by strictly classifying the components and setting layering standards, after which the data file format is converted and imported into Revit for rapid modeling. When the source files are missing, as in the reinforcement of old buildings, the two-dimensional CAD design drawings must be redrawn, which greatly increases the BIM modeling workload; moreover, assigning drawing layers in the CAD design requires considerable engineering experience to guarantee modeling accuracy.
As a new generation of image recognition technology, multi-class object recognition algorithms have emerged in recent years. They not only recognize the required objects in an image quickly, but also classify multiple objects in the same image and return their position coordinates. Even for relatively small component symbols on a construction blueprint, multi-class object recognition achieves high accuracy. On this basis, once the spatial position and geometric dimension information of the different components has been acquired and integrated, one-click modeling in BIM engine software becomes possible, which is of significant value for reducing engineers' BIM modeling workload and improving productivity.
Disclosure of Invention
Purpose of the invention: to overcome the defects of the prior art, a BIM rapid modeling method based on computer vision is provided that improves the clarity of the image training set, optimizes the extraction of component spatial position and geometric dimension information, and improves the efficiency of building information modeling in a BIM engine.
Technical scheme: to achieve the above purpose, the technical scheme of the invention is as follows. A BIM rapid modeling method based on computer vision comprises the following steps:
the first step: photograph and collect the construction blueprint; shoot the blueprint in parallel at a certain distance, ensuring that the camera negative is parallel to the blueprint, and determine the conversion relation between the component dimensions in the picture and the actual dimensions;
the second step: image preprocessing, including image mode conversion, image pixel equalization, component boundary optimization, component bounding-box labeling and data enhancement;
the third step: obtain the component types and corresponding positions from the images of the second step, producing txt text that contains the two-dimensional coordinates of each component and the corresponding component type;
the fourth step: train a neural network model with the data of the third step to obtain a neural network model that identifies targets on the construction blueprint;
the fifth step: input the construction drawing pictures into the deep convolutional neural network model, obtain the types and position information of the various components, write them into txt text, and integrate the floor elevation information with the component information output by the model;
the sixth step: generate the building information model in BIM engine software, converting the txt text of the fifth step into a format that the BIM engine can recognize and importing it to build the BIM model.
Further, parallel shooting requires that the camera negative and the construction blueprint be two parallel planes. The conversion relation between the photographed component dimensions on the construction blueprint and the actual dimensions is:
S_member = φ(f, pix, pro, s_act)
where S_member is the actual size of the identified component, φ is the conversion function, f is the camera focal length, pix is the physical size represented by one labeled pixel, pro is the drawing scale of the construction blueprint, and s_act is the component dimension measured on the blueprint.
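The patent does not specify the form of the conversion function φ. By way of illustration only, the following sketch assumes a simple relation in which the metric size of a component follows from the number of pixels it spans, the physical size represented by one pixel (which in practice depends on the focal length and shooting distance), and the drawing scale; the function name and the example values are hypothetical.

```python
def component_size(pixel_span: float, pixel_size_mm: float, drawing_scale: float) -> float:
    """Hypothetical instance of the conversion S_member = phi(f, pix, pro, s_act).

    pixel_span     -- number of pixels the component occupies in the photo
    pixel_size_mm  -- physical size (mm) on the blueprint represented by one pixel,
                      determined by the camera focal length and shooting distance
    drawing_scale  -- blueprint drawing scale, e.g. 100 for a 1:100 drawing
    Returns the real-world component size in mm.
    """
    size_on_blueprint_mm = pixel_span * pixel_size_mm   # s_act measured on the sheet
    return size_on_blueprint_mm * drawing_scale         # scale up to the built size


# Example: a beam spanning 240 px at 0.25 mm per pixel on a 1:100 drawing -> 6000 mm
print(component_size(240, 0.25, 100))
```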
Further, the picture preprocessing comprises the following steps (a brief code sketch follows the list):
the first step: convert the construction blueprint photo from RGB mode to gray-scale mode with the formula:
Gray = R*0.299 + G*0.587 + B*0.114
where the RGB image is a true-color image, R, G and B are the red, green and blue basic colors respectively, and Gray is the gray value;
the second step: perform image pixel equalization in gray-scale mode;
the third step: automatically optimize the identified component boundaries with the Laplacian-of-Gaussian operator;
the fourth step: label the component boundaries of the image training set: manually mark the component targets in the images, the labels containing the component type and position, to obtain the initial training set;
the fifth step: apply data enhancement to the image training set to obtain the enhanced training set.
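A minimal sketch of this preprocessing chain, assuming OpenCV and NumPy are available. The equalization step interprets the coefficient of 255 divided by the grey-level range (described later in the detailed description) as a min-max stretch, and the Laplacian-of-Gaussian step is approximated by Gaussian blurring followed by a Laplacian filter; the file name, kernel sizes and blending weights are placeholders.

```python
import cv2
import numpy as np

def preprocess_blueprint(path: str) -> np.ndarray:
    """Grey-scale conversion, pixel equalization and LoG boundary sharpening."""
    bgr = cv2.imread(path)                                # OpenCV loads images as BGR
    if bgr is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)          # ~ R*0.299 + G*0.587 + B*0.114

    # Pixel equalization: 255 divided by the grey-level range of the image is used
    # as a coefficient and applied to every pixel (a min-max stretch).
    lo, hi = int(gray.min()), int(gray.max())
    coef = 255.0 / max(hi - lo, 1)
    equalized = np.clip((gray.astype(np.float32) - lo) * coef, 0, 255).astype(np.uint8)

    # Laplacian of Gaussian: smooth first, then take the Laplacian to emphasise
    # component boundaries before annotation.
    blurred = cv2.GaussianBlur(equalized, (5, 5), 1.0)
    log_edges = cv2.Laplacian(blurred, cv2.CV_16S, ksize=3)
    boundaries = cv2.convertScaleAbs(log_edges)

    # Overlay the sharpened boundaries on the equalized drawing.
    return cv2.addWeighted(equalized, 0.8, boundaries, 0.2, 0)

# enhanced = preprocess_blueprint("blueprint_photo.jpg")   # hypothetical file name
```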
Further, the deep convolutional neural network model for multi-class recognition is trained with the following steps (a sketch of the model-selection loop follows the list):
the first step: set the initial weights and the initial learning rate;
the second step: apply batch normalization to the complete training set;
the third step: input the batch-normalized complete training set into the iteration layer;
the fourth step: compute the current mAP value, adjust the weights and learning rate, and repeat the first to fourth steps;
the fifth step: after a set number of iterations, take the deep convolutional neural network model with the highest mAP value as the selected model.
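The patent does not tie the training to a specific framework. The sketch below only illustrates the selection logic of the five steps above: train in rounds, compute the mAP after each round, adjust the learning rate, and keep the weights with the highest mAP. `train_one_round` and `evaluate_map` are hypothetical stand-ins for the actual YOLO training and evaluation routines.

```python
import random
from typing import Any, Tuple

def train_one_round(weights: Any, learning_rate: float) -> Any:
    """Hypothetical stand-in for one pass of YOLO training on the batch-normalized set."""
    return weights  # a real implementation would update the weights here

def evaluate_map(weights: Any) -> float:
    """Hypothetical stand-in for computing mAP over all component categories."""
    return random.random()

def select_best_model(initial_weights: Any,
                      initial_lr: float = 1e-3,
                      rounds: int = 50,
                      lr_decay: float = 0.95) -> Tuple[Any, float]:
    weights, lr = initial_weights, initial_lr
    best_weights, best_map = weights, -1.0
    for _ in range(rounds):                    # iterate a fixed number of times
        weights = train_one_round(weights, lr)
        current_map = evaluate_map(weights)    # mAP of the current model
        if current_map > best_map:             # keep the model with the highest mAP
            best_weights, best_map = weights, current_map
        lr *= lr_decay                         # adjust the learning rate each round
    return best_weights, best_map

# best_weights, best_score = select_best_model(initial_weights=None)
```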
Beneficial effects: compared with the prior art, the technical scheme of the invention has the following beneficial technical effects:
(1) image pixel equalization improves the clarity of the construction drawing images, the Laplacian-of-Gaussian operator optimizes the component boundaries, and data enhancement by flipping, rotation, scaling and cropping improves the accuracy with which the convolutional neural network model identifies component features.
(2) component features are recognized quickly based on computer vision and their positions returned; by converting the spatial position and geometric dimension information of the components into an importable format, the BIM model is built rapidly in BIM engine software.
Drawings
FIG. 1 illustrates the BIM rapid modeling method based on computer vision;
FIG. 2 is an image pre-processing flow diagram;
FIG. 3 is a flow chart of optimal model training and evaluation.
Detailed Description
The present invention will be further illustrated below with reference to the accompanying drawings and specific embodiments, which should be understood as illustrative only and not limiting the scope of the invention.
As shown in fig. 1, the implementation of the method of the present invention is described in detail below, taking YOLO as an example of a multi-class target recognition algorithm. The method mainly comprises the following steps:
1) Photograph and collect the construction blueprint. The construction blueprint is the target for construction of the engineering structure and details the structure's external dimensions, internal construction and other aspects. Experienced engineers can read the component dimensions from the construction drawings and direct the site construction. When collecting information from the construction blueprint, the influence of factors such as shooting distance and viewing angle on the conversion between picture dimensions and actual dimensions must be considered, so the camera negative must be parallel to the construction blueprint during shooting. The conversion relation between the photographed component dimensions and the actual dimensions is:
S_member = φ(f, pix, pro, s_act)
where S_member is the actual size of the identified component, φ is the conversion function, f is the camera focal length, pix is the physical size represented by one labeled pixel, pro is the drawing scale of the construction blueprint, and s_act is the component dimension measured on the blueprint.
2) Image preprocessing. Considering factors such as lighting, image pixel equalization is applied after converting the image mode of the construction drawing pictures, which effectively widens the spread of the pixel distribution and improves the clarity of the original image. The bounding-box coordinates and types of the components are labeled manually on the original construction drawing images, and the component boundaries are identified automatically with the Laplacian-of-Gaussian operator, forming the initial training set of component boundaries; data enhancement is then applied to the initial training set to obtain the enhanced training set.
3) Build the YOLO-based deep convolutional neural network model, including construction and training. The structure of the deep convolutional neural network comprises a batch normalization layer, convolution iteration layers and an mAP calculation layer; the network sets the learning rate and weights and computes the model mAP value.
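The network structure is only named here (batch normalization layer, convolution iteration layers, mAP calculation layer) and not specified in detail. The PyTorch fragment below sketches the convolution + batch-normalization + LeakyReLU block that YOLO-style feature extractors typically iterate; the channel sizes, block count and input resolution are placeholders, not the patent's actual configuration.

```python
import torch
import torch.nn as nn

class ConvBNLeaky(nn.Module):
    """Convolution followed by batch normalization and LeakyReLU, the basic
    repeated unit of a YOLO-style feature extractor (channel sizes are placeholders)."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.1, inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)

# A toy stack of iterated blocks; a real detector would add a detection head
# that predicts class scores and box coordinates per grid cell.
backbone = nn.Sequential(
    ConvBNLeaky(1, 16),             # grey-scale drawing: 1 input channel
    ConvBNLeaky(16, 32, stride=2),
    ConvBNLeaky(32, 64, stride=2),
)
features = backbone(torch.randn(1, 1, 416, 416))
print(features.shape)               # torch.Size([1, 64, 104, 104])
```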
4) Acquire the component categories and corresponding positions: input the preprocessed data set into the deep convolutional neural network model, obtain the two-dimensional coordinates of each component and its category, output them as txt text, and integrate the floor elevation information with the component information output by the model;
5) Generate the building information model in BIM engine software: convert the txt text of step 4) into a format that the BIM engine can recognize and import it to build the BIM model.
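The patent states only that the txt recognition results, together with the floor elevation, are converted into a format the BIM engine can read; it does not fix that format. The sketch below assumes one detection per line in the txt file (`class x y w h`, YOLO-normalized) and writes a simple CSV of component type, plan coordinates, dimensions and elevation that a Revit/Dynamo import script could consume; the class mapping, millimetre-per-pixel factor and CSV layout are assumptions.

```python
import csv

# Hypothetical mapping from YOLO class indices to BIM component families.
CLASS_NAMES = {0: "column", 1: "beam", 2: "wall", 3: "door", 4: "window"}

def export_for_bim(det_txt: str, out_csv: str, img_w: int, img_h: int,
                   mm_per_pixel: float, floor_elevation_mm: float) -> None:
    """Convert normalized YOLO detections to metric plan coordinates plus elevation."""
    rows = []
    with open(det_txt) as f:
        for line in f:
            cls, x, y, w, h = line.split()
            cx_mm = float(x) * img_w * mm_per_pixel   # plan X of the component centre
            cy_mm = float(y) * img_h * mm_per_pixel   # plan Y of the component centre
            w_mm = float(w) * img_w * mm_per_pixel    # component width
            l_mm = float(h) * img_h * mm_per_pixel    # component depth/length
            rows.append([CLASS_NAMES.get(int(cls), "unknown"),
                         cx_mm, cy_mm, w_mm, l_mm, floor_elevation_mm])
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["type", "x_mm", "y_mm", "width_mm", "length_mm", "elevation_mm"])
        writer.writerows(rows)

# export_for_bim("detections.txt", "components.csv", 4032, 3024, 0.25, 3000.0)
```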
As shown in fig. 2, the image preprocessing comprises the following steps:
1) Convert the construction blueprint photo from RGB mode to gray-scale mode with the formula:
Gray = R*0.299 + G*0.587 + B*0.114
where the RGB image is a true-color image, R, G and B are the red, green and blue basic colors respectively, and Gray is the gray value;
2) Compute the gray-level histogram of the image, take 255 divided by the gray-level range of the image as the equalization coefficient, multiply the coefficient into the three RGB channels at the corresponding pixel positions, and reconstruct the image to obtain the image training set;
3) Label the component boundaries of the image training set. Mark the component targets in the images, the labels containing the component type and position, to obtain the initial training set. Normalize the coordinates of the marked rectangular frames to the range 0-1. The YOLO label information is stored in a text file with the same name as the image and contains 5 parameters: the component category number, the X and Y coordinates of the component (rectangle) center, and the width and height of the rectangle. The coordinate normalization is computed as follows (a short sketch of this conversion follows the list below):
the center coordinates are:
x = (x_min + x_max) / (2*W),  y = (y_min + y_max) / (2*H)
the size of the rectangular frame is:
width = (x_max - x_min) / W,  height = (y_max - y_min) / H
where x and y are the center coordinates of the identified component, x_min and x_max are the minimum and maximum X coordinates of its bounding rectangle, y_min and y_max are the minimum and maximum Y coordinates, width and height are the normalized width and height of the identified component, and W and H are the pixel width and height of the image.
4) Apply data enhancement to the initial training set by flipping, rotation, scaling and cropping to obtain the enhanced training set.
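A minimal sketch of writing one YOLO-format label line from a pixel bounding box, following the normalization above; the function name and the example numbers are illustrative only.

```python
def yolo_label(class_id: int, x_min: float, y_min: float, x_max: float, y_max: float,
               img_w: int, img_h: int) -> str:
    """Return one YOLO label line: class, centre x, centre y, width, height (all 0-1)."""
    x_c = (x_min + x_max) / (2 * img_w)
    y_c = (y_min + y_max) / (2 * img_h)
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# A column marked at pixels (100, 200)-(180, 360) in a 1000 x 800 image:
print(yolo_label(0, 100, 200, 180, 360, 1000, 800))   # 0 0.140000 0.350000 0.080000 0.200000
```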
As shown in fig. 3, the optimal model is trained based on YOLO: set the initial weights and initial learning rate of the deep neural network model; apply batch normalization to the complete training set; input the batch-normalized complete training set into the iteration layer; compute the current mAP value, adjust the weights and learning rate, and repeat the batch normalization and input to the iteration layer; after a set number of iterations the loop terminates, model training is finished, and the deep convolutional neural network model with the highest mAP value is taken as the selected model.
The above description is only the preferred embodiment of the present invention. It should be noted that various modifications can be made by those skilled in the art without departing from the principles of the invention, and such modifications shall also be regarded as falling within the scope of the invention.

Claims (6)

1. A BIM rapid modeling method based on computer vision, characterized by comprising the following steps:
the first step: photographing and collecting the construction blueprint, shooting the blueprint in parallel at a certain distance, ensuring that the camera negative is parallel to the blueprint, and determining the conversion relation between the component dimensions in the picture and the actual dimensions;
the second step: image preprocessing, including image mode conversion, image pixel equalization, component boundary optimization, component bounding-box labeling and data enhancement;
the third step: obtaining the component types and corresponding positions from the images of the second step, and producing txt text containing the two-dimensional coordinates of each component and the corresponding component type;
the fourth step: training a neural network model with the data of the third step to obtain a neural network model that identifies targets on the construction blueprint;
the fifth step: inputting the construction drawing pictures into the deep convolutional neural network model, obtaining the types and position information of the various components, writing them into txt text, and integrating the floor elevation information with the component information output by the model;
the sixth step: generating the building information model in BIM engine software, converting the txt text of the fifth step into a format that the BIM engine can recognize, and importing it into the BIM model.
2. The BIM rapid modeling method based on computer vision as claimed in claim 1, wherein in the first step, the parallel shooting requirements are as follows:
the camera negative and the construction blueprint are two parallel planes;
the conversion relation between the photographed component dimensions on the construction blueprint and the actual dimensions is:
S_member = φ(f, pix, pro, s_act)
where S_member is the actual size of the identified component, φ is the conversion function, f is the camera focal length, pix is the physical size represented by one labeled pixel, pro is the drawing scale of the construction blueprint, and s_act is the component dimension measured on the blueprint.
3. The BIM rapid modeling method based on computer vision as claimed in claim 2, wherein in the second step, the picture preprocessing steps are as follows:
the first step: converting the construction blueprint photo from RGB mode to gray-scale mode;
the second step: performing image pixel equalization in gray-scale mode;
the third step: automatically optimizing the identified component boundaries with the Laplacian-of-Gaussian operator;
the fourth step: marking the component targets in the images, the labels containing the component type and position, to obtain the initial training set;
the fifth step: performing data enhancement on the initial training set by flipping, rotation, scaling and cropping to obtain the enhanced training set.
4. The BIM rapid modeling method based on computer vision as claimed in claim 3, wherein the gray-scale conversion is as follows:
Gray = R*0.299 + G*0.587 + B*0.114
where the RGB image is a true-color image, R, G and B are the red, green and blue basic colors respectively, and Gray is the gray value.
5. The BIM rapid modeling method based on computer vision as claimed in claim 3 or 4, wherein the image pixel equalization in gray-scale mode is as follows: compute the gray-level histogram, take 255 divided by the gray-level range of the image as the equalization coefficient, and multiply the coefficient into the three RGB channels at the corresponding pixel positions to reconstruct the image.
6. The BIM rapid modeling method based on computer vision as claimed in claim 5, wherein in the fourth step, the neural network model is trained as follows:
step 1: set the initial weights and the initial learning rate;
step 2: apply batch normalization to the complete training set;
step 3: input the batch-normalized complete training set into the iteration layer;
step 4: compute the current mAP value, i.e. the mean average precision over all target detection categories, then adjust the weights and learning rate and repeat steps 1 to 4;
step 5: after a set number of iterations, take the deep convolutional neural network model with the highest mAP value as the selected model.
CN202011479172.3A 2020-12-15 2020-12-15 BIM rapid modeling method based on computer vision Pending CN112613097A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011479172.3A CN112613097A (en) 2020-12-15 2020-12-15 BIM rapid modeling method based on computer vision

Publications (1)

Publication Number Publication Date
CN112613097A true CN112613097A (en) 2021-04-06

Family

ID=75239248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011479172.3A Pending CN112613097A (en) 2020-12-15 2020-12-15 BIM rapid modeling method based on computer vision

Country Status (1)

Country Link
CN (1) CN112613097A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020164282A1 (en) * 2019-02-14 2020-08-20 平安科技(深圳)有限公司 Yolo-based image target recognition method and apparatus, electronic device, and storage medium
CN111523167A (en) * 2020-04-17 2020-08-11 平安城市建设科技(深圳)有限公司 BIM model generation method, device, equipment and storage medium
CN111507416A (en) * 2020-04-21 2020-08-07 湖北马斯特谱科技有限公司 Smoking behavior real-time detection method based on deep learning
CN111914612A (en) * 2020-05-21 2020-11-10 淮阴工学院 Construction graph primitive self-adaptive identification method based on improved convolutional neural network
CN111985499A (en) * 2020-07-23 2020-11-24 东南大学 High-precision bridge apparent disease identification method based on computer vision

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113515799A (en) * 2021-07-07 2021-10-19 中铁电气化局集团有限公司 Batch arrangement method and device for equipment models of building information models
CN113655750A (en) * 2021-09-08 2021-11-16 北华航天工业学院 Building construction supervision system and method based on AI object detection algorithm
CN113655750B (en) * 2021-09-08 2023-08-18 北华航天工业学院 Building construction supervision system and method based on AI object detection algorithm
WO2023088087A1 (en) * 2021-11-19 2023-05-25 华为云计算技术有限公司 Design drawing conversion method and apparatus and related device
CN114840900A (en) * 2022-05-18 2022-08-02 滁州学院 Derivative BIM component automatic generation method based on i-GBDT technology
CN115097974A (en) * 2022-07-06 2022-09-23 成都建工第四建筑工程有限公司 Intelligent auxiliary consultation system and method for BIM (building information modeling)
CN115510530B (en) * 2022-09-20 2023-08-22 东南大学 Method for automatically constructing Revit three-dimensional model by CAD (computer aided design) plane drawing
CN115775116A (en) * 2023-02-13 2023-03-10 华设设计集团浙江工程设计有限公司 BIM-based road and bridge engineering management method and system
CN117685881A (en) * 2024-01-31 2024-03-12 成都建工第七建筑工程有限公司 Sensing and detecting system for concrete structure entity position and size deviation


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination