CN116630974A - Quick marking processing method and system for building image data - Google Patents

Quick marking processing method and system for building image data

Info

Publication number
CN116630974A
CN116630974A
Authority
CN
China
Prior art keywords
building
difference
image data
marking
machine vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310559962.XA
Other languages
Chinese (zh)
Other versions
CN116630974B (en)
Inventor
陈奕昆
唐陵衡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Zhiyun Urban Construction Technology Co ltd
Original Assignee
Guangdong Zhiyun Urban Construction Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Zhiyun Urban Construction Technology Co ltd
Priority to CN202310559962.XA
Publication of CN116630974A
Application granted
Publication of CN116630974B
Active legal status (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/70Labelling scene content, e.g. deriving syntactic or semantic representations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a rapid marking processing method and system for building image data, in the technical field of computer vision. The method comprises the following steps: S001, establishing a standard database for building acceptance; S002, building a building image marking template according to the building acceptance standards and the building design renderings, and matching the building to be marked with its corresponding template; S003, acquiring the set of differences between the actual building images and the design renderings based on a corrected machine vision recognition model; S004, transmitting the obtained differences to a difference evaluation unit to obtain a quality evaluation index for the building; and S005, marking the acquired differences and quality evaluation indices in the building image marking template to complete the rapid marking of the building image data.

Description

Quick marking processing method and system for building image data
Technical Field
The application relates to the technical field of computer vision, in particular to a rapid marking processing method and system for building image data.
Background
In building construction, work must be carried out according to the building drawings and building standards. After construction is finished, quality acceptance must be carried out on the building to verify construction quality. In existing building acceptance practice, acceptance is performed manually on site: problem locations are recorded, the findings are summarized after the inspection, acceptance reports are filled in, and the marking of building images is completed. This approach has the following problems: acceptance involves a large amount of manual identification and manual marking, and the repeated labor increases labor cost; manual identification of differences in building appearance is highly subjective, so the marking of building images is inaccurate; and when building image data are collected manually, the lack of marks on the image data leads to data confusion, which hinders the marking and acceptance of buildings.
Based on the above, a rapid marking approach for building images is needed that can quickly identify building quality data, process it in time to obtain difference data, and complete the rapid marking of building images.
Disclosure of Invention
To overcome the defects of the prior art, the embodiments of the application provide a rapid marking processing method and system for building image data. Quality acceptance items are established from the design standards of the building; actual building images are compared with a standard database by machine vision to obtain the set of differences between the actual images and the standard; the difference set is evaluated, and out-of-tolerance differences are emphasized and marked, solving the problem of inaccurate marking of building images in the prior art.
To achieve the above purpose, the present application provides the following technical solution: a rapid marking processing method for building image data, comprising the following steps:
Step S001, establishing a standard database for building acceptance: marking the types of items to be accepted by manual marking and setting a qualification standard for each item, thereby obtaining the standard database for building acceptance;
Step S002, building a building image marking template: constructing the building image marking template from the building acceptance standards and the building design renderings, and matching the building to be marked with its corresponding marking template;
Step S003, collecting actual building images: acquiring the set of differences between the actual building images and the building design renderings based on the corrected machine vision recognition model;
Step S004, evaluating the differences: transmitting the obtained differences to a difference evaluation unit to obtain a quality evaluation index for the building;
Step S005, data marking: marking the acquired differences and quality evaluation indices in the building image marking template to complete the rapid marking of the building image data.
Preferably, the correction of the machine vision recognition model includes the steps of:
Step S21, taking manually marked actual image data as training samples, with the task set R as input and the difference set CR as output;
Step S22, feature extraction performance verification: inputting the training samples and the standard images into the machine vision recognition model, extracting features from each to obtain feature vectors, and obtaining the feature extraction performance index TX;
Step S23, verification of difference recognition performance: inputting the samples into the machine vision recognition model, obtaining the difference set CR between the actual building images and the standard through feature comparison, and comparing the obtained difference set with the manually identified difference set to obtain the difference recognition performance parameter XCR;
Step S24, adjusting the model parameters so that the feature extraction performance parameter and the difference recognition performance parameter meet their thresholds, thereby obtaining the parameters of the corrected model.
Preferably, the feature extraction performance index TX satisfies a formula wherein w1i represents the weight coefficient of the item and Sri represents the feature similarity parameter of item r.
Preferably, the difference recognition performance parameter XCR satisfies a formula wherein pcs_i and auc_i are the two accuracy parameters of the difference recognition.
Preferably, the building design rendering is an effect drawing of the building and comprises both overall images and local images; to ensure the accuracy of machine vision recognition, the image resolution and brightness parameters of the building design renderings are required to be consistent.
Preferably, the machine vision recognition model is used for acquiring a difference set between actual building image data and a standard database, and comprises the following steps:
Step S11, correcting the machine vision recognition model: inputting the normalized standard database and the verification samples into the machine vision recognition model respectively, and adjusting the parameters of the model with the feature extraction performance index and the difference recognition performance value as thresholds, to complete the correction of the machine vision recognition model;
Step S12, acquiring building image data, and obtaining the item types to be checked from the standard database according to the basic information of the building, the item types to be checked forming a task set R = [r1, r2, …, rn], where ri is item identification information;
Step S13, processing the building image data with the corrected model to obtain the difference information corresponding to each task subset; if no difference exists, recording an empty set;
Step S14, recording the differences into a difference set CR = [cr1, cr2, …], where cr1 comprises item identification information, the position information of the difference, and difference image information, the difference image information comprising difference area information and difference degree information;
Step S15, verifying whether the task is completed; if not, returning to step S13 until the number of entries in the task set equals the number of entries in the difference set, thereby completing the project acceptance task.
Preferably, the feature similarity parameter Sri is obtained as follows: the image data comprise n items to be identified, each item comprising m features; the feature vector of each item obtained by the machine vision recognition model is denoted A = [a1, a2, …, am], and the feature vector of the corresponding item in the standard database is denoted B = [b1, b2, …, bm]; the feature similarity Sri of the item is then calculated from A and B.
Preferably, the accuracy parameter satisfies a formula in which TP represents the number of correctly identified differences and ZY represents the total number of differences in the standard database, and the acquired accuracy parameter satisfies a formula in which ZYs represents the total number of differences acquired by the machine vision recognition model.
Preferably, the difference evaluation module performs the following steps:
Step S31, dividing the building by region to obtain n regions, numbered 1, 2, …, n;
Step S32, each region has m items to be accepted, the items being divided into non-measured items and measurement items; the difference positions in each region are marked to obtain the difference set;
Step S33, obtaining the quality evaluation index Zhil_i of each region and matching weight coefficients w1i to the different differences;
Step S34, obtaining the quality index of the building project and marking the quality evaluation index ZL_i of the building project in the building image data.
Preferably, the difference range parameter Sai satisfies a formula in which Mi is the area parameter of the difference, Shu_l represents the number of differences, and μ1 and μ2 are coefficient constants.
Preferably, the region quality evaluation index satisfies a formula in which Sai is the difference range parameter and Sbi is the difference degree parameter.
Further, the quality evaluation index ZL_i of the building project satisfies a formula in which w2i is the preset weight coefficient of each building.
Preferably, the difference degree parameter Sbi is obtained as follows: standard acceptance criteria are set and the acceptance criterion of the item is obtained, with pb0 representing the image data of the item in its standard state; gradient anomaly criteria are set, with pb1, pb2 and pb3 representing image data of the item at different degrees of difference, and the value corresponding to each gradient denoted yt1, yt2, …, ytn; the image differences are evaluated for similarity using an image recognition technique to obtain the similarity between the actual image data and each gradient, and the value corresponding to the gradient with the highest similarity is taken as the value of the difference degree parameter Sbi.
To achieve the above purpose, the present application further provides the following technical solution: a rapid marking processing system for building image data, comprising a standard database construction module, a machine vision recognition model correction module, a building image acquisition module, a difference recognition and evaluation module, and a data marking module;
the standard database building module is used for building reference standards of building acceptance, and the acceptance standards comprise acceptance item types and acceptance qualification standards;
the machine vision recognition model correction module is used for finishing correction of a machine vision recognition model, and the machine vision recognition model is used for acquiring differences between an actual building image and a standard database building image;
the building image acquisition module is used for acquiring building image data, acquiring the building image data through a high-definition camera, and preprocessing the building image data to obtain a building image with unified standard;
the difference recognition and evaluation module acquires a difference set of building image data and a standard database based on the corrected machine vision recognition model, and evaluates the degree of the difference;
and the data marking module imports the acquired data into a building acceptance template to finish quick marking of building image data.
The application has the technical effects and advantages that:
the specific machine vision recognition model acquires the difference between the image data and the standard data, the accuracy of recognition is ensured through model correction, the rapid early warning of the building acceptance result is obtained through the evaluation of the acquired difference, the management of the building image data is increased by the marking of the early warning data, and the problem of inaccurate marking of the building image in the prior art is solved.
Drawings
FIG. 1 is a flow chart of the method of the present application.
FIG. 2 is a flow chart of the calibration of the machine vision recognition model of the present application.
Fig. 3 is a block diagram of the system architecture of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "module," "system," and the like as used herein are intended to include a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a module may be, but is not limited to: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of example, both an application running on a computing device and the computing device can be a module. One or more modules may be located in one process and/or thread of execution, and one module may be located on one computer and/or distributed between two or more computers.
Example 1
The embodiment provides a rapid marking processing method for building image data, as shown in fig. 1, comprising the following steps:
Step S001, establishing a standard database for building acceptance: marking the types of items to be accepted by manual marking and setting a qualification standard for each item, thereby obtaining the standard database for building acceptance;
Step S002, building a building image marking template: constructing the building image marking template from the building acceptance standards and the building design renderings, and matching the building to be marked with its corresponding marking template;
Step S003, collecting actual building images: acquiring the set of differences between the actual building images and the building design renderings based on the corrected machine vision recognition model;
Step S004, evaluating the differences: transmitting the obtained differences to a difference evaluation unit to obtain a quality evaluation index for the building;
Step S005, data marking: marking the acquired differences and quality evaluation indices in the building image marking template to complete the rapid marking of the building image data (an illustrative sketch of this flow is given below).
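By way of illustration only, the following Python sketch strings the five steps together; every function name and data structure in it is an assumption standing in for the units described above, not the disclosed implementation.

```python
from typing import Dict, List

# Minimal, illustrative skeleton of steps S001-S005. Every function below is a
# placeholder assumption, not the disclosed implementation.

def build_standard_database() -> Dict[str, dict]:
    # S001: acceptance item types and their qualification criteria (hand-marked)
    return {"wall_flatness": {"qualified_if": "deviation <= 8 mm"}}

def match_marking_template(building_id: str, standard_db: Dict[str, dict]) -> dict:
    # S002: marking template built from acceptance standards and design renderings
    return {"building": building_id, "items": list(standard_db), "marks": []}

def extract_differences(actual_images: List[str], template: dict) -> List[dict]:
    # S003: the corrected machine-vision model would compare actual images with
    # the renderings; here a canned difference record stands in for its output
    return [{"item": "wall_flatness", "area": 0.4, "degree": 2}]

def evaluate_differences(differences: List[dict]) -> float:
    # S004: difference evaluation unit -> a single building quality index
    return 1.0 - 0.1 * len(differences)

def mark_template(template: dict, differences: List[dict], quality: float) -> dict:
    # S005: write the differences and the quality index into the marking template
    template["marks"] = differences
    template["quality_index"] = quality
    return template

db = build_standard_database()
tpl = match_marking_template("building-A", db)
diffs = extract_differences(["img_001.jpg"], tpl)
print(mark_template(tpl, diffs, evaluate_differences(diffs)))
```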
As shown in fig. 2, the correction of the machine vision recognition model includes the steps of:
Step S21, taking manually marked actual image data as training samples, with the task set R as input and the difference set CR as output;
Step S22, feature extraction performance verification: inputting the training samples and the standard images into the machine vision recognition model, extracting features from each to obtain feature vectors, and obtaining the feature extraction performance index TX;
Step S23, verification of difference recognition performance: inputting the samples into the machine vision recognition model, obtaining the difference set CR between the actual building images and the standard through feature comparison, and comparing the obtained difference set with the manually identified difference set to obtain the difference recognition performance parameter XCR;
Step S24, adjusting the model parameters so that the feature extraction performance parameter and the difference recognition performance parameter meet their thresholds, thereby obtaining the parameters of the corrected model.
Further, the feature extraction performance index TX satisfies a formula wherein w1i represents the weight coefficient of the item and Sri represents the feature similarity parameter of item r.
Further, the difference recognition performance parameter XCR satisfies a formula wherein pcs_i and auc_i are the two accuracy parameters of the difference recognition.
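Because the formulas for TX and XCR are referred to but not reproduced above, the following sketch assumes TX is a weight-averaged feature similarity and XCR an F1-style combination of the two accuracy parameters; both functional forms are illustrative assumptions.

```python
def feature_extraction_index(weights, similarities):
    """TX: assumed to be the weight-averaged per-item feature similarity Sri
    with item weights w1i (an assumption; the original formula is not shown)."""
    total_w = sum(weights)
    return sum(w * s for w, s in zip(weights, similarities)) / total_w if total_w else 0.0

def difference_recognition_index(pcs, auc):
    """XCR: assumed to combine the two accuracy parameters pcs_i and auc_i as
    an F1-style harmonic mean (an illustrative choice only)."""
    return 2 * pcs * auc / (pcs + auc) if (pcs + auc) else 0.0

# Step S24 (assumed reading): tune model parameters until both indices clear
# preset thresholds.
tx = feature_extraction_index([0.5, 0.3, 0.2], [0.92, 0.85, 0.78])
xcr = difference_recognition_index(pcs=0.90, auc=0.88)
print(tx, xcr, tx >= 0.80 and xcr >= 0.85)
```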
Furthermore, the building design rendering is the effect drawing of the building and comprises both overall images and local images; to ensure the accuracy of machine vision recognition, the image resolution and brightness parameters of the building design renderings are required to be consistent.
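One possible preprocessing step for enforcing consistent resolution and brightness across the renderings, sketched with Pillow and NumPy; the target size, target mean brightness and file name are arbitrary example values, not values from the disclosure.

```python
import numpy as np
from PIL import Image

def normalize_rendering(path, size=(1920, 1080), target_mean=128.0):
    """Resize a design rendering to a common resolution and rescale its overall
    brightness toward a common mean grey level (example values only)."""
    img = Image.open(path).convert("RGB").resize(size)
    arr = np.asarray(img, dtype=np.float32)
    mean = arr.mean()
    if mean > 0:
        arr = arr * (target_mean / mean)
    return np.clip(arr, 0, 255).astype(np.uint8)

# Usage with a hypothetical file name:
# normalized = normalize_rendering("facade_rendering.png")
```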
Further, the machine vision recognition model is used for acquiring the difference set between the actual building image data and the standard database, through the following steps:
Step S11, correcting the machine vision recognition model: inputting the normalized standard database and the verification samples into the machine vision recognition model respectively, and adjusting the parameters of the model with the feature extraction performance index and the difference recognition performance value as thresholds, to complete the correction of the machine vision recognition model;
Step S12, acquiring building image data, and obtaining the item types to be checked from the standard database according to the basic information of the building, the item types to be checked forming a task set R = [r1, r2, …, rn], where ri is item identification information;
Step S13, processing the building image data with the corrected model to obtain the difference information corresponding to each task subset; if no difference exists, recording an empty set;
Step S14, recording the differences into a difference set CR = [cr1, cr2, …], where cr1 comprises item identification information, the position information of the difference, and difference image information, the difference image information comprising difference area information and difference degree information;
Step S15, verifying whether the task is completed; if not, returning to step S13 until the number of entries in the task set equals the number of entries in the difference set, thereby completing the project acceptance task (a sketch of this loop is given below).
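A compact sketch of the acquisition loop of steps S12 to S15; the recognition model is represented by a plain callable, and the record fields mirror the description of cr1 above. All names are illustrative assumptions.

```python
from typing import Callable, Dict, List, Optional

def acquire_difference_set(
    task_set: List[str],
    images: Dict[str, object],
    recognize: Callable[[str, object], Optional[dict]],
) -> List[dict]:
    """Steps S12-S15: walk the task set R and collect one record per item.
    `recognize` stands in for the corrected machine-vision model and returns
    None (recorded here as an empty difference) when no difference is found."""
    cr: List[dict] = []
    for item_id in task_set:                      # S13: process each task
        diff = recognize(item_id, images.get(item_id))
        cr.append(diff if diff is not None else {"item": item_id, "empty": True})
    assert len(cr) == len(task_set)               # S15: task count == record count
    return cr

# Toy usage with a stub model:
stub = lambda item, img: ({"item": item, "position": (10, 20), "area": 0.3,
                           "degree": 1} if item == "r1" else None)
print(acquire_difference_set(["r1", "r2"], {}, stub))
```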
Further, the feature similarity parameter Sri is obtained as follows: the image data comprise n items to be identified, each item comprising m features; the feature vector of each item obtained by the machine vision recognition model is denoted A = [a1, a2, …, am], and the feature vector of the corresponding item in the standard database is denoted B = [b1, b2, …, bm]; the feature similarity Sri of the item is then calculated from A and B.
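The similarity formula itself is not reproduced above; one common choice consistent with the description (two feature vectors A and B of length m) is cosine similarity, assumed in the sketch below.

```python
import numpy as np

def feature_similarity(a, b):
    """Sri: assumed here to be the cosine similarity between the item's feature
    vector A (from the actual image) and B (from the standard database)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

print(feature_similarity([0.2, 0.8, 0.1], [0.25, 0.75, 0.05]))
```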
Further, the accuracy parameter satisfies a formula in which TP represents the number of correctly identified differences and ZY represents the total number of differences in the standard database, and the acquired accuracy parameter satisfies a formula in which ZYs represents the total number of differences acquired by the machine vision recognition model.
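Reading the description literally, the first parameter divides the correctly identified differences TP by the ground-truth total ZY (a recall-like rate) and the second divides TP by the model-reported total ZYs (a precision-like rate); the sketch encodes that reading, which is an interpretation rather than the reproduced formulas.

```python
def pcs(tp: int, zy: int) -> float:
    """Recall-like accuracy: correctly identified differences / differences in
    the standard (ground-truth) set ZY."""
    return tp / zy if zy else 0.0

def auc_like(tp: int, zys: int) -> float:
    """Precision-like accuracy: correctly identified differences / differences
    ZYs reported by the machine-vision model."""
    return tp / zys if zys else 0.0

print(pcs(tp=18, zy=20), auc_like(tp=18, zys=22))
```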
Further, the difference evaluation module performs the following steps:
Step S31, dividing the building by region to obtain n regions, numbered 1, 2, …, n;
Step S32, each region has m items to be accepted, the items being divided into non-measured items and measurement items; the difference positions in each region are marked to obtain the difference set;
Step S33, obtaining the quality evaluation index Zhil_i of each region and matching weight coefficients w1i to the different differences;
Step S34, obtaining the quality index of the building project and marking the quality evaluation index ZL_i of the building project in the building image data.
Further, the difference range parameter Sai satisfies a formula in which Mi is the area parameter of the difference, Shu_l represents the number of differences, and μ1 and μ2 are coefficient constants.
Further, the region quality evaluation index satisfies a formula in which Sai is the difference range parameter and Sbi is the difference degree parameter.
Further, the quality evaluation index ZL_i of the building project satisfies a formula in which w2i is the preset weight coefficient of each building.
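The three formulas above survive only as their variable descriptions, so the sketch below assumes simple linear and weighted forms: Sai as mu1 times the total difference area plus mu2 times the difference count, the region index Zhil_i as a weighted sum of Sai times Sbi terms, and the project index ZL_i as a weighted sum over regions. All three functional forms are assumptions for illustration.

```python
def difference_range(areas, mu1=1.0, mu2=0.5):
    """Sai: assumed to grow linearly with the total difference area (Mi summed)
    and the difference count Shu_l, weighted by mu1 and mu2."""
    return mu1 * sum(areas) + mu2 * len(areas)

def region_quality_index(diffs, item_weights):
    """Zhil_i: assumed weighted sum over a region's differences, each scored by
    its range parameter Sai times its degree parameter Sbi."""
    return sum(w * difference_range(d["areas"]) * d["degree"]
               for w, d in zip(item_weights, diffs))

def project_quality_index(region_indices, region_weights):
    """ZL_i: assumed weighted sum of the region indices with weights w2i."""
    return sum(w * z for w, z in zip(region_weights, region_indices))

region = [{"areas": [0.2, 0.1], "degree": 2}, {"areas": [0.05], "degree": 1}]
z1 = region_quality_index(region, item_weights=[0.6, 0.4])
print(z1, project_quality_index([z1, 0.3], region_weights=[0.7, 0.3]))
```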
Further, the difference degree parameter Sbi is obtained as follows: standard acceptance criteria are set and the acceptance criterion of the item is obtained, with pb0 representing the image data of the item in its standard state; gradient anomaly criteria are set, with pb1, pb2 and pb3 representing image data of the item at different degrees of difference, and the value corresponding to each gradient denoted yt1, yt2, …, ytn; the image differences are evaluated for similarity using an image recognition technique to obtain the similarity between the actual image data and each gradient, and the value corresponding to the gradient with the highest similarity is taken as the value of the difference degree parameter Sbi.
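The gradient-matching procedure for Sbi can be sketched directly: compare the actual image patch against the reference patches for each anomaly gradient and take the value of the most similar gradient. The similarity measure used below (normalised correlation of grey-level arrays) is an assumed stand-in for the unspecified image recognition technique.

```python
import numpy as np

def _similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Assumed measure: normalised correlation between two equal-size grey patches.
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def difference_degree(actual: np.ndarray, gradients: list, values: list) -> float:
    """Sbi: the value yt_k attached to the gradient reference pb_k that is most
    similar to the actual image patch (gradients and values given in order)."""
    scores = [_similarity(actual, g) for g in gradients]
    return values[int(np.argmax(scores))]

rng = np.random.default_rng(0)
pb = [rng.random((8, 8)) for _ in range(3)]            # stand-ins for pb1..pb3
actual_patch = pb[1] + 0.01 * rng.random((8, 8))       # closest to gradient 2
print(difference_degree(actual_patch, pb, [1, 2, 3]))  # expected: 2
```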
The embodiment further provides a rapid marking processing system for building image data, as shown in fig. 3, comprising the following modules: a standard database building module, a machine vision recognition model correction module, a building image acquisition module, a difference recognition and evaluation module, and a data marking module;
the standard database building module is used for building reference standards of building acceptance, and the acceptance standards comprise acceptance item types and acceptance qualification standards;
the machine vision recognition model correction module is used for finishing correction of a machine vision recognition model, and the machine vision recognition model is used for acquiring differences between an actual building image and a standard database building image;
the building image acquisition module is used for acquiring building image data, acquiring the building image data through a high-definition camera, and preprocessing the building image data to obtain a building image with unified standard;
the difference recognition and evaluation module acquires a difference set of building image data and a standard database based on the corrected machine vision recognition model, and evaluates the degree of the difference;
and the data marking module imports the acquired data into a building acceptance template to finish quick marking of building image data.
Further, the difference recognition process in the difference recognition and evaluation module includes a data positioning unit which, according to the position information of the data to be recognized, matches the data to be recognized with the standard and the building acceptance template.
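A small sketch of the kind of position matching the data positioning unit could perform: given a difference's pixel position and the template's named regions (represented here as axis-aligned boxes, an assumed layout), return the region the difference falls in.

```python
from typing import Dict, Optional, Tuple

Box = Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max), assumed layout

def locate_difference(pos: Tuple[int, int], regions: Dict[str, Box]) -> Optional[str]:
    """Return the name of the template region whose box contains the difference
    position, or None if it falls outside every region."""
    x, y = pos
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

template_regions = {"facade_north": (0, 0, 1000, 600), "roof": (0, 601, 1000, 800)}
print(locate_difference((420, 650), template_regions))  # -> "roof"
```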
The present embodiment provides only one implementation and does not specifically limit the protection scope of the present application.
Finally: the foregoing description of the preferred embodiments of the application is not intended to limit the application to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and principles of the application are intended to be included within the scope of the application.

Claims (9)

1. A rapid marking processing method for building image data, characterized in that it comprises the following steps:
Step S001, establishing a standard database for building acceptance: marking the types of items to be accepted by manual marking and setting a qualification standard for each item, thereby obtaining the standard database for building acceptance;
Step S002, building a building image marking template: constructing the building image marking template from the building acceptance standards and the building design renderings, and matching the building to be marked with its corresponding marking template;
Step S003, collecting actual building images: acquiring the set of differences between the actual building images and the building design renderings based on a corrected machine vision recognition model, wherein the correction of the machine vision recognition model comprises the following steps:
Step S21, taking manually marked actual image data as training samples, with the task set R as input and the difference set CR as output; Step S22, feature extraction performance verification: inputting the training samples and the standard images into the machine vision recognition model, extracting features from each to obtain feature vectors, and obtaining the feature extraction performance index TX, which satisfies a formula wherein w1i represents the weight coefficient of the item and Sri represents the feature similarity parameter of item r; Step S23, verification of difference recognition performance: inputting the samples into the machine vision recognition model, obtaining the difference set CR between the actual building images and the standard through feature comparison, and comparing the obtained difference set with the manually identified difference set to obtain the difference recognition performance parameter XCR, which satisfies a formula wherein pcs_i and auc_i are the two accuracy parameters of the difference recognition; Step S24, adjusting the model parameters so that the feature extraction performance parameter and the difference recognition performance parameter meet their thresholds, thereby obtaining the parameters of the corrected model;
Step S004, evaluating the differences: transmitting the obtained differences to a difference evaluation unit to obtain a quality evaluation index for the building;
Step S005, data marking: marking the acquired differences and quality evaluation indices in the building image marking template to complete the rapid marking of the building image data.
2. The rapid marking processing method for building image data according to claim 1, characterized in that: the building design rendering is an effect drawing of the building and comprises both overall images and local images; to ensure the accuracy of machine vision recognition, the image resolution and brightness parameters of the building design renderings are required to be consistent.
3. The rapid marking process of building image data according to claim 1, wherein: the machine vision recognition model is used for acquiring a difference set of actual building image data and a standard database, and comprises the following steps:
Step S11, correcting the machine vision recognition model: inputting the normalized standard database and the verification samples into the machine vision recognition model respectively, and adjusting the parameters of the model with the feature extraction performance index and the difference recognition performance value as thresholds, to complete the correction of the machine vision recognition model;
Step S12, acquiring building image data, and obtaining the item types to be checked from the standard database according to the basic information of the building, the item types to be checked forming a task set R = [r1, r2, …, rn], where ri is item identification information;
Step S13, processing the building image data with the corrected model to obtain the difference information corresponding to each task subset; if no difference exists, recording an empty set;
Step S14, recording the differences into a difference set CR = [cr1, cr2, …], where cr1 comprises item identification information, the position information of the difference, and difference image information, the difference image information comprising difference area information and difference degree information;
Step S15, verifying whether the task is completed; if not, returning to step S13 until the number of entries in the task set equals the number of entries in the difference set, thereby completing the project acceptance task.
4. The rapid marking processing method for building image data according to claim 1, characterized in that: the feature similarity parameter Sri is obtained as follows: the image data comprise n items to be identified, each item comprising m features; the feature vector of each item obtained by the machine vision recognition model is denoted A = [a1, a2, …, am], and the feature vector of the corresponding item in the standard database is denoted B = [b1, b2, …, bm]; the feature similarity Sri of the item is then calculated from A and B.
5. The rapid marking processing method for building image data according to claim 4, characterized in that: the accuracy parameter satisfies a formula in which TP represents the number of correctly identified differences and ZY represents the total number of differences in the standard database, and the acquired accuracy parameter satisfies a formula in which ZYs represents the total number of differences acquired by the machine vision recognition model.
6. The rapid marking process of building image data according to claim 1, wherein: the difference evaluation module comprises the following steps:
Step S31, dividing the building by region to obtain n regions, numbered 1, 2, …, n;
Step S32, each region has m items to be accepted, the items being divided into non-measured items and measurement items; the difference positions in each region are marked to obtain the difference set;
Step S33, obtaining the quality evaluation index of each region, including the difference area and the difference degree, and matching weight coefficients to the different differences, the quality evaluation index satisfying a formula in which Sai is the difference range parameter, Sbi is the difference degree parameter and w1i is the preset weight coefficient of each item, and the difference range parameter Sai satisfying a formula in which Mi is the area parameter of the difference, Shu_l represents the number of differences, and μ1 and μ2 are coefficient constants;
Step S34, obtaining the quality index of the building project and marking the quality evaluation index of the building project in the building image data, the index satisfying a formula in which w2i is the preset weight coefficient of each building.
7. The rapid marking processing method for building image data according to claim 4, characterized in that: the difference degree parameter is obtained as follows: standard acceptance criteria are set and the acceptance criterion of the item is obtained, with pb0 representing the image data of the item in its standard state; gradient anomaly criteria are set, with pb1, pb2 and pb3 representing image data of the item at different degrees of difference, and the value corresponding to each gradient denoted yt1, yt2, …, ytn; the image differences are evaluated for similarity using an image recognition technique to obtain the similarity between the actual image data and each gradient, and the value corresponding to the gradient with the highest similarity is taken as the value of the difference degree parameter Sbi.
8. A rapid marking processing system for building image data, for executing the rapid marking processing method for building image data according to any one of claims 1 to 7, characterized in that it comprises the following modules:
the standard database building module is used for building reference standards of building acceptance, and the acceptance standards comprise acceptance item types and acceptance qualification standards;
the machine vision recognition model correction module is used for finishing correction of a machine vision recognition model, and the machine vision recognition model is used for acquiring differences between an actual building image and a standard database building image;
the building image acquisition module is used for acquiring building image data, acquiring the building image data through a high-definition camera, and preprocessing the building image data to obtain a building image with unified standard;
the difference recognition and evaluation module acquires a difference set of building image data and a standard database based on the corrected machine vision recognition model, and evaluates the degree of the difference;
and the data marking module imports the acquired data into a building acceptance template to finish quick marking of building image data.
9. A rapid marking processing system for building image data as claimed in claim 8, wherein: the difference recognition process in the difference recognition and evaluation module comprises a data positioning unit, and the data positioning unit is used for matching the standard and the building acceptance template for the image data to be recognized according to the position information of the data to be recognized.
CN202310559962.XA 2023-05-17 2023-05-17 Quick marking processing method and system for building image data Active CN116630974B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310559962.XA CN116630974B (en) 2023-05-17 2023-05-17 Quick marking processing method and system for building image data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310559962.XA CN116630974B (en) 2023-05-17 2023-05-17 Quick marking processing method and system for building image data

Publications (2)

Publication Number Publication Date
CN116630974A (en) 2023-08-22
CN116630974B (en) 2024-02-02

Family

ID=87620665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310559962.XA Active CN116630974B (en) 2023-05-17 2023-05-17 Quick marking processing method and system for building image data

Country Status (1)

Country Link
CN (1) CN116630974B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117437235A (en) * 2023-12-21 2024-01-23 四川新康意众申新材料有限公司 Plastic film quality detection method based on image processing
CN118245855A (en) * 2024-05-30 2024-06-25 山东天意机械股份有限公司 Artificial intelligence-based building board quality classification method

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358343A (en) * 2017-06-28 2017-11-17 中国能源建设集团甘肃省电力设计院有限公司 Power engineering safe early warning method based on view data feature difference
US20180082414A1 (en) * 2016-09-21 2018-03-22 Astralink Ltd. Methods Circuits Assemblies Devices Systems Platforms and Functionally Associated Machine Executable Code for Computer Vision Assisted Construction Site Inspection
CN108121704A (en) * 2016-11-28 2018-06-05 星际空间(天津)科技发展有限公司 A kind of three-dimensional final acceptance of construction system
WO2019137814A1 (en) * 2018-01-09 2019-07-18 Robert Bosch Gmbh Method for monitoring a construction site
CN111650863A (en) * 2020-06-09 2020-09-11 湖南城市学院 Engineering safety monitoring instrument and monitoring method
CN111986191A (en) * 2020-08-31 2020-11-24 江苏工程职业技术学院 Building construction acceptance method and system
US20210374297A1 (en) * 2020-05-26 2021-12-02 Clair Marie McDade Building quality indexing system
CN113869681A (en) * 2021-09-17 2021-12-31 湖南方圆工程咨询监理有限公司 BIM and laser scanning based house building acceptance method, device and system
CN114745399A (en) * 2022-02-23 2022-07-12 安徽金蓓检测认证股份有限公司 Intelligent building engineering quality inspection and acceptance management system
WO2022260165A1 (en) * 2021-06-10 2022-12-15 ダイキン工業株式会社 Construction acceptance inspection method, device, and system

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180082414A1 (en) * 2016-09-21 2018-03-22 Astralink Ltd. Methods Circuits Assemblies Devices Systems Platforms and Functionally Associated Machine Executable Code for Computer Vision Assisted Construction Site Inspection
CN108121704A (en) * 2016-11-28 2018-06-05 星际空间(天津)科技发展有限公司 A kind of three-dimensional final acceptance of construction system
CN107358343A (en) * 2017-06-28 2017-11-17 中国能源建设集团甘肃省电力设计院有限公司 Power engineering safe early warning method based on view data feature difference
WO2019137814A1 (en) * 2018-01-09 2019-07-18 Robert Bosch Gmbh Method for monitoring a construction site
US20210374297A1 (en) * 2020-05-26 2021-12-02 Clair Marie McDade Building quality indexing system
CN111650863A (en) * 2020-06-09 2020-09-11 湖南城市学院 Engineering safety monitoring instrument and monitoring method
CN111986191A (en) * 2020-08-31 2020-11-24 江苏工程职业技术学院 Building construction acceptance method and system
WO2022260165A1 (en) * 2021-06-10 2022-12-15 ダイキン工業株式会社 Construction acceptance inspection method, device, and system
CN113869681A (en) * 2021-09-17 2021-12-31 湖南方圆工程咨询监理有限公司 BIM and laser scanning based house building acceptance method, device and system
CN114745399A (en) * 2022-02-23 2022-07-12 安徽金蓓检测认证股份有限公司 Intelligent building engineering quality inspection and acceptance management system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ALI KATEBI et al.: "Acceptance model of precast concrete components in building construction based on Technology Acceptance Model (TAM) and Technology, Organization, and Environment (TOE) framework", Journal of Building Engineering, pages 1-17 *
汪雅婕: "Research on quality inspection and acceptance methods for oblique photography real-scene 3D model products" (倾斜摄影实景三维模型成果质量检查与验收方法研究), Modern Surveying and Mapping (现代测绘), no. 04, pages 20-23 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117437235A (en) * 2023-12-21 2024-01-23 四川新康意众申新材料有限公司 Plastic film quality detection method based on image processing
CN117437235B (en) * 2023-12-21 2024-03-12 四川新康意众申新材料有限公司 Plastic film quality detection method based on image processing
CN118245855A (en) * 2024-05-30 2024-06-25 山东天意机械股份有限公司 Artificial intelligence-based building board quality classification method

Also Published As

Publication number Publication date
CN116630974B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN116630974B (en) Quick marking processing method and system for building image data
Korn et al. Color supported generalized-ICP
CN110188769B (en) Method, device, equipment and storage medium for auditing key point labels
CN108154498A (en) A kind of rift defect detecting system and its implementation
CN104952753A (en) Measurement Sampling Method
CN116434266B (en) Automatic extraction and analysis method for data information of medical examination list
CN105334185A (en) Spectrum projection discrimination-based near infrared model maintenance method
CN114937264B (en) Transformer manufacturing assembly process monitoring and analyzing system based on artificial intelligence
CN112163624A (en) Data abnormity judgment method and system based on deep learning and extreme value theory
CN115082472A (en) Quality detection method and system for hub mold casting molding product
TWI694250B (en) Surface defect detection system and method thereof
CN109145752A (en) For assessing the method, apparatus, equipment and medium of object detection and track algorithm
CN116071335A (en) Wall surface acceptance method, device, equipment and storage medium
CN113033469B (en) Tool damage identification method, device, equipment, system and readable storage medium
CN111368792B (en) Feature point labeling model training method and device, electronic equipment and storage medium
Demos et al. Lagrange regularisation approach to compare nested data sets and determine objectively financial bubbles' inceptions
CN117576098B (en) Cell division balance evaluation method and device based on segmentation
CN117911412B (en) Dimension detection method and system for caterpillar track section for engineering machinery
JP7343646B1 (en) How to collect teacher data
CN116611850B (en) System for detecting and tracing engine assembly quality curve
CN117078681B (en) Three-dimensional simulation method and system for dispensing track
CN117851961B (en) Method for detecting production quality of parachute-free carbon fiber throwing box
CN118536881B (en) Dynamic evaluation method, system and storage medium for engineering construction quality
CN117115488B (en) Water meter detection method based on image processing
CN116664699B (en) Automobile production line data management system and method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant