CN115713664B - Intelligent marking method and device for fire inspection and acceptance - Google Patents
- Publication number
- CN115713664B CN115713664B CN202211555374.0A CN202211555374A CN115713664B CN 115713664 B CN115713664 B CN 115713664B CN 202211555374 A CN202211555374 A CN 202211555374A CN 115713664 B CN115713664 B CN 115713664B
- Authority
- CN
- China
- Prior art keywords
- information
- image
- screening
- temporary
- structural
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The embodiment of the invention provides an intelligent marking method and device for fire inspection and acceptance. The method comprises the following steps: acquiring attribute information of the structural patches of a target building; preprocessing the structural patches to establish a reference image library; collecting image information of the target building through a shooting module, and determining the measurement information of the image features and the shooting parameters; performing self-comparison on the image information to select temporary image information, and screening a first screening image result from the reference image library based on the temporary shooting parameters; performing image feature matching between the first screening image result and the temporary image information, comparing their measurement information when the image features match, and outputting a second screening image result when the measurement information also matches; and determining the target position based on the attribute information of the screening result and entering the labeling information at that target position on the target building structure drawing. By adopting the method, acceptance problems can be accurately positioned on the engineering drawing, providing a data basis for later problem analysis, tracing and evaluation, and making fire-fighting completion acceptance transparent, fair and efficient.
Description
Technical Field
The invention relates to the technical field of digital Internet, in particular to an intelligent marking method and device for fire inspection and acceptance.
Background
Construction engineering fire inspection and acceptance refers to on-site spot checks of the appearance of building fire protection (extinguishing) facilities at the construction site, carried out in accordance with fire-related laws and regulations, national engineering construction fire technical standards, fire-related as-built drawings and fire protection design review opinions, together with on-site sampling measurement, using professional instruments, of measurable indexes such as distance, height, width, length, area and thickness.
At present, fire-fighting acceptance mainly follows the traditional working mode of manually marking paper drawings and reviewing them on site. Because fire acceptance projects involve drawings from many disciplines, such as architecture, water supply, electricity and HVAC, this paper-based on-site acceptance mode has several problems: key nodes at the completion acceptance site cannot be checked in real time; the locations of problems are unclear; problem lists and rectification feedback are submitted back and forth repeatedly; data archiving and retrieval are cumbersome; the relevant acceptance content lacks objective, accurate and rigorous records; later tracing is difficult; and the acceptance process is neither fair nor transparent.
Disclosure of Invention
Aiming at the problems in the prior art, the embodiment of the invention provides an intelligent marking method and device for fire inspection and acceptance.
The embodiment of the invention provides an intelligent marking method for fire inspection and acceptance, which comprises the following steps:
acquiring a three-dimensional solid model of a target building, extracting a structural surface patch of a unit space according to the three-dimensional solid model, and determining attribute information of the structural surface patch by combining model information of the three-dimensional solid model;
preprocessing the structural surface patch based on the attribute information, and establishing a corresponding reference image library through the preprocessed structural surface patch, wherein the preprocessing comprises encoding, registering, classifying and storing the structural surface patch;
acquiring image information of a unit space of the target building through a shooting module, and determining measurement information of image features corresponding to the image information and shooting parameters of the shooting module based on the image information;
based on the similarity of image features in the image information, carrying out self-comparison on the image information, selecting temporary image information according to the self-comparison result, acquiring temporary shooting parameters corresponding to the temporary image information, and screening from the reference image library based on the temporary shooting parameters to obtain a first screening image result;
acquiring measurement information of image features corresponding to the temporary image information, performing image feature matching on the first screening image result and the temporary image information, comparing the first screening image result with the measurement information of the temporary image information when the image features are matched, and outputting a second screening image result when the first screening image result is matched with the measurement information of the temporary image information;
And acquiring attribute information of the second screening image result and pre-input labeling information, determining a corresponding target position based on the attribute information of the second screening image result, and inputting the labeling information to the target position of the target building structure drawing.
In one embodiment, the attribute information includes:
azimuth information, structure information, association number information.
In one embodiment, the method further comprises:
acquiring a preset coding rule, and carrying out serialization coding on the structural surface patch by combining the association number information;
obtaining a structure drawing corresponding to the target building, superposing the positional relationship between the structure drawing and the three-dimensional solid model, and registering the position information of the structural patch in the structure drawing by combining the azimuth information, the azimuth angle information and the structure information;
creating a hierarchical directory of the structural patches based on the spatial information of the unit space, and determining the hierarchical directory corresponding to the structural patches;
and determining a corresponding database storage field based on the coding rule, and storing the structural patch in a corresponding position of a reference image library according to the database storage field.
In one embodiment, the method further comprises:
acquiring feature matching factors in the first screening image result and the temporary image information, and matching the feature matching factors of the first screening image result with those of the temporary image information, wherein the feature matching factors comprise structural factors, crease factors, inflection point factors and element factors;
the comparing the measurement information of the first screening image result and the temporary image information includes:
and comparing the first screening image result with the measurement information of the feature matching factor of the temporary image information.
In one embodiment, the shooting parameters include:
shooting position, shooting angle and elevation information;
screening from the reference image library based on the temporary shooting parameters to obtain a first screening image result, wherein the method comprises the following steps:
and determining an azimuth interval and a position interval corresponding to the temporary image information based on the shooting position, the shooting angle and the elevation information corresponding to the temporary image information, and retrieving a corresponding first screening image result from the reference image library through the azimuth interval and the position interval.
In one embodiment, the method further comprises:
And when the number of the images of the second screening image result is larger than 1, sending the second screening image result to a binding terminal corresponding to the shooting module, and determining the second screening image result according to the feedback information of the binding terminal.
The embodiment of the invention provides an intelligent marking device for fire inspection and acceptance, which comprises the following components:
the acquisition module is used for acquiring a three-dimensional solid model of a target building, extracting a structural surface patch of a unit space according to the three-dimensional solid model, and determining attribute information of the structural surface patch by combining model information of the three-dimensional solid model;
the preprocessing module is used for preprocessing the structural surface patches based on the attribute information, and establishing a corresponding reference image library through the preprocessed structural surface patches, wherein the preprocessing comprises encoding, registering, classifying and storing of the structural surface patches;
the shooting module is used for acquiring the image information of the unit space of the target building through the shooting module and determining the measurement information of the image characteristics corresponding to the image information and the shooting parameters of the shooting module based on the image information;
the first screening module is used for carrying out self-comparison on the image information based on the image feature similarity in the image information, selecting temporary image information according to the self-comparison result, acquiring temporary shooting parameters corresponding to the temporary image information, and screening from the reference image library based on the temporary shooting parameters to obtain a first screening image result;
The second screening module is used for acquiring the measurement information of the image features corresponding to the temporary image information, carrying out image feature matching on the first screening image result and the temporary image information, comparing the first screening image result with the measurement information of the temporary image information when the image features are matched, and outputting a second screening image result when the first screening image result is matched with the measurement information of the temporary image information;
and the labeling module is used for acquiring the attribute information of the second screening image result and the pre-input labeling information, determining a corresponding target position based on the attribute information of the second screening image result, and inputting the labeling information to the target position of the target building structure drawing.
In one embodiment, the apparatus further comprises:
the coding module is used for acquiring a preset coding rule and carrying out serialization coding on the structural patch by combining the association number information;
the registration module is used for acquiring a structural drawing corresponding to the target building, superposing the positional relationship between the structural drawing and the three-dimensional solid model, and registering the positional information of the structural patch on the structural drawing by combining the azimuth information, the azimuth angle information and the structural information;
The classification module is used for creating a hierarchical directory of the structural patches based on the spatial information of the unit space and determining the hierarchical directory corresponding to the structural patches;
and the storage module is used for determining a corresponding database storage field based on the coding rule and storing the structural patch in a corresponding position of a reference image library according to the database storage field.
The embodiment of the invention provides electronic equipment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the steps of the intelligent marking method for fire fighting acceptance when executing the program.
The embodiment of the invention provides a non-transitory computer readable storage medium, on which a computer program is stored, which when being executed by a processor, realizes the steps of the intelligent marking method for fire fighting acceptance.
According to the intelligent marking method and device for fire inspection and acceptance, a three-dimensional solid model of a target building is obtained, the structural patches of each unit space are extracted from the three-dimensional solid model, and the attribute information of the structural patches is determined by combining the model information of the three-dimensional solid model. Based on the attribute information, the structural patches are preprocessed, and a corresponding reference image library is established from the preprocessed structural patches, the preprocessing comprising encoding, registering, classifying and storing the structural patches. Image information of a unit space of the target building is collected through a shooting module, and the measurement information of the image features corresponding to the image information and the shooting parameters of the shooting module are determined based on the image information. Based on the similarity of image features in the image information, the image information is self-compared, temporary image information is selected according to the self-comparison result, the temporary shooting parameters corresponding to the temporary image information are acquired, and a first screening image result is screened from the reference image library based on the temporary shooting parameters. The measurement information of the image features corresponding to the temporary image information is acquired, image feature matching is performed between the first screening image result and the temporary image information, the measurement information of the two is compared when the image features match, and a second screening image result is output when the measurement information also matches. Finally, the attribute information of the second screening image result and the pre-entered labeling information are acquired, the corresponding target position is determined based on that attribute information, and the labeling information is entered at the target position on the target building structure drawing. In this way, acceptance problems can be accurately positioned on the engineering drawing, providing a data basis for later problem analysis, tracing and evaluation, and making fire-fighting completion acceptance transparent, fair and efficient.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an intelligent labeling method for fire acceptance in an embodiment of the invention;
FIG. 2 is a flowchart of a method for intelligent annotation of fire checks according to another embodiment of the invention;
FIG. 3 is a block diagram of an intelligent marking device for fire acceptance in an embodiment of the invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a schematic flow chart of an intelligent fire-fighting acceptance labeling method provided by an embodiment of the present invention, and as shown in fig. 1, the embodiment of the present invention provides an intelligent fire-fighting acceptance labeling method, including:
step S101, a three-dimensional solid model of a target building is obtained, a structural surface patch of a unit space is extracted according to the three-dimensional solid model, and the attribute information of the structural surface patch is determined by combining the model information of the three-dimensional solid model.
Specifically, a three-dimensional solid model corresponding to the target building to be marked is obtained, and the structural patches of each unit space in the three-dimensional solid model are extracted. When the target building is a residence, a unit space may be each household or each room of a household. The attribute information of each structural patch is then determined by combining the model information of the three-dimensional solid model, where the model information may include various data about the target building, such as the size (length, width and height) of each room, orientation information such as direction and azimuth, position information such as the location of each room and each household, and information such as building materials and the building structure. The structural patches may cover all orientations of each unit space; for example, one room may have 6 structural patches (up, down, left, right, front and back), and the model information corresponding to each structural patch is determined from the corresponding part of the three-dimensional solid model.
In addition, the attribute information of a structural patch may include azimuth information, structure information and association number information. The azimuth information refers to the direction and position of the structural patch in the three-dimensional building solid model, including the geographic direction (which may be represented by eight directions, such as due east, due south, due west, due north, northeast, southeast, southwest and northwest) and the azimuth angle (which may be expressed in radians). The structure information refers to the structural attributes of the patch, such as length, width, height and shape. The association number information refers to other information associated with the structural patch, including the model number, floor number and room number.
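As a sketch of how such per-patch attribute information might be organized in software (all field names here are illustrative assumptions, not part of the patent):

```python
from dataclasses import dataclass

@dataclass
class PatchAttributes:
    """Attribute information of one structural patch (illustrative sketch)."""
    geo_direction: str   # azimuth info: one of the eight geographic directions
    azimuth_rad: float   # azimuth info: azimuth angle in radians
    length_m: float      # structure info: dimensions of the patch
    width_m: float
    height_m: float
    shape: str           # structure info: shape of the patch
    model_no: str        # association number info
    floor_no: int
    room_no: int

attrs = PatchAttributes("NE", 0.79, 4.2, 3.0, 2.8, "rectangle", "B0220", 14, 18)
```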
Step S102, preprocessing the structural patches based on the attribute information, and establishing a corresponding reference image library from the preprocessed structural patches, wherein the preprocessing comprises encoding, registering, classifying and storing the structural patches.
Specifically, the structural patches are preprocessed based on their attribute information, and a corresponding reference image library is established from the preprocessed structural patches. The preprocessing may include encoding, registering, classifying and storing the structural patches: encoding assigns each structural patch a code according to association information such as its position; registering performs position registration among the structural patches, the three-dimensional building solid model and the structure drawing of the target building, thereby determining the position of each structural patch on the structure drawing; classifying organizes the structural patches of the target building, for example by setting up a multi-level directory; and storing places the structural patches into the reference image library in storage forms such as fields and character strings.
In addition, the preprocessing step may specifically include:
acquiring a preset coding rule, and carrying out serialization coding on the structural patches by combining the association number information. The purpose of coding is to serialize the structural patches so as to facilitate subsequent query and retrieval. A standardized reference image library coding rule may be, for example: the code is 13 characters long and consists of a model code, a floor code, a room code and a view code. The 5-character model code consists of the model type and number. The 3-character floor code numbers the floors from bottom to top as F-a, F01, F02, …, Fn, where a represents the number of underground floors (0 < a < 10) and n represents the number of above-ground floors (0 <= n < 100). The 3-character room code numbers the rooms R1, …, Rn from left to right and from top to bottom, where n represents the number of rooms. The 1-character view code encodes the top, bottom, left, right, front and rear views as T, B, L, R, F and B respectively (for example, the code B0220F014R18F denotes the front-view picture of the door and window structure in the 18th room on the 14th floor).
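The coding rule above can be sketched as a small helper. The exact field widths here are an assumption reverse-engineered from the example code in the text (the stated 13-character total does not quite line up with the stated field widths), so treat this as illustrative only:

```python
def encode_patch(model: str, floor: int, room: int, view: str) -> str:
    """Concatenate model code + floor code + room code + view code.
    Assumed widths: 5-char model, 'F' + 2-digit floor, 'R' + 2-digit room,
    1-char view (T/B/L/R/F for top/bottom/left/right/front)."""
    assert len(model) == 5 and view in "TBLRF"
    return f"{model}F{floor:02d}R{room:02d}{view}"

# Front view of room 18 on floor 14 of model B0220:
code = encode_patch("B0220", 14, 18, "F")  # -> "B0220F14R18F"
```

Before relying on such a helper, the widths would need to be confirmed against the actual reference image library convention.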
Obtaining the structure drawing corresponding to the target building, superposing the positional relationship between the structure drawing and the three-dimensional solid model, and registering the position information of the structural patches on the structure drawing by combining the azimuth information, the azimuth angle information and the structure information. Registration matches the positions of the structural patches and establishes the spatial positional relationship between the structural patches and the building structure drawing: the three-dimensional building solid model is superposed onto the building structure drawing, the correspondence between each structural patch and the drawing is matched based on the azimuth information and the association information, and the position of each structural patch on the building structure drawing is determined.
Creating a hierarchical directory of the structural patches based on the spatial information of the unit spaces, and determining the hierarchical directory corresponding to each structural patch. The purpose of classification is to establish a storage catalogue for unstructured information and improve later retrieval efficiency. The classification method may be as follows: first, a first-level directory is created whose folders represent the individual buildings; then a second-level directory stores the floors of each building; a third-level directory stores the rooms on each floor; and finally a fourth-level directory stores the 6 structural patches of each room, and so on for further levels.
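A minimal sketch of the four-level directory creation (building → floor → room → the six structural patches), with folder naming purely an illustrative assumption:

```python
from pathlib import Path

VIEWS = ["top", "bottom", "left", "right", "front", "back"]

def create_catalogue(root: str, buildings: int, floors: int, rooms: int) -> None:
    # Level 1: buildings; level 2: floors; level 3: rooms;
    # level 4: one folder per structural patch (six per room).
    for b in range(1, buildings + 1):
        for f in range(1, floors + 1):
            for r in range(1, rooms + 1):
                for view in VIEWS:
                    Path(root, f"B{b}", f"F{f:02d}", f"R{r:02d}", view).mkdir(
                        parents=True, exist_ok=True)

create_catalogue("ref_library", buildings=1, floors=2, rooms=3)
```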
Based on the coding rule, determining the corresponding database storage fields, and storing the structural patches at the corresponding positions of the reference image library according to those fields. The purpose of storage is to establish a spatial database of the structural patches and record information such as attributes, codes and positions. The database fields may include: master number, model code, floor number, room number, view code, reference image number, stored image, position information, orientation information, structure information and labeling information, where the field types may be integer, character string or binary.
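The storage fields listed above can be sketched as a relational table. SQLite is used here purely as a stand-in, and the column names and types are assumptions mapped from the field list:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE reference_images (
        master_no      INTEGER PRIMARY KEY,  -- master number
        model_code     TEXT,
        floor_no       INTEGER,
        room_no        INTEGER,
        view_code      TEXT,
        image_no       TEXT,                 -- reference image number
        stored_image   BLOB,                 -- the patch image itself
        position_info  TEXT,
        orientation    TEXT,
        structure_info TEXT,
        labeling_info  TEXT
    )
""")
conn.execute(
    "INSERT INTO reference_images (model_code, floor_no, room_no, view_code) "
    "VALUES (?, ?, ?, ?)",
    ("B0220", 14, 18, "F"),
)
row = conn.execute(
    "SELECT model_code, floor_no, room_no FROM reference_images").fetchone()
```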
Step S103, collecting image information of the unit space of the target building through a shooting module, and determining measurement information of image features corresponding to the image information and shooting parameters of the shooting module based on the image information.
Specifically, the image information of a unit space of the target building is collected through the shooting module. For example, images of the unit space are acquired by an AR shooting device, moving the device 5 degrees at a time in the horizontal direction within the unit space (a certain room) and 5 degrees at a time in pitch, so that a series of images of the unit space is collected. Based on the captured image information, the image features identified in it, such as doors, windows and vents, and details such as creases and contours, can then be obtained; the size information corresponding to those image features, namely the measurement information of the image features, is determined, together with the shooting parameters corresponding to the image information. The shooting parameters are the parameters of the shooting module when the corresponding image was taken, such as the shooting position, shooting angle and elevation information of the shooting device.
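The 5-degree capture sweep described above amounts to iterating over a pan/pitch grid. The pitch range below is an assumption (the text only gives the 5-degree step):

```python
def sweep_angles(step_deg: int = 5):
    """Pan/pitch grid for the capture sweep: a full 360-degree pan and
    a -90..+90 degree pitch (pitch range assumed), in 5-degree steps."""
    return [(pan, pitch)
            for pan in range(0, 360, step_deg)
            for pitch in range(-90, 91, step_deg)]

angles = sweep_angles()
```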
Step S104, based on the similarity of the image features in the image information, performing self-comparison on the image information, selecting temporary image information according to the self-comparison result, acquiring temporary shooting parameters corresponding to the temporary image information, and screening from the reference image library based on the temporary shooting parameters to obtain a first screening image result.
Specifically, the image information is self-compared based on the similarity of its image features; the image features in the series of images of a unit space, such as doors, windows and vents, may also be compared against each other. Temporary image information is then selected according to the self-comparison result, for example the 5 to 10 images whose image features differ most, so that the image features of the unit space are covered to the greatest extent. The temporary shooting parameters corresponding to the temporary image information are obtained, and the structural patches conforming to those parameters, such as the shooting position, shooting angle and elevation information, are screened from the reference image library as the first screening image result.
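The selection of the most mutually different images after self-comparison could be done, for example, with a greedy farthest-first pick. The greedy strategy and the similarity function are assumptions, since the text only requires picking the images whose features differ most:

```python
def select_temporary(images, similarity, k=5):
    """Greedy farthest-first selection of k mutually dissimilar images.
    `similarity(a, b)` returns a value in [0, 1]; higher means more alike."""
    chosen = [images[0]]
    while len(chosen) < min(k, len(images)):
        remaining = [img for img in images if img not in chosen]
        # Pick the image least similar to everything chosen so far.
        best = max(remaining,
                   key=lambda img: min(1 - similarity(img, c) for c in chosen))
        chosen.append(best)
    return chosen

# Toy example: "images" are scalars, similarity decays with distance.
picked = select_temporary([0.0, 0.1, 0.9, 1.0, 0.5],
                          lambda a, b: 1 - abs(a - b), k=3)
```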
In addition, during screening, the azimuth interval and position interval corresponding to the temporary image information, that is, the angle interval and position interval of the corresponding shots, may be determined based on the shooting position, shooting angle and elevation information of the temporary image information, and the first screening image result corresponding to those intervals is then retrieved from the reference image library.
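The interval-based first screening can be sketched as a simple filter over the library entries; the entry fields and tolerance values here are illustrative assumptions:

```python
def first_screening(library, shot_position, shot_angle,
                    pos_tol=1.0, angle_tol=10.0):
    """Keep reference entries whose stored shooting position and angle fall
    within the interval around the temporary image's shooting parameters."""
    return [entry for entry in library
            if abs(entry["position"] - shot_position) <= pos_tol
            and abs(entry["angle"] - shot_angle) <= angle_tol]

library = [
    {"code": "B0220F14R18F", "position": 3.0, "angle": 45.0},
    {"code": "B0220F14R18L", "position": 9.0, "angle": 90.0},
]
hits = first_screening(library, shot_position=3.4, shot_angle=50.0)
```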
Step S105, obtaining measurement information of image features corresponding to the temporary image information, performing image feature matching on the first screening image result and the temporary image information, when the image features are matched, comparing the first screening image result with the measurement information of the temporary image information, and when the first screening image result is matched with the measurement information of the temporary image information, outputting a second screening image result.
Specifically, the measurement information of the image features corresponding to the temporary image information is determined, namely the size information of features such as doors, windows and vents, and the lines and dimensions of details such as creases and contours. Image feature matching is then performed between the structural patches in the first screening image result and the temporary image information. When the image features match, it is further determined whether the measurement information of those features also matches; when it does, the temporary image information is considered to correspond to the structural patch, i.e. the two images show the same azimuth of the same unit space, and the second screening image result is output. In this way, the corresponding structural patches are screened out of a very large reference image library through two rounds of screening, reducing the amount of data to be processed as much as possible.
In addition, when matching image features, feature matching factors may be extracted from both the first screening image result and the temporary image information and matched against each other. The feature matching factors comprise structural factors, crease factors, inflection point factors and element factors, that is, the image structure, the creases and inflection points among the image features, and the salient elements in the image. After these factors match, the measurement information of the matched factors in the first screening image result and the temporary image information is compared.
And S106, acquiring attribute information of the second screening image result and pre-input labeling information, determining a corresponding target position based on the attribute information of the second screening image result, and inputting the labeling information to the target position of the target building structure drawing.
Specifically, attribute information of the structural patch in the second screening image result is obtained, and the position information corresponding to that structural patch, namely its target position on the structural drawing of the target building, is determined from the attribute information. The labeling information entered in advance for the structural patch is then obtained; it may be recorded as text, pictures, voice, video and the like, and is displayed on the structural drawing of the target building at the target position.
In addition, the second screening image result may contain more than one image. When it does, the second screening image result is sent to the binding terminal corresponding to the shooting module so that the responsible staff can select the best position information (structural patch), and the final second screening image result is then determined from the feedback information of the binding terminal.
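One way to sketch this fallback, with the terminal interaction abstracted as a callback (an assumption for illustration; the patent only says the choice comes back as feedback from the binding terminal):

```python
def resolve_second_screening(results, ask_binding_terminal):
    """Return the single match, or delegate the choice to the bound terminal
    when more than one image survives the second screening."""
    if not results:
        return None
    if len(results) == 1:
        return results[0]
    chosen_index = ask_binding_terminal(results)   # staff feedback, e.g. an index
    return results[chosen_index]
```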
According to the intelligent labeling method for fire inspection and acceptance, which is provided by the embodiment of the invention, a three-dimensional solid model of a target building is obtained, a structural patch of a unit space is extracted according to the three-dimensional solid model, and the attribute information of the structural patch is determined by combining with the model information of the three-dimensional solid model; based on the attribute information, preprocessing the structural surface piece, and establishing a corresponding reference image library through the preprocessed structural surface piece, wherein the preprocessing comprises encoding, registering, classifying and storing the structural surface piece; acquiring image information of a unit space of a target building through a shooting module, and determining measurement information of image features corresponding to the image information and shooting parameters of the shooting module based on the image information; based on the similarity of image features in the image information, carrying out self-contrast on the image information, selecting temporary image information according to a self-contrast result, acquiring temporary shooting parameters corresponding to the temporary image information, and screening from a reference image library based on the temporary shooting parameters to obtain a first screening image result; acquiring measurement information of image features corresponding to the temporary image information, performing image feature matching on the first screening image result and the temporary image information, comparing the first screening image result with the measurement information of the temporary image information when the image features are matched, and outputting a second screening image result when the first screening image result is matched with the measurement information of the temporary image information; and acquiring attribute information of the second 
screening image result and pre-input labeling information, determining a corresponding target position based on the attribute information of the second screening image result, and inputting the labeling information to the target position of the target building structure drawing. Therefore, acceptance problems can be accurately located on engineering drawings, providing a data basis for later problem analysis, tracing and evaluation, and making fire-fighting completion acceptance transparent, fair and efficient.
In another embodiment, as shown in fig. 2, a in fig. 2 shows a schematic diagram of a room in a building, from which 6 structural patch pictures of the room are obtained in the three-dimensional building model. b shows the created reference image database, which stores the pictures together with information such as position information. d shows the database collected on site. In step c, database b is screened using the building code, floor code and measurement information fields of database d. For example, a picture is selected from database d whose building code is B0220, whose floor code is F014, and whose measurement information records a window 1.5 m wide and 2 m high. Pictures meeting these conditions are screened out of database b by building code, floor code and measurement information, the screened pictures are matched against the picture from database d by an image matching algorithm, and according to the matching output, the position information stored in database b is transmitted to database d and added to the drawing information there.
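The database-d to database-b screening in this example can be sketched as a field filter. The field names (`building_code`, `floor_code`, `measurements`) and the 0.1 m tolerance are assumptions for illustration only:

```python
def screen_reference_db(reference_db, building_code, floor_code, measurements, tol=0.1):
    """First screening: filter reference records by building code, floor code,
    and measurement fields taken from the on-site picture."""
    hits = []
    for rec in reference_db:
        if rec["building_code"] != building_code or rec["floor_code"] != floor_code:
            continue
        # a record matches only if every queried measurement is within tolerance
        if all(abs(rec["measurements"].get(k, float("inf")) - v) <= tol
               for k, v in measurements.items()):
            hits.append(rec)
    return hits
```

With the example values from the figure, the query would narrow database b to patches on floor F014 of building B0220 whose window measures about 1.5 m by 2 m.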
Fig. 3 is a schematic diagram of an intelligent labeling device for fire inspection and acceptance, which comprises a first acquisition module S201, a preprocessing module S202, a shooting module S203, a first screening module S204, a second screening module S205 and a labeling module S206, wherein:
The acquisition module S201 is configured to acquire a three-dimensional solid model of a target building, extract a structural patch of a unit space according to the three-dimensional solid model, and determine attribute information of the structural patch in combination with model information of the three-dimensional solid model.
The preprocessing module S202 is configured to preprocess the structural patches based on the attribute information, and establish a corresponding reference image library through the preprocessed structural patches, where the preprocessing includes encoding, registering, classifying and storing the structural patches.
The shooting module S203 is configured to collect image information of a unit space of the target building through the shooting module, and determine measurement information of image features corresponding to the image information and shooting parameters of the shooting module based on the image information.
The first screening module S204 is configured to perform self-comparison on the image information based on the similarity of image features in the image information, select temporary image information according to the self-comparison result, obtain temporary shooting parameters corresponding to the temporary image information, and screen the reference image library based on the temporary shooting parameters to obtain a first screening image result.
The second screening module S205 is configured to obtain measurement information of image features corresponding to the temporary image information, perform image feature matching on the first screening image result and the temporary image information, compare the measurement information of the first screening image result and the temporary image information when the image features match, and output a second screening image result when the measurement information matches.
The labeling module S206 is configured to acquire the attribute information of the second screening image result and the pre-input labeling information, determine a corresponding target position based on the attribute information of the second screening image result, and input the labeling information to the target position of the structural drawing of the target building.
In one embodiment, the apparatus further comprises:
The coding module is used for acquiring a preset coding rule and performing serialized coding on the structural patch in combination with the association number information.
The registration module is used for acquiring a structural drawing corresponding to the target building, superposing the positional relationship between the structural drawing and the three-dimensional solid model, and registering the position information of the structural patch on the structural drawing in combination with the azimuth information and the structure information.
The classification module is used for creating a hierarchical directory of the structural patches based on the spatial information of the unit space and determining the hierarchical directory corresponding to each structural patch.
The storage module is used for determining a corresponding database storage field based on the coding rule and storing the structural patch at the corresponding position of the reference image library according to the database storage field.
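The coding and storage modules above could be sketched as a small pipeline. The code format (building code, floor code, association number, sequence number) and the unit-space directory key are hypothetical, since the patent does not fix a concrete coding rule:

```python
import itertools

_sequence = itertools.count(1)  # running sequence number for serialized coding


def encode_patch(building_code, floor_code, association_no):
    """Serialized coding under a hypothetical rule:
    building code - floor code - association number - sequence number."""
    return f"{building_code}-{floor_code}-A{association_no:03d}-{next(_sequence):04d}"


def store_patch(reference_library, unit_space, patch, code):
    """Classify the patch under a hierarchical directory keyed by unit space,
    then store it under its serialized code (the database storage field)."""
    directory = reference_library.setdefault(unit_space, {})
    directory[code] = patch
    return code
```

The serialized code doubles as the database storage field, so a later screening step can locate a patch directly from its code.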
For specific limitations of the intelligent labeling device for fire inspection and acceptance, reference may be made to the limitations of the intelligent labeling method for fire inspection and acceptance above, which are not repeated here. All or part of the modules in the intelligent labeling device for fire inspection and acceptance may be implemented by software, hardware or a combination thereof. The above modules may be embedded in hardware in, or independent of, a processor in the computer device, or may be stored as software in a memory in the computer device, so that the processor can call and execute the operations corresponding to the above modules.
Fig. 4 illustrates a physical schematic diagram of an electronic device, as shown in fig. 4, which may include: a processor (processor) 301, a memory (memory) 302, a communication interface (Communications Interface) 303 and a communication bus 304, wherein the processor 301, the memory 302 and the communication interface 303 perform communication with each other through the communication bus 304. The processor 301 may call logic instructions in the memory 302 to perform the following method: acquiring a three-dimensional solid model of a target building, extracting a structural surface patch of a unit space according to the three-dimensional solid model, and determining attribute information of the structural surface patch by combining model information of the three-dimensional solid model; based on the attribute information, preprocessing the structural surface piece, and establishing a corresponding reference image library through the preprocessed structural surface piece, wherein the preprocessing comprises encoding, registering, classifying and storing the structural surface piece; acquiring image information of a unit space of a target building through a shooting module, and determining measurement information of image features corresponding to the image information and shooting parameters of the shooting module based on the image information; based on the similarity of image features in the image information, carrying out self-contrast on the image information, selecting temporary image information according to a self-contrast result, acquiring temporary shooting parameters corresponding to the temporary image information, and screening from a reference image library based on the temporary shooting parameters to obtain a first screening image result; acquiring measurement information of image features corresponding to the temporary image information, performing image feature matching on the first screening image result and the temporary image information, 
comparing the first screening image result with the measurement information of the temporary image information when the image features are matched, and outputting a second screening image result when the first screening image result is matched with the measurement information of the temporary image information; and acquiring attribute information of the second screening image result and pre-input labeling information, determining a corresponding target position based on the attribute information of the second screening image result, and inputting the labeling information to the target position of the target building structure drawing.
Further, the above logic instructions in the memory 302 may be implemented in the form of software functional units and, when sold or used as a stand-alone product, stored in a computer readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
In another aspect, embodiments of the present invention further provide a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method provided in the above embodiments, for example, including: acquiring a three-dimensional solid model of a target building, extracting a structural surface patch of a unit space according to the three-dimensional solid model, and determining attribute information of the structural surface patch by combining model information of the three-dimensional solid model; based on the attribute information, preprocessing the structural surface piece, and establishing a corresponding reference image library through the preprocessed structural surface piece, wherein the preprocessing comprises encoding, registering, classifying and storing the structural surface piece; acquiring image information of a unit space of a target building through a shooting module, and determining measurement information of image features corresponding to the image information and shooting parameters of the shooting module based on the image information; based on the similarity of image features in the image information, carrying out self-contrast on the image information, selecting temporary image information according to a self-contrast result, acquiring temporary shooting parameters corresponding to the temporary image information, and screening from a reference image library based on the temporary shooting parameters to obtain a first screening image result; acquiring measurement information of image features corresponding to the temporary image information, performing image feature matching on the first screening image result and the temporary image information, comparing the first screening image result with the measurement information of the temporary image information when the image features are matched, and outputting a second screening image
result when the first screening image result is matched with the measurement information of the temporary image information; and acquiring attribute information of the second screening image result and pre-input labeling information, determining a corresponding target position based on the attribute information of the second screening image result, and inputting the labeling information to the target position of the target building structure drawing.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. The intelligent marking method for fire control acceptance is characterized by comprising the following steps:
acquiring a three-dimensional solid model of a target building, extracting a structural surface patch of a unit space according to the three-dimensional solid model, and determining attribute information of the structural surface patch by combining model information of the three-dimensional solid model;
preprocessing the structural surface patch based on the attribute information, and establishing a corresponding reference image library through the preprocessed structural surface patch, wherein the preprocessing comprises encoding, registering, classifying and storing the structural surface patch;
acquiring image information of a unit space of the target building through a shooting module, and determining measurement information of image features corresponding to the image information and shooting parameters of the shooting module based on the image information;
Based on the similarity of image features in the image information, carrying out self-comparison on the image information, selecting temporary image information according to the self-comparison result, acquiring temporary shooting parameters corresponding to the temporary image information, and screening from the reference image library based on the temporary shooting parameters to obtain a first screening image result;
acquiring measurement information of image features corresponding to the temporary image information, performing image feature matching on the first screening image result and the temporary image information, comparing the first screening image result with the measurement information of the temporary image information when the image features are matched, and outputting a second screening image result when the first screening image result is matched with the measurement information of the temporary image information;
and acquiring attribute information of the second screening image result and pre-input labeling information, determining a corresponding target position based on the attribute information of the second screening image result, and inputting the labeling information to the target position of the structural drawing of the target building.
2. The fire check acceptance intelligent labeling method according to claim 1, wherein the attribute information comprises: azimuth information, structure information, association number information.
3. The fire inspection and acceptance intelligent labeling method according to claim 2, wherein the preprocessing the structural panel based on the attribute information comprises:
acquiring a preset coding rule, and carrying out serialization coding on the structural surface patch by combining the association number information;
obtaining a structure drawing corresponding to the target building, superposing the positional relationship between the structure drawing and the three-dimensional solid model, and registering the position information of the structural surface patch in the structure drawing in combination with the azimuth information and the structure information;
creating a hierarchical directory of the structural patches based on the spatial information of the unit space, and determining the hierarchical directory corresponding to the structural patches;
and determining a corresponding database storage field based on the coding rule, and storing the structural patch in a corresponding position of a reference image library according to the database storage field.
4. The method for intelligent annotation of fire protection acceptance according to claim 1, wherein the step of performing image feature matching on the first screening image result and temporary image information comprises the steps of:
acquiring feature matching factors in the first screening image result and the temporary image information, and matching the feature matching factors of the first screening image result and the temporary image information, wherein the feature matching factors comprise structural factors, crease factors, inflection point factors and element factors;
The comparing the measurement information of the first screening image result and the temporary image information includes:
and comparing the first screening image result with the measurement information of the feature matching factor of the temporary image information.
5. The fire check and acceptance intelligent labeling method according to claim 1, wherein the shooting parameters comprise: shooting position, shooting angle and elevation information;
screening from the reference image library based on the temporary shooting parameters to obtain a first screening image result, wherein the method comprises the following steps:
and determining an azimuth interval and a position interval corresponding to the temporary image information based on the shooting position, the shooting angle and the elevation information corresponding to the temporary image information, and retrieving a corresponding first screening image result from the reference image library through the azimuth interval and the position interval.
6. The fire acceptance intelligent labeling method of claim 1, further comprising:
and when the number of the images of the second screening image result is larger than 1, sending the second screening image result to a binding terminal corresponding to the shooting module, and determining the second screening image result according to the feedback information of the binding terminal.
7. An intelligent marking device for fire acceptance, characterized in that the device comprises:
the acquisition module is used for acquiring a three-dimensional solid model of a target building, extracting a structural surface patch of a unit space according to the three-dimensional solid model, and determining attribute information of the structural surface patch by combining model information of the three-dimensional solid model;
the preprocessing module is used for preprocessing the structural surface patches based on the attribute information, and establishing a corresponding reference image library through the preprocessed structural surface patches, wherein the preprocessing comprises encoding, registering, classifying and storing of the structural surface patches;
the shooting module is used for acquiring the image information of the unit space of the target building through the shooting module and determining the measurement information of the image characteristics corresponding to the image information and the shooting parameters of the shooting module based on the image information;
the first screening module is used for carrying out self-comparison on the image information based on the image feature similarity in the image information, selecting temporary image information according to the self-comparison result, acquiring temporary shooting parameters corresponding to the temporary image information, and screening from the reference image library based on the temporary shooting parameters to obtain a first screening image result;
The second screening module is used for acquiring the measurement information of the image features corresponding to the temporary image information, carrying out image feature matching on the first screening image result and the temporary image information, comparing the first screening image result with the measurement information of the temporary image information when the image features are matched, and outputting a second screening image result when the first screening image result is matched with the measurement information of the temporary image information;
and the labeling module is used for acquiring the attribute information of the second screening image result and the pre-input labeling information, determining a corresponding target position based on the attribute information of the second screening image result, and inputting the labeling information to the target position of the structural drawing of the target building.
8. The fire acceptance intelligent labeling apparatus of claim 7, wherein the apparatus further comprises:
the coding module is used for acquiring a preset coding rule and carrying out serialization coding on the structural surface piece by combining the association number information;
the registration module is used for acquiring the structural drawing corresponding to the target building, superposing the positional relationship between the structural drawing and the three-dimensional solid model, and registering the position information of the structural surface patch on the structural drawing in combination with the azimuth information and the structure information;
The classification module is used for creating a hierarchical directory of the structural patches based on the spatial information of the unit space and determining the hierarchical directory corresponding to the structural patches;
and the storage module is used for determining a corresponding database storage field based on the coding rule and storing the structural patch in a corresponding position of a reference image library according to the database storage field.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor performs the steps of the intelligent annotation method for fire acceptance according to any of claims 1 to 6 when the program is executed by the processor.
10. A non-transitory computer readable storage medium having stored thereon a computer program, which when executed by a processor, implements the steps of the intelligent annotation method of fire acceptance according to any of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211555374.0A CN115713664B (en) | 2022-12-06 | 2022-12-06 | Intelligent marking method and device for fire inspection and acceptance |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115713664A CN115713664A (en) | 2023-02-24 |
CN115713664B true CN115713664B (en) | 2023-06-09 |
Family
ID=85235695
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211555374.0A Active CN115713664B (en) | 2022-12-06 | 2022-12-06 | Intelligent marking method and device for fire inspection and acceptance |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115713664B (en) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109934798A (en) * | 2019-01-24 | 2019-06-25 | 深圳安泰创新科技股份有限公司 | Internal object information labeling method and device, electronic equipment, storage medium |
CN111986785B (en) * | 2020-08-26 | 2023-09-12 | 北京至真互联网技术有限公司 | Medical image labeling method, device, equipment and storage medium |
CN112381356A (en) * | 2020-10-24 | 2021-02-19 | 上海东方投资监理有限公司 | Completion acceptance method, completion acceptance system, server and storage medium for engineering project |
CN112884055B (en) * | 2021-03-03 | 2023-02-03 | 歌尔股份有限公司 | Target labeling method and target labeling device |
CN114882518A (en) * | 2022-02-11 | 2022-08-09 | 上海应用技术大学 | Standardized management system of construction engineering drawing based on image recognition technology |
CN114638885A (en) * | 2022-03-17 | 2022-06-17 | 广东工业大学 | Intelligent space labeling method and system, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder | ||
Address after: 313200 building 6, No. 11, Keyuan Road, Wuyang street, Deqing County, Huzhou City, Zhejiang Province Patentee after: Zhejiang Zhongce Spacetime Technology Co.,Ltd. Address before: 313200 building 6, No. 11, Keyuan Road, Wuyang street, Deqing County, Huzhou City, Zhejiang Province Patentee before: ZHEJIANG TOPRS GEOGRAPHIC INFORMATION TECHNOLOGY Co.,Ltd. |