CN116758269B - Position verification method - Google Patents

Position verification method

Info

Publication number
CN116758269B
Authority
CN
China
Prior art keywords
model
data
target
information
construction
Prior art date
Legal status
Active
Application number
CN202311055293.9A
Other languages
Chinese (zh)
Other versions
CN116758269A (en)
Inventor
陈航
马继生
马晓彪
刘明
许洪波
赵硕硕
杨永坡
Current Assignee
Xiongan Xiongchuang Digital Technology Co ltd
Original Assignee
Xiongan Xiongchuang Digital Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xiongan Xiongchuang Digital Technology Co ltd
Priority to CN202311055293.9A
Publication of CN116758269A
Application granted
Publication of CN116758269B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/10 Geometric CAD
    • G06F30/13 Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/176 Urban or other man-made structures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Civil Engineering (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Structural Engineering (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The application provides a position verification method, which comprises the following steps: acquiring construction image data of a target construction area, wherein the construction image data is a digital orthographic image of the target construction area; performing image recognition on the construction image data to obtain a target object and target data of the target object; acquiring, according to the target data, model data of a model object corresponding to the target object from a design model of the target construction area; and comparing the target data with the model data to obtain a verification result of the target object. In this way, the target object to be verified and its target data are obtained from the digital orthographic image of the target construction area, and the target data are compared with the model data of the corresponding model object in the design model, so that the position of the target object in the target construction area can be verified. Compared with the traditional manual position verification approach, this avoids verification errors and improves verification accuracy.

Description

Position verification method
Technical Field
The application relates to the technical field of construction management, in particular to a position verification method.
Background
In the city construction stage, various infrastructures go from planning and design to modeling, construction is carried out after the design model is audited, and a model-ground consistency check is needed after construction to determine whether the construction position is correct. Here, "model" in the model-ground consistency check refers to the design model, while "ground" refers to the real construction state of the construction site and the geographical position of the completed construction; the model-ground consistency check can thus be summarized as checking whether the design model and the result of the actual construction work match.
At present, after construction is completed, acceptance personnel use surveying and mapping instruments to check the infrastructure built on the construction site in the field, the measurement results are then imported into the design model and displayed as points, and the positions are compared one by one with the human eye to judge whether the construction site is consistent with the design scheme. However, with such a manual one-by-one verification scheme, verification efficiency is low and verification errors easily occur, so the verification is not accurate enough.
Disclosure of Invention
The embodiment of the application provides a position verification method.
According to a first aspect of the present application, there is provided a position verification method, the method comprising: acquiring construction image data of a target construction area, wherein the construction image data is a digital orthographic image of the target construction area; performing image recognition on the construction image data to obtain a target object and target data of the target object; according to the target data, model data of a model object corresponding to the target object is obtained from a design model of the target construction area; and comparing the target data with the model data to obtain a verification result of the target object.
According to one embodiment of the application, the construction image data is obtained by the aerial photographing device by adopting the following operations: and acquiring digital orthographic images of the target construction area according to the set route and the set conditions to obtain the construction image data.
According to an embodiment of the present application, the target object is a first target object, and the target data includes first target data; correspondingly, the image recognition of the construction image data comprises the following steps: extracting first graphic information of the first target object from the construction image data; and acquiring first coordinate information and central point elevation of the first graphic information to obtain first target data, wherein the first target data comprises the first graphic information, the first coordinate information and the central point elevation.
According to an embodiment of the present application, according to the target data, obtaining model data of a model object corresponding to the target object from a design model of the target construction area includes: performing first identification recognition on the first graphic information to obtain first identification information of the first graphic information; according to the first identification information, first model identification information matched with the first identification information is obtained from a construction table; determining the model object corresponding to the first target object in the design model according to the first model identification information; and obtaining model coordinate data and model center point elevation of the model object to obtain the model data, wherein the model data comprises the model coordinate data and the model center point elevation.
According to one embodiment of the application, the construction table is obtained by a mobile device by adopting the following operations: receiving image information of the first target object, wherein the image information is acquired under the construction condition of the first target object; receiving first information corresponding to the image information, wherein the first information comprises model identification information, category information and installation information of the first target object; and generating the construction table based on the first information.
According to an embodiment of the present application, the comparing the target data with the model data to obtain a verification result of the target object includes: generating a target surface of the first target object in the design model according to the first coordinate information and the central point elevation; determining a model surface of the model object according to the model coordinate data; determining the intersection area of the target surface and the model surface; and determining the verification result of the first target object according to the intersection area and the central point elevation.
According to an embodiment of the present application, the obtaining, from the design model of the target construction area, model data of a model object corresponding to the target object according to the target data includes: establishing a reference three-dimensional scene, wherein the reference three-dimensional scene comprises a reference coordinate system; fusing the digital orthographic image, the design model and the reference three-dimensional scene based on the reference coordinate system, the first coordinate system of the digital orthographic image and the second coordinate system of the design model to obtain a reference digital orthographic image and a reference design model; performing second identification recognition on the first graphic information to obtain second identification information of the first graphic information; acquiring second model identification information matched with the second identification information from a construction table according to the second identification information; according to the second model identification information, obtaining a model object corresponding to the second model identification information in the reference design model; acquiring the elevation of a model center point of the model object and model coordinate data; wherein the model data includes the model center point elevation and the model coordinate data.
According to an embodiment of the present application, the target object is a second target object, and the target data includes second target data; correspondingly, the image recognition of the construction image data to obtain a target object and target data of the target object includes: acquiring second graphic information corresponding to the second target object from the construction image data; and acquiring central line coordinate information and second coordinate information of the second graphic information to obtain second target data, wherein the second target data comprises the second graphic information, the central line coordinate information and the second coordinate information.
According to an embodiment of the present application, according to the target data, obtaining model data of a model object corresponding to the target object from a design model of the target construction area includes: acquiring a pipeline component corresponding to the second coordinate information from the design model according to the second coordinate information; acquiring a bounding box of the pipeline component; and determining model center line coordinate information of the pipeline component according to the bounding box to obtain model data, wherein the model data comprises the model center line coordinate information.
According to an embodiment of the present application, the comparing the target data with the model data to obtain a verification result of the target object includes: determining a center line included angle and a center line vertical clear distance according to the center line coordinate information and the model center line coordinate information; and determining a verification result of the second target object according to the included angle of the central line and the vertical clear distance of the central line.
According to the method, construction image data of a target construction area are obtained, wherein the construction image data are digital orthographic images of the target construction area; performing image recognition on the construction image data to obtain a target object and target data of the target object; according to the target data, model data of a model object corresponding to the target object is obtained from a design model of the target construction area; and comparing the target data with the model data to obtain a verification result of the target object. Therefore, the digital orthophoto data of the target construction area is acquired, the target data of the target object can be acquired based on the digital orthophoto data while the target object is acquired, then the target data is compared with the model data of the model object corresponding to the target object in the design model, so that the position verification of the target object of the target construction area can be completed, compared with the traditional manual position verification scheme, the situation of verification errors is avoided, and the verification accuracy is improved.
It should be understood that the teachings of the present application need not achieve all of the benefits set forth above, but rather that certain technical solutions may achieve certain technical effects, and that other embodiments of the present application may also achieve benefits not set forth above.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present application will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present application are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1 shows a schematic diagram of an implementation flow of a location verification method according to an embodiment of the present application;
fig. 2 shows a second implementation flow diagram of a location verification method according to an embodiment of the present application;
fig. 3 shows a third implementation flow diagram of a location verification method according to an embodiment of the present application;
fig. 4 shows a fourth implementation flow chart of the location verification method according to the embodiment of the present application;
fig. 5 is a schematic diagram showing the composition and structure of a position verification device according to an embodiment of the present application;
fig. 6 shows a schematic diagram of a composition structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the present application more comprehensible, the technical solutions according to the embodiments of the present application will be clearly described in the following with reference to the accompanying drawings, and it is obvious that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Fig. 1 shows a schematic implementation flow diagram of a location verification method according to an embodiment of the present application.
Referring to fig. 1, an embodiment of the present application provides a location verification method, which includes: operation 101, acquiring construction image data of a target construction area, wherein the construction image data is a digital orthographic image of the target construction area; operation 102, performing image recognition on the construction image data to obtain a target object and target data of the target object; operation 103, obtaining model data of a model object corresponding to the target object from a design model of the target construction area according to the target data; and 104, comparing the target data with the model data to obtain a verification result of the target object.
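For illustration only, the four operations can be sketched end to end as follows; this sketch reduces operation 103 to a dictionary lookup and operation 104 to a simple coordinate tolerance check (the detailed embodiments below use surface intersection instead), and all names, coordinates and tolerance values are illustrative assumptions rather than part of the claimed method.

```python
from dataclasses import dataclass

@dataclass
class TargetData:
    object_id: str
    x: float  # plane coordinates recognized from the digital orthographic image
    y: float
    z: float  # center point elevation

@dataclass
class ModelData:
    x: float
    y: float
    z: float  # model center point elevation

def compare(target: TargetData, model: ModelData,
            xy_tol: float = 0.3, z_tol: float = 0.3) -> bool:
    # Operation 104 reduced to a tolerance check on plane distance and height.
    dx, dy = target.x - model.x, target.y - model.y
    return (dx * dx + dy * dy) ** 0.5 <= xy_tol and abs(target.z - model.z) <= z_tol

# Operation 103 reduced to a dictionary lookup keyed by object identifier.
design_model = {"MH-001": ModelData(484544.87, 4325228.53, 9.31)}
target = TargetData("MH-001", 484544.95, 4325228.60, 9.28)
print(compare(target, design_model[target.object_id]))  # True
```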
In operation 101, construction image data of a target construction area is acquired, the construction image data being a digital orthographic image of the target construction area.
Specifically, before building various infrastructures, a three-dimensional model of the construction area generally needs to be built first, and according to the construction planning of the various infrastructures, the various infrastructures to be built are also built in the three-dimensional model, so that a design model for the various infrastructures is obtained. Constructors then carry out construction according to the design model. Here, the various infrastructures may include manhole covers, pipelines, large machinery, various types of sites, and the like.
Further, after the construction is completed, construction image data of the construction area needs to be acquired to determine whether the construction positions of various infrastructures in the construction area are consistent with the design model.
In this embodiment of the present application, the construction image data of the target construction area is a digital orthographic image of the target construction area.
In one embodiment of the application, the aerial photographing device performs digital orthophoto collection on the target construction area according to the set route and the set condition.
Specifically, according to the design model and with reference to the construction positions of the various infrastructures, an aerial route for aerial photography of the target construction area can be planned first, and aerial photography of the target construction area is then carried out according to the set conditions, so that the digital orthographic image of the target construction area is obtained. The set conditions for aerial photographing can include the aerial photographing precision, which can be, for example, 1.5cm; the route planning can refer to a conventional route planning scheme, which is not described herein.
In one embodiment of the present application, the design model is a BIM (Building Information Modeling, building information model).
In operation 102, image recognition is performed on the construction image data to obtain a target object and target data of the target object.
Specifically, the target object represents an infrastructure built in the target construction area, and can be obtained by image recognition of the construction image data. Wherein the target object may represent an infrastructure, such as a manhole cover, a pipeline, etc., at the target construction area.
Further, in the process of performing image recognition on the construction image data, the target data of the recognized target object needs to be extracted; the target data may include graphic information, coordinate information, center point elevation information and the like of the target object.
In one embodiment of the present application, the target object and the target data of the target object may be acquired as follows: the construction image data is input into a pre-trained model to acquire the target object and obtain a target orthophoto, and the target orthophoto is then imported into a common aerial survey processing tool to be processed into the target data. The aerial survey processing tool may be a pix4d mapper (aerial survey software), a GIS (Geographic Information System), or the like.
It should be noted that, after aerial photographing is performed on the target construction area to obtain the construction image data, specific details of processing and identifying the construction image data may refer to an existing aerial photographing data processing process, which is not described herein again.
In operation 103, model data of a model object corresponding to the target object is acquired from a design model of the target construction area according to the target data.
Specifically, the model object corresponding to a target object is created in the design stage of the design model; the model object corresponding to the target object can be determined from the design model through the target data, and the model data corresponding to the model object and the target data can then be obtained.
In an embodiment of the present application, the target data may include category information, identification information, graphic information, coordinate information, installation information, and the like of the target object, and the model object corresponding to the target object may be determined from the design model by the target data.
For example, the model object in the design model may have a model identifier having an association with the identifier information, so that the model object may be determined from the design model by the identifier information, and model data of the model object may be acquired. The model data of the model object may include model graphic information corresponding to the target data, model coordinate information, and the like.
In operation 104, the target data and the model data are compared to obtain a verification result of the target object.
Specifically, the target data and the model data are compared, and a verification result of the target object can be obtained through the comparison result.
For example, the coordinate information of the target object and the model coordinate information may be compared to determine whether the positions of the target object and the model object are consistent, and if so, the verification result is determined to be qualified in construction, and if not, the verification result is determined to be unqualified in construction.
In an embodiment of the present application, the method of the present application is implemented based on a CIM (City Information Modeling, city information model) platform.
Therefore, the digital orthophoto data of the target construction area is obtained, the target data of the target object can be obtained based on the digital orthophoto while the target object is obtained, then the target data is compared with the model data of the model object corresponding to the target object in the design model, so that the position verification of the target object of the target construction area can be completed, compared with the traditional manual position verification scheme, the situation of verification errors is avoided, and the verification accuracy is improved.
In one embodiment of the present application, when the construction image data of the target construction area is not acquired, the construction real-time video data is acquired, and the verification result of the target object of the target construction area is determined by the construction real-time video data and the design model.
In particular, aerial photography places high requirements on the surrounding environment and cannot be performed where buildings or trees cause occlusion. Therefore, the route design for the target construction area only covers the areas that satisfy the aerial photography conditions, and monitoring equipment can be erected or installed in the areas that do not satisfy them, so as to acquire construction real-time video data of the target construction area.
In one embodiment of the application, the monitoring device is a battery-powered monitoring camera with 5G communication, a GPS positioning chip, a gyroscope and a communication module.
Furthermore, since the monitoring equipment is battery-powered and communicates over 5G, it is not suitable for long-time real-time monitoring; it is therefore erected or installed after construction is completed. In the verification process, after clicking to confirm on the monitoring equipment, the monitoring equipment acquires parameters such as the current coordinate position (x, y, z), horizontal orientation (yaw, angle value), pitch angle (pitch, angle value), equipment aspect ratio and current video stream address by using the GPS and the gyroscope. These parameters and the construction real-time video data can be obtained by connecting to the communication module of the monitoring equipment. After the construction real-time video data is obtained, it can be fused with the design model, and whether the target object to be checked in the construction real-time video data is consistent with the design model can be determined from the fusion result, thereby completing the verification.
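For illustration only, the parameters reported by such a monitoring camera could be carried in a structure like the following; the field names are assumptions, since only the quantities themselves (position, yaw, pitch, aspect ratio, stream address) are specified above.

```python
from dataclasses import dataclass

@dataclass
class CameraReport:
    """Parameters a monitoring camera reports for video fusion.

    Field names are illustrative assumptions; the text only lists the
    quantities: position, yaw, pitch, aspect ratio and stream address.
    """
    x: float             # current coordinate position
    y: float
    z: float
    yaw: float           # horizontal orientation, angle value in degrees
    pitch: float         # pitch angle, angle value in degrees
    aspect_ratio: float  # equipment aspect ratio (width / height)
    stream_url: str      # current video stream address

report = CameraReport(484544.9, 4325228.5, 9.3,
                      yaw=135.0, pitch=-20.0,
                      aspect_ratio=16 / 9,
                      stream_url="rtsp://camera-01/stream")  # hypothetical
```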
In this embodiment of the present application, the fusion result may be saved by performing screenshot to save the verification record.
In the embodiment of the application, the parameters of the monitoring equipment are obtained in real time through the connection with its communication module; an anchor point icon is projected onto the design model according to these parameters, and the construction real-time video data is attached as the material of the model entities in the scene, thereby achieving the effect of video fusion. Thus, when the design model is opened, whether the position of the target object in the construction real-time video data is consistent with the corresponding model object in the design model, and any deviation, can be seen at a glance.
It should be noted that the above video fusion process can simply follow existing video fusion schemes and is not described herein.
Therefore, by using monitoring equipment meeting the specific requirements, the embodiment of the application can dynamically acquire and actively report the position and lens parameters and fuse the video with the design model, which solves the technical problem that the digital orthographic image cannot be acquired in some areas because they are blind zones for aerial photography.
Fig. 1 shows only a basic embodiment of the method according to the application, on the basis of which certain optimizations and developments are made, but other preferred embodiments of the method are also possible. Fig. 2 shows a second implementation flow chart of the location verification method according to the embodiment of the present application.
Referring to fig. 2, fig. 2 is a schematic diagram of a position verification method according to another embodiment of the present application, in which the image recognition process, the model data extraction process and the verification process are described more specifically and optimized to some extent on the basis of the foregoing embodiments. The target object in this embodiment of the present application is a first target object, the target data is first target data, and the first target object may be an infrastructure that is exposed on the surface after construction is completed and whose image can be collected, such as a manhole cover; for convenience of description, this embodiment of the present application is described below with a manhole cover as the first target object.
Referring to fig. 2, the method in this embodiment includes the following operations:
in operation 201, construction image data of a target construction area is acquired, the construction image data being a digital orthographic image of the target construction area.
Operation 202 extracts first graphic information of a first target object from the construction image data.
Specifically, a manhole cover has a definite shape; the manhole cover graphic information of the manhole cover, namely the first graphic information, is first obtained from the construction image data.
In this embodiment of the application, the graphic information of the manhole cover is acquired by means of a pre-trained manhole cover recognition model. The manhole cover recognition model can be trained with reference to a conventional image recognition model training method, which is not described herein.
The manner of recognizing the graphic information of the manhole cover is not particularly limited, as long as the graphic information of the manhole cover can be obtained.
And 203, acquiring first coordinate information and central point elevation of the first graphic information to obtain first target data, wherein the first target data comprises the first graphic information, the first coordinate information and the central point elevation.
Specifically, the construction image data is a digital orthographic image, from which the first coordinate information and the center point elevation of the manhole cover can be acquired.
Furthermore, the digital orthographic image can be processed with a common aerial survey processing tool, such as aerial survey software or a GIS, to obtain the first coordinate information and the center point elevation of the manhole cover.
In one embodiment of the present application, the first coordinate information is planar coordinate information of the manhole cover.
Operation 204, performing first identification recognition on the first graphic information to obtain first identification information of the first graphic information.
Specifically, manhole covers come in multiple categories, for example rainwater, sewage and electric power manhole covers, and a conventional manhole cover bears characters corresponding to its category. When the construction position of manhole covers is verified, every manhole cover in the target construction area needs to be verified; therefore, in order to better identify each individual manhole cover, a unique mark needs to be designed for each manhole cover.
In an embodiment of the present application, a unique and permanent graphic mark, such as a two-dimensional code or barcode, may be sprayed or printed on each manhole cover, and the size of a single pixel in the graphic mark is not less than a set precision, for example 2cm.
Thus, after the first graphic information of the manhole cover is obtained from the construction image data, the graphic mark in the first graphic information can be identified by recognizing the first graphic information.
Further, each graphic mark corresponds to a unique identifier, namely the first identification information; the graphic mark can be obtained by performing first graphic recognition on the first graphic information, and the first identification information can be obtained by recognizing the graphic mark.
In an embodiment of the present application, the first identification information corresponding to the graphic mark is acquired only when the graphic mark is identified; when the graphic mark is not identified, this indicates that the recognized manhole cover is not within the current construction scope, and position verification is not performed on it.
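For illustration only, a minimal sketch of this mark recognition step, assuming the graphic mark is a two-dimensional code and using OpenCV's QR code detector; the actual decoder is not prescribed above.

```python
import cv2  # pip install opencv-python

def read_mark(cover_crop) -> str | None:
    """Decode the graphic mark on a manhole cover image crop.

    Assumes the mark is a QR code; returns None when no mark is found,
    i.e. the cover is outside the current construction scope.
    """
    data, points, _ = cv2.QRCodeDetector().detectAndDecode(cover_crop)
    return data or None

crop = cv2.imread("cover_crop.png")  # hypothetical crop from the orthophoto
if crop is not None:
    mark_id = read_mark(crop)
    if mark_id is None:
        print("no mark found: skip position verification for this cover")
```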
In operation 205, first model identification information matched with the first identification information is obtained from the construction table according to the first identification information.
In one embodiment of the application, the construction table is obtained by the mobile device by: receiving image information of a first target object, wherein the image information is acquired under the construction condition of the first target object; receiving first information corresponding to the image information, wherein the first information comprises model identification information, category information and installation information of the first target object; and generating the construction table based on the first information.
Specifically, in the construction process, each construction worker is equipped with a mobile device with a built-in manhole cover registration system. When a manhole cover is installed, a manhole cover identification and registration step is added. The specific steps are as follows: (1) opening the manhole cover registration system on the mobile device, and scanning the identification information on the manhole cover, namely the image information; (2) receiving the input for registration, such as the model identification information, the category information, and attribute fields such as the installation information; (3) identifying and registering each manhole cover to be constructed according to the above operations, and generating a table comprising the identification and registration information of each manhole cover, so as to obtain the construction table.
Further, since each attribute field of each manhole cover has a corresponding relation with the image information of the manhole cover, the attribute fields also correspond to one another, and after the identification information is determined, the model identification information corresponding to the identification information can be obtained. In the verification process, after the first identification information of the manhole cover is acquired, the first identification information is matched against the construction table, the identification information matching the first identification information is looked up, and the model identification information corresponding to that identification information, namely the first model identification information, is then acquired.
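For illustration only, the construction table lookup of operation 205 can be sketched as a simple mapping; the column names are assumptions based on the attribute fields listed above.

```python
# A sketch of the construction table as an in-memory mapping; the column
# names are assumptions based on the attribute fields the text lists.
construction_table = [
    {"mark_id": "QR-0017",          # identification information on the cover
     "model_id": "BIM-MH-0017",     # model identification information
     "category": "sewage",          # category information
     "installed": "2023-08-01"},    # installation information
]

def find_model_id(first_id: str) -> str | None:
    """Operation 205: match the first identification info against the table."""
    for row in construction_table:
        if row["mark_id"] == first_id:
            return row["model_id"]
    return None

print(find_model_id("QR-0017"))  # BIM-MH-0017
```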
Operation 206, determining a model object corresponding to the first target object in the design model according to the first model identification information.
Specifically, the manhole cover model in the design model is configured with the first model identification information in the design stage.
Furthermore, the manhole cover model corresponding to the manhole cover, namely the model object, can be obtained through the first model identification information.
In operation 207, model coordinate data and a model center point elevation of the model object are obtained, and model data is obtained, wherein the model data includes the model coordinate data and the model center point elevation.
Specifically, by acquiring the bounding box and the projection plane of the manhole cover model, the three-dimensional plane coordinate information (geometry) of the manhole cover model and the elevation (Z) of its center point, namely the model coordinate information and the model center point elevation, can be calculated. The process of obtaining the model coordinate data and the center point elevation of the model object from the design model can simply follow conventional methods and is not described herein.
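For illustration only, a minimal sketch of deriving the plane center and center point elevation from a bounding box, assuming the bounding box is given as its eight corner points; the coordinates below are illustrative.

```python
import numpy as np

def center_and_elevation(bbox_corners: np.ndarray) -> tuple[np.ndarray, float]:
    """Derive the plane center and center point elevation from a bounding box.

    Sketch assuming the bounding box is given as its 8 corner points in
    (X, Y, Z); the center is the midpoint of the extent and the elevation
    is the Z of that midpoint.
    """
    lo, hi = bbox_corners.min(axis=0), bbox_corners.max(axis=0)
    center = (lo + hi) / 2.0
    return center[:2], float(center[2])

# Hypothetical axis-aligned bounding box of a manhole cover model (8 corners).
corners = np.array([[x, y, z]
                    for x in (484544.5, 484545.3)
                    for y in (4325228.1, 4325228.9)
                    for z in (9.20, 9.42)])
plane_center, z_center = center_and_elevation(corners)
print(plane_center, z_center)  # [484544.9, 4325228.5] and 9.31
```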
In operation 208, a target surface of the first target object is generated in the design model based on the first coordinate information and the center point elevation.
In one embodiment of the present application, the coordinate axes of the digital orthographic image and of the design model are the same and form a common geographic coordinate system, so the coordinates in the design model are consistent with the coordinates obtained from the digital orthographic image.
Specifically, the first coordinate information is the surface coordinate information of the manhole cover; because it shares the coordinate axes of the design model, the surface coordinate information of the manhole cover and the center point elevation can be directly imported into the design model, and a target surface having the center point elevation is generated in the design model.
In one embodiment of the present application, the coordinate axes of the design model and the digital orthographic image are not particularly limited, and may be different.
Specifically, under different conditions, the first coordinate information and the central point elevation can be converted according to the coordinate axis of the design model and the coordinate axis of the digital orthographic image to achieve the consistency with the design model, and then the target surface is constructed in the design model through the converted coordinate information and the central point elevation. It should be noted that, the specific coordinate conversion method is only needed to refer to a common coordinate conversion scheme, and will not be described herein.
In operation 209, a model surface of the model object is determined based on the model coordinate data.
Further, the model surface corresponding to the manhole cover model is obtained from the design model through the model coordinate information of the manhole cover model, namely the three-dimensional surface coordinate information of the manhole cover model.
At operation 210, an intersection area of the target surface and the model surface is determined.
Specifically, according to the first coordinate information of the manhole cover and the model coordinate information of the manhole cover model, the overlapping area of the target surface and the model surface, namely the intersection area, can be obtained.
Operation 211, determining a verification result of the first target object according to the intersection area and the central point elevation.
Specifically, according to the intersection area and the center point elevation, the degree of deviation of the manhole cover in the plane and in height can be determined.
In one embodiment of the application, when no overlapping area exists, the verification result is "severely offset", and the target surface and the model surface corresponding to the manhole cover can be displayed in red in the design model; when an overlapping area exists, a first threshold separates "slightly offset" from "not offset", and a difference between the center point elevations exceeding a set distance is likewise classified as slightly offset. The first threshold may be the percentage of the overlapping area relative to the manhole cover area, determined according to the actual error requirement, for example 40%, and the set distance may also be configured according to the actual error requirement, for example 30cm.
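For illustration only, a minimal sketch of this classification using shapely, with the example thresholds stated above (40% overlap ratio, 30cm elevation tolerance); the polygon coordinates are illustrative.

```python
from shapely.geometry import Polygon  # pip install shapely

def classify(target: Polygon, model: Polygon,
             z_target: float, z_model: float,
             area_ratio: float = 0.40, z_tol: float = 0.30) -> str:
    """Classify a manhole cover offset from surface overlap and elevation.

    Thresholds follow the example values in the text (40%, 30cm); both are
    configurable according to the actual error requirement.
    """
    overlap = target.intersection(model).area
    if overlap == 0.0:
        return "severely offset"
    if overlap / target.area < area_ratio or abs(z_target - z_model) > z_tol:
        return "slightly offset"
    return "not offset"

# Hypothetical square footprints a few centimetres apart.
t = Polygon([(0, 0), (0.8, 0), (0.8, 0.8), (0, 0.8)])
m = Polygon([(0.05, 0.05), (0.85, 0.05), (0.85, 0.85), (0.05, 0.85)])
print(classify(t, m, z_target=9.31, z_model=9.28))  # not offset
```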
Further, after the verification result is obtained, the verification result is stored for subsequent checking.
Therefore, according to the embodiment of the application, the manhole cover position is identified through the digital orthographic image, which, compared with a manual verification mode, offers high verification precision and high verification efficiency. Moreover, the dedicated mark applied during manhole cover manufacturing, together with the registration during construction, overcomes the defect that the information recognizable from the image alone is insufficient to match the model. In addition, by checking the spatial data in both height and plane, the embodiment of the application avoids the misjudgments caused by the limits of the human three-dimensional visual angle.
Fig. 3 shows a third implementation flow chart of the location verification method according to the embodiment of the present application.
Referring to fig. 3, fig. 3 shows a further embodiment of the position verification method of the present application. This embodiment differs from fig. 2 and, on the basis of the embodiment of fig. 1, describes the model data extraction and verification processes more specifically and optimizes them to a certain extent in other implementation manners. The application scenario of this embodiment of the present application is the same as that of fig. 2: the target object is a first target object, the target data is first target data, and the first target object may be an infrastructure that is exposed on the surface after construction is completed and whose image can be collected, such as a manhole cover; for convenience of description, this embodiment of the present application is also described below with a manhole cover as the first target object.
Referring to fig. 3, the method of the embodiment of the present application includes:
in operation 301, construction image data of a target construction area is acquired, the construction image data being a digital orthographic image of the target construction area.
In operation 302, first graphic information of a first target object is extracted from the construction image data.
In operation 303, first coordinate information and a center point elevation of the first graphic information are obtained, and first target data is obtained, where the first target data includes the first graphic information, the first coordinate information and the center point elevation.
In operation 304, a reference three-dimensional scene is established, the reference three-dimensional scene including a reference coordinate system.
Specifically, to better fuse and display the design model and the digital orthophoto, a reference three-dimensional scene is first created, where the reference three-dimensional scene may be a three-dimensional sphere that itself contains a geographic coordinate system (e.g., CGCS 2000), i.e., a reference coordinate system.
In operation 305, the digital orthographic image, the design model and the reference three-dimensional scene are fused based on the reference coordinate system, the first coordinate system of the digital orthographic image and the second coordinate system of the design model to obtain a reference digital orthographic image and a reference design model.
Specifically, in order to better import the digital orthographic image and the design model, the first coordinate system of the digital orthographic image and the second coordinate system of the design model are acquired first; through coordinate system conversion among the reference coordinate system, the first coordinate system and the second coordinate system, the design model is displayed in the three-dimensional scene in the reference coordinate system as the reference design model, and the digital orthographic image is displayed in the three-dimensional scene in the reference coordinate system as the reference image model, namely the reference digital orthographic image.
Further, by converting the first coordinate system into the reference coordinate system, the digital orthographic image can be introduced into the three-dimensional scene in the form of the reference coordinate system, and displayed in the form of the reference digital orthographic image.
In this embodiment of the application, the three-dimensional scene further comprises a spatial coordinate system corresponding to the geographic coordinate system, where (longitude, latitude, elevation) and the spatial coordinates (X, Y, Z) may represent the same location; for example, (115.821437, 39.060436, 9.308) and (484544.870423, 4325228.529692, 9.308) represent the same location. Thus, the design model can be introduced into the three-dimensional scene in the reference coordinate system by converting between the spatial coordinate system and the second coordinate system, and displayed as the reference design model in the three-dimensional scene.
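For illustration only, such a geographic-to-projected conversion can be sketched with pyproj. The projection is not named above; a transverse Mercator on the CGCS2000/GRS80 ellipsoid with an assumed central meridian of 116°E and a 500km false easting reproduces the example coordinate pair to within about a metre, so those projection parameters are an assumption.

```python
from pyproj import Transformer  # pip install pyproj

# Assumed projection: transverse Mercator on the CGCS2000/GRS80 ellipsoid
# with central meridian 116E and 500 km false easting. The text does not
# state the actual CRS; these parameters merely reproduce the example pair
# (115.821437, 39.060436) -> (~484545, ~4325229) to within about a metre.
to_xy = Transformer.from_crs(
    "EPSG:4490",  # CGCS2000 geographic coordinates
    "+proj=tmerc +lat_0=0 +lon_0=116 +k=1 +x_0=500000 +y_0=0 "
    "+ellps=GRS80 +units=m",
    always_xy=True,
)

x, y = to_xy.transform(115.821437, 39.060436)
print(round(x, 1), round(y, 1))  # approximately 484544.9 4325228.5
```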
And 306, performing second identification recognition on the first graphic information to obtain second identification information of the first graphic information.
Specifically, the second identification recognition process may refer to the description of the first identification recognition in operation 204, which is not described herein.
Operation 307, according to the second identification information, obtaining second model identification information matched with the second identification information from the construction table.
Specifically, the process of obtaining the construction table and the second model identification information may refer to the description of the construction table and the first model identification information in operation 205, which is not described herein.
Operation 308, according to the second model identification information, obtaining a model object corresponding to the second model identification information in the reference design model.
Specifically, the model object corresponding to the second model identification information may be acquired from the reference design model.
Operation 309, obtaining model center point elevation and model coordinate data of the model object; the model data comprise model center point elevation and model coordinate data.
In one embodiment of the present application, model center point elevation and model coordinate data may be obtained by: acquiring a bounding box of the model object; acquiring a plurality of vertex coordinates of the bounding box based on the bounding box; determining a model center point elevation of the model object based on the plurality of vertex coordinates; generating a projection surface based on the reference coordinate system; projecting the model object to a projection surface to obtain a reference model surface; and obtaining model coordinate data of the reference model surface.
It should be noted that, the specific bounding box and the model center point elevation acquiring process may refer to conventional model bounding boxes and schemes for center point elevation acquiring, which are not described herein.
In operation 310, the target data is compared with the model data to obtain a verification result of the first target object.
In one embodiment of the present application, comparing the target data with the model data to obtain a verification result of the first target object includes: determining reference coordinate information corresponding to the first coordinate information and a reference center point elevation corresponding to the center point elevation according to the reference coordinate system and the first coordinate system; generating a reference target surface in the three-dimensional scene according to the reference coordinate information and the reference center point elevation; determining a reference intersection area of the reference target surface and the reference model surface; and determining a verification result of the first target object according to the reference intersection area and the center point elevation.
Specifically, the coordinate system on which the first coordinate information and the center point elevation are based is the first coordinate system; the first coordinate information and the center point elevation are converted into the reference coordinate information and the reference center point elevation based on the reference coordinate system, the reference intersection area is then calculated from the area of the reference target surface and the area of the reference model surface, and the center point elevations are compared to obtain the verification result.
In one embodiment of the application, the verification result is displayed in a reference three-dimensional scene.
Specifically, the verification result may show whether the first target object is qualified, and qualified and unqualified first target objects may be displayed with different labels, for example different colors, in the reference three-dimensional scene. In this way, displaying the target object in the reference three-dimensional scene together with the orthographic image can provide a more accurate position of the first target object for subsequent processing.
Therefore, according to the embodiment of the application, the manhole cover position is identified through the digital orthographic image, which, compared with a manual verification mode, offers high verification precision and high verification efficiency. Moreover, the dedicated mark applied during manhole cover manufacturing, together with the registration during construction, overcomes the defect that the information recognizable from the image alone is insufficient to match the model. In addition, by checking the spatial data in both height and plane within the three-dimensional scene, the embodiment of the application avoids the misjudgments caused by the limits of the human three-dimensional visual angle.
Fig. 4 shows a fourth implementation flow chart of the location verification method according to the embodiment of the present application.
Referring to fig. 4, fig. 4 shows a further embodiment of the position verification method according to the present application. This embodiment differs from fig. 2 and, on the basis of the embodiment of fig. 1, describes the image recognition process, the model data extraction process and the verification process more specifically and optimizes them to a certain extent in other implementation manners. The target data in this embodiment of the present application is second target data, the target object is a second target object, and the second target object may be a construction trench in which infrastructure to be concealed is located after construction, for example a construction trench that has not yet been backfilled or in which the pipeline has not yet been laid; for convenience of description, this embodiment of the present application is described below with a "trench" as the second target object.
Referring to fig. 4, the method of the embodiment of the present application includes:
in operation 401, construction image data of a target construction area is acquired, the construction image data being a digital orthographic image of the target construction area.
Operation 402 obtains second graphic information corresponding to a second target object from the construction image data.
Specifically, the trench graphic information of the trench, namely the second graphic information, is obtained from the construction image data of the target construction area.
In one embodiment of the present application, a trench identification model is trained in advance, and a trench in a digital orthographic image can be identified by the trench identification model.
In operation 403, the center line coordinate information and the second coordinate information of the second graphic information are obtained, and second target data is obtained, where the second target data includes the second graphic information, the center line coordinate information, and the second coordinate information.
Specifically, similarly to the description of operation 203 above, the second coordinate information of the trench, which is the plane coordinate information of the trench, can be identified with an aerial survey processing tool, such as aerial survey software or a GIS; the center line coordinate information of the trench is then calculated from the plane coordinate information.
In one embodiment of the present application, the center line coordinate information is the coordinate information of the center line of the surface corresponding to the plane coordinate information.
Specifically, the plane coordinate information can be regarded as a plurality of coordinates of a plane, and the center line of this plane is obtained, namely the center line coordinate information.
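For illustration only, one way to sketch this centerline extraction is via the minimum rotated rectangle of the trench footprint; the method choice is an assumption, as no particular algorithm is prescribed above.

```python
from shapely.geometry import LineString, Polygon  # pip install shapely

def centerline(footprint: Polygon) -> LineString:
    """Approximate a trench centerline from its plane coordinates.

    Sketch: fit the minimum rotated rectangle and connect the midpoints of
    its two short edges. The text does not prescribe this method.
    """
    rect = list(footprint.minimum_rotated_rectangle.exterior.coords)[:4]
    edges = [(rect[i], rect[(i + 1) % 4]) for i in range(4)]
    # Sort edges by length; the two shortest are the trench ends.
    edges.sort(key=lambda e: LineString(e).length)
    mids = [LineString(e).interpolate(0.5, normalized=True) for e in edges[:2]]
    return LineString(mids)

trench = Polygon([(0, 0), (50, 2), (50.2, 3.5), (0.2, 1.5)])  # hypothetical
print(centerline(trench))  # roughly LINESTRING (0.1 0.75, 50.1 2.75)
```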
In operation 404, a pipeline component corresponding to the second coordinate information is obtained from the design model based on the second coordinate information.
Specifically, a trench may cover a large area, and a trench in which multiple pipelines are to be laid may only have an approximate extent. Thus, the component models covered by the surface coordinate information in the design model can be determined based on the surface coordinate information of the trench. A component model is the model, in the design model, corresponding to a component to be laid in the trench, and the component may be infrastructure to be laid by trenching, such as a pipeline.
Further, a model component surface is built in the design model according to the surface coordinate information, the center line coordinate information, the coordinate axes of the digital orthographic image and the coordinate axes of the design model, and the pipeline components covered by the model component surface in the design model are obtained. The process of creating the model component surface is similar to the process of creating the target surface in operation 208, and is not described in detail herein.
In an embodiment of the present application, when there is no pipeline component corresponding to the second coordinate information, the current trench carries a risk of having been dug by mistake, and a special label can be displayed as a reminder at the position corresponding to the model component surface in the design model.
At operation 405, a bounding box of the pipeline component is acquired.
In one embodiment of the application, one or more pipeline components may be covered, and each pipeline component has its own bounding box. When a single pipeline component is covered, the bounding box of that pipeline component is used in the subsequent steps; when multiple pipeline components are covered, the bounding boxes of the pipeline components are calculated, and a total bounding box of the multiple pipeline components is obtained from them as the bounding box used subsequently (a minimal merge sketch follows the note below).
It should be noted that, the calculation process of the bounding box may refer to a conventional bounding box calculation method, which is not described herein.
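For illustration only, a minimal sketch of merging per-component bounding boxes into a total bounding box, assuming axis-aligned boxes given by their minimum and maximum corners.

```python
import numpy as np

def total_bounding_box(boxes: list[np.ndarray]) -> np.ndarray:
    """Merge per-component bounding boxes into one total bounding box.

    Sketch assuming axis-aligned boxes given as (2, 3) arrays of
    (min corner, max corner); the merged box spans all of them.
    """
    mins = np.min([b[0] for b in boxes], axis=0)
    maxs = np.max([b[1] for b in boxes], axis=0)
    return np.stack([mins, maxs])

# Two hypothetical pipeline component boxes (min corner, max corner).
a = np.array([[0.0, 0.0, -2.0], [30.0, 1.0, -1.0]])
b = np.array([[28.0, 0.5, -2.2], [60.0, 1.5, -1.2]])
print(total_bounding_box([a, b]))  # spans x 0..60, y 0..1.5, z -2.2..-1.0
```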
In operation 406, model centerline coordinate information for the pipeline component is determined based on the bounding box, resulting in model data, the model data including the model centerline coordinate information.
Specifically, after the bounding box of the pipeline component is obtained, the center line of the bounding box is acquired, and the coordinate information of this center line, namely the model center line coordinate information, is obtained.
In operation 407, a center line included angle and a center line vertical clear distance are determined according to the center line coordinate information and the model center line coordinate information.
Specifically, the recognized centerline coordinate information and the model centerline coordinate information can be regarded as two centerlines, and the centerline included angle and the centerline vertical clear distance between these two lines can be obtained from the two sets of coordinate information.
In one embodiment of the application, a center point can be identified from the centerline coordinate information, the corresponding model center point can be obtained from the model centerline coordinate information, and the centerline vertical clear distance can be determined from the height difference between the two center points.
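One possible reading of this computation is sketched below: the included angle is taken between the horizontal direction vectors of the two centerlines, and the vertical clear distance is the height difference of their midpoints. Both formulas are assumptions consistent with the description above.

```python
# Hedged sketch: centerline included angle and vertical clear distance.
# Each line is ((x1, y1, z1), (x2, y2, z2)).
import math

def centerline_angle(line_a, line_b):
    """Angle between the horizontal directions of two lines, in [0, 90] degrees."""
    (ax1, ay1, _), (ax2, ay2, _) = line_a
    (bx1, by1, _), (bx2, by2, _) = line_b
    va = (ax2 - ax1, ay2 - ay1)
    vb = (bx2 - bx1, by2 - by1)
    dot = abs(va[0] * vb[0] + va[1] * vb[1])
    cos = min(1.0, dot / (math.hypot(*va) * math.hypot(*vb)))
    return math.degrees(math.acos(cos))

def vertical_clear_distance(line_a, line_b):
    """Height difference between the midpoints (center points) of the lines."""
    za = (line_a[0][2] + line_a[1][2]) / 2.0
    zb = (line_b[0][2] + line_b[1][2]) / 2.0
    return abs(za - zb)
```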
In operation 408, a verification result of the second target object is determined according to the included angle of the center line and the vertical clear distance of the center line.
Specifically, the centerline included angle is the angle between the two centerlines; it shows whether the laying direction of the trench is consistent with the laying direction in the design model and thus checks the consistency of the pipeline alignment.
Further, the centerline vertical clear distance is the height separation between the two centerlines and is used to check whether the pipeline elevations are consistent.
In an embodiment of the present application, an angle condition may be configured for the centerline included angle and a height condition for the centerline vertical clear distance, and the verification result of the second target object is determined by comparing the included angle against the angle condition and the vertical clear distance against the height condition.
In one embodiment of the present application, the verification result may be determined as follows: when the centerline included angle does not satisfy the angle condition, the verification result is a heavy construction risk; when the centerline included angle satisfies the angle condition, the centerline vertical clear distance is determined according to the centerline coordinate information and the model centerline coordinate information; when the vertical clear distance satisfies the vertical clear distance condition, the verification result is no construction risk; and when the vertical clear distance does not satisfy the vertical clear distance condition, the verification result is a moderate construction risk.
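This classification can be sketched as follows; the 30-degree angle limit comes from the third application example below, while the clear-distance limit is a hypothetical placeholder value.

```python
# Hedged sketch of the risk classification described above.
ANGLE_LIMIT_DEG = 30.0        # from the third application example
CLEAR_DISTANCE_LIMIT_M = 0.3  # hypothetical placeholder, in meters

def classify_risk(angle_deg, clear_distance_m):
    if angle_deg > ANGLE_LIMIT_DEG:              # angle condition not satisfied
        return "heavy construction risk"
    if clear_distance_m <= CLEAR_DISTANCE_LIMIT_M:
        return "no construction risk"
    return "moderate construction risk"
```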
In one embodiment of the present application, display marks are also placed at the corresponding positions of the design model according to the verification result. For example, a pipeline component with a heavy construction risk is displayed in red, one with a moderate construction risk in yellow, and one with no construction risk in green. The form of the display mark is not limited here; other marking schemes can equally be applied in the embodiments of the present application and fall within its scope.
Therefore, the embodiment of the application recognizes the position of the trench through image recognition on the digital orthographic image, calculates the coordinate information of the trench from that image, and matches the coordinate information against the design model to automatically obtain the pipeline components designed to be laid in the trench, thereby completing the check of whether the trench is correct. During verification, the spatial data in the three-dimensional scene are checked in both height and plan, which avoids the misjudgment caused by the three-dimensional viewing angle in human visual inspection.
In order to facilitate a further understanding of embodiments of the present application, the present application is illustrated below in three specific examples.
The target object of the first specific application example of the embodiment of the present application is a manhole cover. Specifically, the first specific application example of the present application includes:
Step 1: when the manhole cover is manufactured, the single-character identifiers such as 'rainwater', 'sewage' and 'power' are replaced with a unique, permanent QR-code-like graphic identifier in which a single pixel is no smaller than 2 cm.
Step 2: mobile devices with a built-in manhole cover registration system are prepared for site construction, and a cover identification and registration step is added when each cover is installed. The specific steps are: (1) the mobile terminal opens the manhole cover registration system and scans the QR-like code on the cover; (2) input is received, including attribute fields such as the cover type, whether the unique code exists in the BIM model ('yes' or 'no') and the installation date; a manhole cover installation information table is then generated with the recognized unique digital identifier (ID) as its primary key and uploaded, completing data registration and storage as the retrieval library used for code lookup during later image recognition.
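For illustration, one row of the manhole cover installation information table might be assembled as below; all field names are hypothetical, following the attributes listed above.

```python
# Hedged sketch: one registration record keyed by the unique digital ID.
# Field names are hypothetical stand-ins for the table columns above.
import datetime

def registration_record(unique_id, cover_type, code_in_bim):
    return {
        "id": unique_id,            # decoded from the QR-like code (primary key)
        "cover_type": cover_type,   # e.g. rainwater, sewage, power
        "code_in_bim": code_in_bim, # "yes" if the unique code exists in the BIM model
        "installed": "yes",
        "install_date": datetime.date.today().isoformat(),
    }
```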
Step 3: a flight path is set according to the manhole cover construction area, and digital orthographic images are collected weekly with an accuracy of 1.5 cm.
Step 4: the obtained digital orthophotos are imported into the PC-side system; the qualifying cover graphic information is accurately recognized by the trained intelligent manhole cover recognition model and stored in picture form; the coordinates of the graphic information are extracted and stored in geometry format, and the center-point elevation (Z) is extracted. The cover graphic information comprises a high-resolution cover image, and the extracted coordinates are the cover surface coordinates.
Step 5: the QR-like code on the manhole cover is read by image recognition and converted into the unique digital identifier (ID), and the 'installed' field of the manhole cover installation information table obtained in step 2 is read. The records already installed are retained; the unique component code, the construction date, the cover type and the geometry from the previous step are associated, the acquisition date is added, and each record becomes one row in a new library table, yielding the image recognition table.
Step 6: position matching calculation is performed. The image recognition table is filtered by the acquisition date from step 5 and imported as required; the graphics are drawn as three-dimensional surfaces in the CIM three-dimensional scene, and the design model data corresponding to the covers are imported. The unique component codes collected by image recognition are matched against the component codes in the BIM model and verified according to the following rules: (1) collected data that carry a unique component code but cannot be found in the design model are rendered directly in gray in the digital orthographic image and stored in the matching result table as 'superfluous cover points'; (2) for model data with a correspondence, the CIM platform obtains the bounding box and projection surface of the corresponding three-dimensional model and calculates the three-dimensional surface coordinate data (geometry) of the cover and the height value (Z) of its center point. Each pair of records with the same unique component code is then compared, the inputs being the cover surface coordinates and center-point elevation recognized from the image on one side, and the cover surface coordinates and center elevation extracted by the CIM platform from the BIM model on the other. The overlapping area of the two surfaces is calculated: no overlap is stored as 'heavy offset' and displayed in red; an overlap below 40% of the cover area is classed as 'slight offset', and at or above 40% as 'no offset'; a center-point height difference of more than 30 cm is likewise classed as 'slight offset'. Slight offsets are displayed in yellow and 'no offset' in green. The matching results are rendered in the three-dimensional scene in these colors and stored in the matching result table for conditional filtering and display.
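The matching rules of item (2) can be sketched as below, again assuming shapely for the area computation; elevations are assumed to be in meters, so 0.30 corresponds to the 30 cm threshold.

```python
# Hedged sketch of the offset classification in step 6, rule (2).
from shapely.geometry import Polygon

def match_cover(image_coords, model_coords, image_z, model_z):
    img, mdl = Polygon(image_coords), Polygon(model_coords)
    overlap = img.intersection(mdl).area
    if overlap == 0.0:
        return "heavy offset"       # rendered red
    if overlap / img.area < 0.40 or abs(image_z - model_z) > 0.30:
        return "slight offset"      # rendered yellow
    return "no offset"              # rendered green
```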
It should be noted that the description of the first specific application example of the embodiment of the present application may be understood with reference to the description of any one of fig. 1 to 3 and is not repeated here.
Therefore, the first specific application example identifies the positions of the manhole covers through high-precision aerial imagery and image recognition, which removes the problems of manual point-by-point survey: missed positions, the large accuracy impact of manually held survey poles, and long working times. Manufacturing the covers with a special identifier and registering their information at installation overcomes the shortcomings of plain image recognition, namely that the current state of a cover cannot be identified, that only surface results are output, supporting neither screening by type nor extraction on demand, and that the output images carry no unique identifier that could be matched to the model. Matching the spatial data in the three-dimensional scene by distance and position according to their coordinate locations removes the misjudgment caused by the three-dimensional viewing angle in human visual inspection, and generating the offset threshold according to the size and category of the cover makes the matching mechanism more reasonable.
The second specific application example of the embodiment of the application is developed from the first one. Because digital orthographic images cannot be collected in some construction areas, the embodiment of the application performs verification in those areas by erecting or installing cameras. The camera of the second specific application example is a monitoring camera with 5G communication and battery power, fitted with a built-in GPS positioning chip, a gyroscope and a communication module for exchanging data with the back end of the CIM platform. Specifically, the second specific application example of the present application includes:
Step 1: when verification is needed, the camera is erected, installed and started; through its GPS and gyroscope it acquires camera parameters such as the current coordinate position (X1, Y1, Z1), the horizontal orientation (YAW, in degrees), the pitch angle (PITCH, in degrees), the device aspect ratio and the current video stream address, and uploads them to the CIM platform through the data transmission module.
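For illustration, the reported yaw and pitch can be converted into a view-direction vector for the projection step that follows. The angle convention (yaw measured clockwise from north, pitch positive upward) is an assumption; the actual fusion pipeline is platform-specific.

```python
# Hedged sketch: camera view direction from the reported yaw and pitch.
import math

def view_direction(yaw_deg, pitch_deg):
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.sin(yaw) * math.cos(pitch),   # east component
            math.cos(yaw) * math.cos(pitch),   # north component
            math.sin(pitch))                   # up component
```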
Step 2: the CIM platform displays the camera's current coordinate position (X1, Y1, Z1) in the three-dimensional scene as an anchor icon. When the anchor is clicked, the video is projected into the three-dimensional scene according to the camera parameters acquired in the previous step and applied as a material to the BIM model entities in the scene, achieving a video fusion effect. Finally, the BIM model is opened, and it can be clearly seen whether the pipeline positions in the video are consistent with the model and whether any offset exists. For positions with problems, screenshots can be recorded and archived directly, preserving the verification history.
It should be noted that the video fusion part of the second specific application example may follow any conventional video fusion scheme, which is not described here.
Therefore, the second specific application example solves, by means of cameras meeting these specific requirements, the inconvenience of frequently relocating ordinary cameras: the position and lens parameters are acquired dynamically and reported actively, the video is fused with the three-dimensional scene, and the technical problem of verifying positions in orthographic blind areas is solved.
The application scenario of the third specific application example of the embodiment of the application is identifying excavated trenches in the construction area that have not been backfilled or in which no pipeline has yet been laid. Specifically, the third specific application example of the present application includes:
Step 1: a flight path is set according to the construction area, and digital orthographic images are collected weekly with an accuracy of 1.5 cm.
Step 2: the obtained digital orthophotos are imported into the PC-side system, the qualifying trench graphic information is accurately recognized by the trained intelligent trench recognition model, the planar data are extracted, and the centerline is calculated.
Step 3: in the BIM model, a spatial query is run with the planar data for the pipeline model components within the covered range. If there is no query result, the trench carries a risk of having been dug in the wrong place, and a special annotation reminder is issued. If there is a query result, the included angle between the centerline from step 2 and the model component is calculated; an angle exceeding 30 degrees indicates a risk of inconsistent laying, which checks the consistency of the pipeline alignment. In the sections of the trench where the pipeline has already been laid, the coordinates (X2, Y2, Z2) of the pipeline center point are identified and compared in height with the same position in the model to check whether the pipeline elevations are consistent.
It should be noted that the description of the third specific application example of the present application may be understood with reference to the descriptions of fig. 1 and fig. 4 and is not repeated here.
Therefore, the third specific application example, aimed at excavated trenches that have not been backfilled or had pipelines laid during construction, strengthens supervision of the construction process by recognizing the trench, extracting its surface and center-point height, and performing model spatial query calculations to check consistency with the model.
Fig. 5 shows a schematic diagram of a composition structure of a position verification device according to an embodiment of the present application.
Based on the above position verification method, the embodiment of the present application further provides a position verification device, where the position verification device 50 includes: a first obtaining module 501, configured to obtain construction image data of a target construction area, where the construction image data is a digital orthographic image of the target construction area; the identifying module 502 is configured to perform image identification on the construction image data to obtain a target object and target data of the target object; a second obtaining module 503, configured to obtain, according to the target data, model data of a model object corresponding to the target object from a design model of the target construction area; and the comparison module 504 is configured to compare the target data with the model data to obtain a verification result of the target object.
In one embodiment of the present application, the target object is a first target object, and the target data includes first target data; accordingly, the identification module 502 includes: the extraction sub-module is used for extracting first graphic information of a first target object from the construction image data; the first acquisition sub-module is used for acquiring first coordinate information and central point elevation of the first graphic information to obtain first target data, wherein the first target data comprises the first graphic information, the first coordinate information and the central point elevation.
In one embodiment of the present application, the second obtaining module 503 includes: the first identification sub-module is used for carrying out first identification recognition on the first graphic information to obtain first identification information of the first graphic information; the second acquisition sub-module is used for acquiring, from the construction table, first model identification information matched with the first identification information according to the first identification information; the first determining submodule is used for determining the model object corresponding to the first target object in the design model according to the first model identification information; and the third acquisition sub-module is used for acquiring the model coordinate data and the model center point elevation of the model object to obtain the model data, wherein the model data comprises the model coordinate data and the model center point elevation.
In one embodiment of the present application, the comparison module 504 includes: the generating sub-module is used for generating a target surface of the first target object in the design model according to the first coordinate information and the central point elevation; the second determining submodule is used for determining the model surface of the model object according to the model coordinate data; the third determining sub-module is used for determining the intersecting area of the target surface and the model surface; and the fourth determining submodule is used for determining the verification result of the first target object according to the intersecting area and the central point elevation.
In one embodiment of the present application, the second obtaining module 503 includes: the building sub-module is used for building a reference three-dimensional scene, wherein the reference three-dimensional scene comprises a reference coordinate system; the fusion sub-module is used for fusing the digital orthographic image, the design model and the reference three-dimensional scene based on the reference coordinate system, the first coordinate system of the digital orthographic image and the second coordinate system of the design model to obtain a reference digital orthographic image and a reference design model; the second identification sub-module is used for carrying out second identification on the first graphic information to obtain second identification information of the first graphic information; the model identification acquisition sub-module is used for acquiring second model identification information matched with the second identification information from the construction table according to the second identification information; the object acquisition sub-module is used for acquiring a model object corresponding to the second model identification information in the reference design model according to the second model identification information; the data acquisition sub-module is used for acquiring the elevation of the model center point of the model object and the model coordinate data; the model data comprise model center point elevation and model coordinate data.
In an embodiment of the present application, the target object is a second target object, and the target data includes second target data; accordingly, the identification module 502 includes: a fourth obtaining sub-module, configured to obtain second graphic information corresponding to the second target object from the construction image data; and the fifth acquisition sub-module is used for acquiring the central line coordinate information and the second coordinate information of the second graphic information to obtain second target data, wherein the second target data comprises the second graphic information, the central line coordinate information and the second coordinate information.
In one embodiment of the present application, the second obtaining module 503 includes: a sixth obtaining sub-module, configured to obtain, from the design model, a pipeline component corresponding to the second coordinate information according to the second coordinate information; a seventh acquisition sub-module for acquiring a bounding box of the pipeline member; and the fourth determination submodule is used for determining the model center line coordinate information of the pipeline component according to the bounding box to obtain model data, wherein the model data comprises the model center line coordinate information.
In one embodiment of the present application, the comparison module 504 includes: a fifth determining submodule for determining a center line included angle and a center line vertical clear distance according to the center line coordinate information and the model center line coordinate information; and the sixth determining submodule is used for determining a verification result of the second target object according to the included angle of the central line and the vertical clear distance of the central line.
It should be noted that the description of the apparatus according to the embodiment of the present application is similar to the description of the method embodiments above and has similar beneficial effects, so a detailed description is omitted. Technical details of the position verification device provided in the embodiment of the present application can be understood from the description of any one of fig. 1 to fig. 4.
According to an embodiment of the present application, the present application also provides an electronic device and a non-transitory computer-readable storage medium.
FIG. 6 shows a schematic block diagram of an example electronic device 60 that may be used to implement embodiments of the application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 6, the electronic device 60 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic device 60 can also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Various components in the electronic device 60 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the electronic device 60 to exchange information/data with other devices through a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the respective methods and processes described above, such as a position verification method. For example, in some embodiments, the location verification method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 60 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the position verification method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the location verification method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present application may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of the present application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution disclosed in the present application can be achieved, which is not limited herein.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A method of location verification, the method comprising:
acquiring construction image data of a target construction area, wherein the construction image data is a digital orthographic image of the target construction area;
performing image recognition on the construction image data to obtain a target object and target data of the target object;
according to the target data, model data of a model object corresponding to the target object is obtained from a design model of the target construction area;
comparing the target data with the model data to obtain a verification result of the target object;
the target object is a first target object, and the target data comprises first target data; correspondingly, the image recognition of the construction image data comprises the following steps:
extracting first graphic information of the first target object from the construction image data;
acquiring first coordinate information and central point elevation of the first graphic information to obtain first target data, wherein the first target data comprises the first graphic information, the first coordinate information and the central point elevation;
according to the target data, obtaining model data of a model object corresponding to the target object from a design model of the target construction area, including:
performing first identification recognition on the first graphic information to obtain first identification information of the first graphic information;
according to the first identification information, first model identification information matched with the first identification information is obtained from a construction table;
determining the model object corresponding to the first target object in the design model according to the first model identification information;
and obtaining model coordinate data and model center point elevation of the model object to obtain the model data, wherein the model data comprises the model coordinate data and the model center point elevation.
2. The method of claim 1, wherein the construction image data is obtained by an aerial device using:
and acquiring digital orthographic images of the target construction area according to the set route and the set conditions to obtain the construction image data.
3. The method of claim 1, wherein the construction table is obtained by the mobile device using:
receiving image information of the first target object, wherein the image information is acquired under the construction condition of the first target object;
receiving first information corresponding to the image information, wherein the first information comprises model identification information, category information and installation information of the first target object;
and generating the construction table based on the first information.
4. The method according to claim 1, wherein comparing the target data with the model data to obtain a verification result of the target object comprises:
generating a target surface of the first target object in the design model according to the first coordinate information and the central point elevation;
determining a model surface of the model object according to the model coordinate data;
determining the intersection area of the target surface and the model surface;
and determining the verification result of the first target object according to the intersection area and the central point elevation.
5. The method according to claim 1, wherein the obtaining model data of a model object corresponding to the target object from a design model of the target construction area based on the target data includes:
establishing a reference three-dimensional scene, wherein the reference three-dimensional scene comprises a reference coordinate system;
fusing the digital orthographic image, the design model and the reference three-dimensional scene based on the reference coordinate system, the first coordinate system of the digital orthographic image and the second coordinate system of the design model to obtain a reference digital orthographic image and a reference design model;
performing second identification recognition on the first graphic information to obtain second identification information of the first graphic information;
acquiring second model identification information matched with the second identification information from a construction table according to the second identification information;
according to the second model identification information, obtaining a model object corresponding to the second model identification information in the reference design model;
acquiring the elevation of a model center point of the model object and model coordinate data;
wherein the model data includes the model center point elevation and the model coordinate data.
6. The method of claim 1, wherein the target object is a second target object, the target data comprising second target data;
correspondingly, the image recognition of the construction image data to obtain a target object and target data of the target object includes:
acquiring second graphic information corresponding to the second target object from the construction image data;
and acquiring central line coordinate information and second coordinate information of the second graphic information to obtain second target data, wherein the second target data comprises the second graphic information, the central line coordinate information and the second coordinate information.
7. The method of claim 6, wherein obtaining model data of a model object corresponding to the target object from a design model of the target construction area based on the target data, comprises:
acquiring a pipeline component corresponding to the second coordinate information from the design model according to the second coordinate information;
acquiring a bounding box of the pipeline component;
and determining model center line coordinate information of the pipeline component according to the bounding box to obtain model data, wherein the model data comprises the model center line coordinate information.
8. The method of claim 7, wherein comparing the target data with the model data to obtain a verification result of the target object comprises:
determining a center line included angle and a center line vertical clear distance according to the center line coordinate information and the model center line coordinate information;
and determining a verification result of the second target object according to the included angle of the central line and the vertical clear distance of the central line.
CN202311055293.9A 2023-08-22 2023-08-22 Position verification method Active CN116758269B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311055293.9A CN116758269B (en) 2023-08-22 2023-08-22 Position verification method

Publications (2)

Publication Number Publication Date
CN116758269A CN116758269A (en) 2023-09-15
CN116758269B CN116758269B (en) 2023-11-14

Family

ID=87955565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311055293.9A Active CN116758269B (en) 2023-08-22 2023-08-22 Position verification method

Country Status (1)

Country Link
CN (1) CN116758269B (en)

Also Published As

Publication number Publication date
CN116758269A (en) 2023-09-15

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant