CN114359412B - Automatic calibration method and system for camera external parameters oriented to building digital twins
- Publication number: CN114359412B (application CN202210217830.4A)
- Authority: CN (China)
- Prior art keywords: coordinate system, straight line, camera, image frame, virtual space
- Legal status: Active
Abstract
The application provides a method and a system for automatically calibrating camera external parameters oriented to building digital twins. The method comprises the following steps: when the rotation angle of the camera visual angle meets a preset condition, three vanishing points respectively corresponding to the three coordinate axis directions of the virtual space world coordinate system are determined from the target image frame, and a vanishing point world coordinate system is established according to the three vanishing points and the origin of the camera coordinate system; a first rotation matrix is determined according to the vanishing point coordinates of the three vanishing points in the vanishing point world coordinate system and the pixel coordinates of the three vanishing points in the pixel coordinate system determined from the straight line detection result; and coordinate transformation processing is carried out on the first rotation matrix according to the target axis transformation adjustment relation between the vanishing point world coordinate system and the virtual space world coordinate system to obtain the target rotation matrix, so that the camera external parameter calibration result is updated adaptively as the dynamic camera rotates.
Description
Technical Field
The application relates to the technical field of building digital twins, and in particular to a method and a system for automatically calibrating camera external parameters oriented to building digital twins.
Background
A digital twin makes full use of data such as physical models, sensor updates and operation history, integrates multidisciplinary, multi-physical-quantity, multi-scale and multi-probability simulation processes, and completes the mapping in a virtual space, thereby reflecting the full life cycle of the corresponding physical equipment. In engineering construction scenarios, in order to show the spatial layout of a physical building more intuitively and clearly, engineering personnel usually construct, after the physical building is completed, a building information model that reflects its physical and functional characteristics, and use this model as the digital twin model of the physical building so that the building can be managed through the digital twin model.
Based on this, when constructing or updating the digital twin model, the category and boundary position information of each physical object usually needs to be obtained from pictures taken in the building scene by an image target detection method, and the real position, in the digital twin model, of the virtual object to which a physical object is mapped is determined by converting between the pixel coordinate system and the virtual space world coordinate system (the position coordinates in the digital twin model conform to the virtual space world coordinate system) through the internal and external parameters of the camera.
In current image target detection methods, the camera external parameter calibration result is used to convert position coordinates between the camera coordinate system and the virtual space world coordinate system. The camera external parameters comprise a target rotation matrix and a translation vector. For a dynamic camera the camera position is fixed, but the target rotation matrix changes as the camera rotates. If the camera external parameters cannot follow the rotation of the camera in time, the conversion precision between the camera coordinate system and the virtual space world coordinate system decreases, the accuracy of the computed real position of the virtual object mapped from the physical object decreases, and the model information in the constructed/updated digital twin model is distorted.
Disclosure of Invention
In view of the above, an object of the present application is to provide a method and a system for automatically calibrating camera external parameters oriented to building digital twins, which can adaptively determine the external parameter calibration result as a dynamic camera rotates, so that the coordinate conversion of position coordinates between the camera coordinate system and the virtual space world coordinate system becomes more accurate.
The camera external parameter automatic calibration method oriented to building digital twins is used for calibrating the target rotation matrix of the camera in a target image frame under the virtual space world coordinate system. The target image frame represents a scene image of a physical building scene shot, at a rotated visual angle, by a camera arranged at a fixed position, and the physical building scene is mapped to a building information model scene in the virtual space. The target rotation matrix represents the relative direction between the coordinate axes of the virtual space world coordinate system and those of the camera coordinate system. The method comprises the following steps:
when the rotation angle of the camera visual angle meets a preset condition, determining, from the target image frame and according to the straight line detection result in the edges of the target image frame, three vanishing points respectively corresponding to the three coordinate axis directions of the virtual space world coordinate system, and establishing a vanishing point world coordinate system according to the three vanishing points and the origin of the camera coordinate system, the three coordinate axes of the vanishing point world coordinate system being respectively parallel to those of the virtual space world coordinate system;
determining a first rotation matrix according to the vanishing point coordinates of the three vanishing points in the vanishing point world coordinate system and the pixel coordinates of the three vanishing points in the pixel coordinate system determined from the straight line detection result, the first rotation matrix representing the relative direction between the coordinate axes of the vanishing point world coordinate system and those of the camera coordinate system;
and carrying out coordinate transformation processing on the first rotation matrix according to a target axis transformation adjustment relation between the vanishing point world coordinate system and the virtual space world coordinate system to obtain the target rotation matrix, the target axis transformation adjustment relation being determined according to the similarity between the target image frame and a scene image of the building information model scene in the virtual space.
In some embodiments of the camera external parameter automatic calibration method oriented to building digital twins, determining the target axis transformation adjustment relation according to the similarity between the target image frame and a scene image of the building information model scene in the virtual space includes:
determining a plurality of candidate axis transformation adjustment relations between the vanishing point world coordinate system and the virtual space world coordinate system according to a preset rule, and determining a plurality of candidate rotation matrixes, wherein each candidate rotation matrix corresponds to one scene image captured from the building information model scene in the virtual space at one visual angle;
and respectively calculating the similarity between each scene image and the target image frame, and selecting the candidate rotation matrix corresponding to the scene image with the highest similarity as the target rotation matrix.
In some embodiments of the camera external parameter automatic calibration method oriented to building digital twins, calculating the similarity between each scene image and the target image frame includes:
respectively determining the pixel category of each pixel in the target image frame and the pixel category of each pixel in each scene image according to multiple preset pixel categories to obtain a pixel set of the target image frame and a pixel set of each scene image under each pixel category;
and determining the similarity between each scene image and the target image frame according to the similarity between the pixel set of the target image frame and the pixel set of each scene image under the multiple pixel categories.
In some embodiments of the camera external parameter automatic calibration method oriented to building digital twins, the straight line detection result in the edges of the target image frame is obtained by the following straight line detection method:
extracting edges in the target image frame according to a preset edge detection algorithm;
and carrying out straight line detection on the edges in the target image frame to determine the straight line detection result in the edges.
In some embodiments, before determining three vanishing points corresponding to three coordinate axis directions in the virtual space world coordinate system from the target image frame according to a detection result of a straight line in an edge of the target image frame, the method further includes:
correcting the target image frame according to the camera internal parameters of the camera to obtain a corrected target image frame;
and performing image enhancement processing on the corrected target image frame to enhance the image difference on the two sides of the edges in the image, so as to obtain an enhanced target image frame.
In some embodiments of the camera external parameter automatic calibration method oriented to building digital twins, determining three vanishing points respectively corresponding to the three coordinate axis directions of the virtual space world coordinate system from the target image frame according to the straight line detection result in the edges of the target image frame includes:
performing cluster analysis on the straight line segments in the straight line detection result according to the straight line detection result in the edges of the target image frame, and determining the directions of the three groups of parallel straight line segments containing the largest numbers of parallel straight line segments as the three coordinate axis directions of the virtual space world coordinate system;
and determining three vanishing points respectively corresponding to the three coordinate axis directions in the virtual space world coordinate system according to the parallel straight line segment groups in the three coordinate axis directions.
In some embodiments of the camera external parameter automatic calibration method oriented to building digital twins, performing cluster analysis on the straight line segments in the straight line detection result and determining the directions of the three groups of parallel straight line segments containing the largest numbers of parallel straight line segments as the three coordinate axis directions of the virtual space world coordinate system includes:
performing first iterative clustering on the straight line segments in the straight line detection result, and determining that the direction of a first group of parallel straight line segments with the largest number of parallel straight line segments is the direction of a first coordinate axis under a virtual space world coordinate system;
removing a first group of parallel straight line segments from the straight line detection result, performing second iterative clustering on the remaining straight line segments in the straight line detection result, and determining that the direction of a second group of parallel straight line segments with the largest number of parallel straight line segments is the direction of a second coordinate axis under the virtual space world coordinate system;
and removing the first group of parallel straight line segments and the second group of parallel straight line segments from the straight line detection result, performing third iterative clustering on the remaining straight line segments in the straight line detection result, and determining the direction of the third group of parallel straight line segments with the largest number of parallel straight line segments as the direction of a third coordinate axis in the virtual space world coordinate system.
In some embodiments, an external camera parameter automatic calibration system facing a building digital twin is further provided, where the external camera parameter automatic calibration system includes at least a terminal device and a shooting device, and the terminal device is configured to calibrate a target rotation matrix, which is expressed by a camera in a target image frame, in a virtual space world coordinate system; the target image frame is used for representing a scene image in an entity building scene shot by a camera in a shooting device arranged at a fixed position at a rotated visual angle, and the entity building scene is mapped with a building information model scene in a virtual space; the target rotation matrix is used for representing the relative direction between the coordinate axis of the virtual space world coordinate system and the coordinate axis of the camera coordinate system; the terminal device is configured to:
when the rotation angle of the camera visual angle meets the preset condition, according to a straight line detection result in the edge of a target image frame, three vanishing points respectively corresponding to three coordinate axis directions in the virtual space world coordinate system are determined from the target image frame, and a vanishing point world coordinate system is established according to the three vanishing points and the origin of the camera coordinate system; three coordinate axes in the vanishing point world coordinate system and the virtual space world coordinate system are respectively parallel;
determining a first rotation matrix according to vanishing point coordinates of the three vanishing points in the vanishing point world coordinate system and pixel coordinates of the three vanishing points in the pixel coordinate system determined from the straight line detection result; the first rotation matrix is used for representing the relative direction between the coordinate axis of the vanishing point world coordinate system and the coordinate axis of the camera coordinate system;
and according to a target axis transformation adjustment relation between the vanishing point world coordinate system and the virtual space world coordinate system, carrying out coordinate transformation processing on the first rotation matrix to obtain a target rotation matrix, wherein the target axis transformation adjustment relation is determined according to the similarity between the target image frame and a scene image of a building information model scene in a virtual space.
In some embodiments, there is also provided a computer device comprising a processor, a memory and a bus. The memory stores machine readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate through the bus, and the machine readable instructions, when executed by the processor, perform the steps of the camera external parameter automatic calibration method oriented to building digital twins.
In some embodiments, a computer-readable storage medium is also provided, on which a computer program is stored, which, when being executed by a processor, performs the steps of the building digital twin-oriented camera external parameter automatic calibration method.
The application provides a camera external parameter automatic calibration method oriented to building digital twins. When the camera rotates, a single scene image of the physical building scene shot by the camera at the rotated visual angle is acquired, and the target rotation matrix of the camera is calculated adaptively from the parallel straight line segments in this single scene image. Because the adaptively calculated target rotation matrix timely and accurately reflects the relative direction between the coordinate axes of the camera coordinate system and those of the virtual space world coordinate system after the camera rotates, the coordinate conversion of position coordinates between the camera coordinate system and the virtual space world coordinate system is highly accurate, and so is the real position in the digital twin model of the virtual object to which a physical object is mapped. This improves the fidelity with which the physical building scene is restored in the building information model and, therefore, the efficiency with which users maintain and manage the physical building scene.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present application and should therefore not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 shows a schematic flow chart of the camera external parameter automatic calibration method oriented to building digital twins provided by an embodiment of the present application;
Fig. 2 shows a schematic structural diagram of the camera external parameter automatic calibration system oriented to building digital twins provided by an embodiment of the present application;
Fig. 3 shows a schematic flow chart of the straight line detection method provided by an embodiment of the present application;
Fig. 4 shows the edge extraction result in a target image frame according to an embodiment of the present application;
Fig. 5 shows the straight line detection result in the edges of the target image frame shown in Fig. 4;
Fig. 6 shows the cluster analysis result of the straight line segments in one direction among the straight line detection results shown in Fig. 5;
Fig. 7 shows the cluster analysis result of the straight line segments in a second direction among the straight line detection results shown in Fig. 5;
Fig. 8 shows the cluster analysis result of the straight line segments in a third direction among the straight line detection results shown in Fig. 5;
Fig. 9 shows a schematic diagram of the vanishing point world coordinate system and the virtual space world coordinate system in an embodiment of the present application;
Fig. 10 shows the semantic segmentation result of the target image frame shown in Fig. 5 in an embodiment of the present application;
Fig. 11 shows a schematic diagram of the rendering result of the building information model scene under one axis transformation adjustment relation in an embodiment of the present application;
Fig. 12 shows a schematic diagram of the rendering result of the building information model scene under a second axis transformation adjustment relation in an embodiment of the present application;
Fig. 13 shows a schematic diagram of the rendering result of the building information model scene under a third axis transformation adjustment relation in an embodiment of the present application;
Fig. 14 shows a schematic diagram of the rendering result of the building information model scene under a fourth axis transformation adjustment relation in an embodiment of the present application;
Fig. 15 shows a schematic structural diagram of the computer device in an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and steps without logical context may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
In current image target detection methods, the camera external parameter calibration result is used to convert position coordinates between the camera coordinate system and the virtual space world coordinate system. The camera external parameters comprise a target rotation matrix and a translation vector: the rotation matrix represents the relative direction between the coordinate axes of the virtual space world coordinate system and those of the camera coordinate system, and the translation vector represents the position of the spatial origin of the virtual space world coordinate system in the camera coordinate system.
In the prior art, when the camera position is fixed, a plurality of calibration point pairs, i.e. pairs of pixel coordinates and world coordinates, are selected manually. A calibration point pair, such as "calibration point a1 in the pixel coordinate system of the image and calibration point b1 in the virtual space world coordinate system", represents the coordinates of one and the same position in the pixel coordinate system and in the virtual space world coordinate system; for example, a corner has the coordinates of calibration point a1 in the pixel coordinate system and the coordinates of calibration point b1 in the virtual space world coordinate system. The camera external parameters are then solved from the calibration point pairs by a PnP (Perspective-n-Point, pose solving from n points) algorithm. Usually, specific corner points in the picture (wall corners, table legs, etc.) need to be selected as calibration points and the world coordinates of the corresponding points found in the BIM (Building Information Model, which represents the virtual space in the present application); for example, calibration point a1 is a table leg in the image captured by the camera and calibration point b1 is the same table leg in the virtual space, so that the world coordinate of calibration point b1 and the pixel coordinate of calibration point a1 are related by the coordinate conversion. The camera external parameters are determined from the coordinate conversion relations of the multiple calibration point pairs.
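For illustration, a minimal sketch of this prior-art flow using OpenCV's solvePnP follows; the calibration point pairs and the internal reference matrix are hypothetical placeholder values.

import cv2
import numpy as np

# Manually selected calibration point pairs: world coordinates in the BIM
# (virtual space world coordinate system) and the matching pixel coordinates.
world_points = np.array([
    [0.0, 0.0, 0.0],   # e.g. a wall corner in the virtual space world coordinate system
    [2.5, 0.0, 0.0],
    [2.5, 3.0, 0.0],
    [0.0, 3.0, 0.0],
    [0.0, 0.0, 1.2],
    [2.5, 3.0, 1.2],
], dtype=np.float64)
pixel_points = np.array([
    [412.0, 518.0],    # the same positions located in the camera image
    [688.0, 530.0],
    [702.0, 640.0],
    [145.0, 560.0],
    [405.0, 300.0],
    [695.0, 420.0],
], dtype=np.float64)

K = np.array([[1200.0, 0.0, 960.0],  # camera internal reference matrix (assumed known)
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)            # assume an already undistorted image

# solvePnP yields the rotation (as a Rodrigues vector) and the translation
# that map world coordinates into the camera coordinate system.
ok, rvec, tvec = cv2.solvePnP(world_points, pixel_points, K, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)           # 3x3 rotation matrix of the camera external parameters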
For a dynamic camera the camera position is fixed, but the target rotation matrix changes as the camera rotates, and manually selecting calibration points has clear limitations: the target rotation matrix determined in this way cannot timely and accurately reflect the relative direction between the coordinate axes of the rotated camera coordinate system and those of the virtual space world coordinate system, so the accuracy of the coordinate conversion between the camera coordinate system and the virtual space world coordinate system decreases, the accuracy of the real position of the virtual object mapped from the physical object in the digital twin model is low, and the model information in the constructed/updated digital twin model is distorted. Moreover, for a dynamic camera, a manually selected calibration point may become occluded while the camera rotates, so that the target rotation matrix cannot be calculated accurately. In summary, determining the target rotation matrix from manually selected calibration point pairs has limitations in the field of building digital twins.
Based on this, the application provides a camera external parameter automatic calibration method oriented to building digital twins for the case where the camera position is known. When the camera rotates, a single scene image of the physical building scene shot by the camera at the rotated visual angle is acquired, and the target rotation matrix of the camera is calculated adaptively from the parallel straight line segments in this single scene image. Because the adaptively calculated target rotation matrix timely and accurately reflects the relative direction between the coordinate axes of the rotated camera coordinate system and those of the virtual space world coordinate system, the coordinate conversion of position coordinates between the two coordinate systems is highly accurate, and so is the real position in the digital twin model of the virtual object to which a physical object is mapped. This improves the fidelity with which the physical building scene is restored in the building information model and, therefore, the efficiency with which users maintain and manage the physical building scene.
The method and the system for automatically calibrating camera external parameters oriented to building digital twins provided by the embodiments of the present application are described in detail below.
Referring to fig. 1, fig. 1 is a schematic flow chart of the camera external parameter automatic calibration method oriented to building digital twins provided by an embodiment of the present application. The method includes steps S101-S103; specifically:
s101, when a camera view angle rotation angle meets a preset condition, according to a straight line detection result in an edge of a target image frame, determining three vanishing points respectively corresponding to three coordinate axis directions in a virtual space world coordinate system from the target image frame, and establishing a vanishing point world coordinate system according to the three vanishing points and an origin of a camera coordinate system; three coordinate axes in the vanishing point world coordinate system and the virtual space world coordinate system are respectively parallel;
s102, determining a first rotation matrix according to vanishing point coordinates of the three vanishing points in the vanishing point world coordinate system and pixel coordinates of the three vanishing points in the pixel coordinate system determined from the straight line detection result; the first rotation matrix is used for representing the relative direction between the coordinate axis of the vanishing point world coordinate system and the coordinate axis of the camera coordinate system;
s103, according to a target axis transformation adjustment relation between the vanishing point world coordinate system and the virtual space world coordinate system, carrying out coordinate transformation processing on the first rotation matrix to obtain a target rotation matrix, wherein the target axis transformation adjustment relation is determined according to the similarity between the target image frame and a scene image in a building information model in a virtual space.
In the embodiment of the present application, the camera external parameter automatic calibration method oriented to building digital twins can run on a terminal device or on a server. The terminal device may be a local terminal device; when the method runs on the server, it may be implemented and executed based on a cloud interaction system, which includes at least the server and a client device (that is, the terminal device).
Specifically, when the method is applied to a terminal device, it is used for calibrating the target rotation matrix of the camera in a target image frame under the virtual space world coordinate system. The target image frame represents a scene image of a physical building scene shot, at a rotated visual angle, by the camera of a shooting device arranged at a fixed position, and the physical building scene is mapped to a building information model scene in the virtual space. The target rotation matrix represents the relative direction between the coordinate axes of the virtual space world coordinate system and those of the camera coordinate system.
In existing camera external parameter calibration, the camera external parameters generally comprise a target rotation matrix and a translation vector, the translation vector characterizing the position of the spatial origin in the camera coordinate system. Since the camera position remains unchanged in the present application, the translation vector in the camera external parameters remains unchanged while the camera rotates, and only the target rotation matrix in the camera external parameters needs to be calibrated.
Based on this, in the embodiment of the present application, as an optional embodiment, the terminal device 200 may be located in a camera external parameter automatic calibration system as shown in fig. 2, which includes at least the terminal device 200 and at least one shooting device 201. The shooting devices 201 are dispersed in the physical building, that is, installed in different physical building scenes of the building, and each shooting device 201 shoots scene images of the physical building scene where it is installed; the embodiment of the present application does not limit the number of shooting devices 201.
Specifically, each shooting device 201 and the terminal device 200 may transmit and exchange data over a wired/wireless network according to a preset communication protocol (e.g., the Real Time Streaming Protocol, RTSP). During this interaction the terminal device 200 may control each shooting device 201 to monitor and shoot the physical building scene at its installation position, receive the monitoring video data fed back by the different shooting devices 201 (i.e., a monitoring video stream composed of a plurality of image frames to be processed), and take each frame image from the monitoring data as an image frame to be processed, so that the terminal device 200 can monitor in real time scene changes in the different physical building scenes (such as changes of indoor decoration design, changes of indoor display layout, personnel flow, and the like).
Here, in step S101, the shooting device 201 is used to characterize a camera (such as a video camera or a surveillance camera) installed in the physical building scene. Considering that the relation between the area of the physical building scene and the maximum shooting range of one shooting device 201 is not fixed, the embodiment of the present application does not specifically limit the number of shooting devices 201 installed in a physical building scene.
Based on this, in step S101, the physical building scene may be used to characterize a physical building space in the target physical building; for example, the physical building scene may be a room A in the target physical building, or a partial area of room A that one shooting device 201 can shoot. The embodiment of the present application does not limit the area of the physical building scene either.
The building information model scene to which the physical building scene is mapped in the virtual space is a virtual building space in the building information model (BIM) of the physical building; for example, the building information model scene may be a room B in the building information model, or a partial area of room B corresponding to the shooting range of a shooting device. The embodiment of the present application does not limit the area of the building information model scene either.
In step S101, the rotation angle of the camera visual angle satisfying the preset condition may mean that the rotation angle of the camera visual angle is greater than or equal to a preset rotation angle threshold; for example, when the camera rotation angle is greater than 5 degrees, the target rotation matrix of the camera needs to be recalibrated.
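For illustration, a minimal sketch of this recalibration trigger follows; reading the pan angle of the shooting device and the 5-degree threshold are illustrative assumptions.

ROTATION_THRESHOLD_DEG = 5.0  # assumed preset rotation angle threshold

def needs_recalibration(current_pan_deg, last_calibrated_pan_deg):
    # True when the camera visual angle has rotated past the preset threshold.
    return abs(current_pan_deg - last_calibrated_pan_deg) >= ROTATION_THRESHOLD_DEG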
Parallel straight lines in space intersect, after perspective transformation, at one point, namely a vanishing point; a straight line passing through a vanishing point is a vanishing line. The origin of the vanishing point world coordinate system constructed from the vanishing points coincides with the origin of the camera coordinate system (namely the optical center of the camera). The coordinate axes of the vanishing point world coordinate system are parallel to those of the virtual space world coordinate system, but the positive directions of the axes do not necessarily correspond and the origins differ. Since the translation vector in the external parameters does not need to be determined in the present application and only the target rotation matrix needs to be calibrated, three vanishing points can be determined from the target image frame to establish the vanishing point world coordinate system; with this coordinate system as an intermediary, the relative direction between the coordinate axes of the virtual space world coordinate system and those of the camera coordinate system, namely the target rotation matrix, is determined by using the relation between the pixel coordinate system and the camera coordinate system, the relation between the camera coordinate system and the vanishing point world coordinate system, and the relation between the vanishing point world coordinate system and the virtual space world coordinate system.
The virtual space world coordinate system described herein is the world coordinate system to which the position coordinates within the digital twin model conform.
Specifically, in the embodiment of the present application, the result of detecting the straight line in the edge of the target image frame is obtained by the following straight line detection method as shown in fig. 3:
s301, extracting an edge in the target image frame according to a preset edge detection algorithm;
s302, carrying out straight line detection on the edge in the target image frame, and determining a straight line detection result in the edge.
The edge detection algorithm in step S301 is the Canny edge detection algorithm. Extracting the edges in the target image frame comprises the following four steps:
step 3011, perform noise reduction on the target image frame, and obtain a target image frame after noise reduction; in step 3011, the denoising process is performed by using a gaussian filtering method, and a gaussian matrix is used to multiply each pixel point and its neighborhood to obtain a gray value of each pixel as a weighted average of itself and its neighborhood; by the gaussian filtering process, the target image frame becomes smooth, but it is possible to increase the width of the edge in the target image frame.
Step 3012, calculate the gradient of the denoised target image frame and determine the edges in the target image frame.
The gradient of an image expresses the degree and direction of gray value change; an edge is a set of pixel points whose gray values change strongly.
Step 3013, remove points with small gradient change within the local edge range, thinning edges of multiple pixel widths to single-pixel-wide edges.
In the embodiment of the present application, this step applies non-maximum suppression to filter out points that do not belong to the edge, so that the edge width is as close to one pixel point as possible.
The principle of non-maximum suppression is as follows: if a pixel belongs to an edge, its gradient value along the gradient direction is a local maximum; otherwise the pixel is not an edge.
Step 3014, screen the edges adjusted in step 3013 with double thresholds.
Double-threshold screening sets two thresholds, an upper threshold and a lower threshold: pixel points above the upper threshold are all kept as edges, and those below the lower threshold are all discarded as non-edges. A pixel point between the upper and lower thresholds is kept as an edge if it is adjacent to a pixel point already determined as an edge; otherwise it is not an edge.
In step S302, the edges in the target image frame are subjected to straight line detection based on the Hough line detection algorithm.
The basic principle of line detection based on the Hough line detection algorithm is the duality of points and lines: straight lines in the image space correspond one-to-one to points in the parameter space, and straight lines in the parameter space correspond one-to-one to points in the image space.
Thus: (1) each straight line in the image space corresponds to a single point in the parameter space; (2) any segments of the same straight line in the image space correspond to the same point in the parameter space.
Therefore, the Hough line detection algorithm converts the line detection problem in the image space into a point detection problem in the parameter space, and completes the line detection task by searching for peaks in the parameter space.
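A minimal sketch of this detection flow (steps S301-S302) with OpenCV follows; the input file name and all threshold values are illustrative assumptions rather than values fixed by the embodiment.

import cv2
import numpy as np

frame = cv2.imread("target_image_frame.png")   # hypothetical input path
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)  # step 3011: Gaussian noise reduction

# cv2.Canny performs the gradient computation, non-maximum suppression and
# double-threshold screening of steps 3012-3014; 50/150 are the lower/upper thresholds.
edges = cv2.Canny(blurred, 50, 150)

# Probabilistic Hough transform: each peak found in the parameter space becomes
# one straight line segment, returned by its two end points [x1, y1, x2, y2].
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=80, minLineLength=40, maxLineGap=5)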
For the target image frame according to the embodiment of the present application, the edge extraction result in the target image frame is shown in fig. 4, and the straight line detection result in the edge in the target image frame is shown in fig. 5.
In step S101, determining three vanishing points corresponding to the three coordinate axis directions of the virtual space world coordinate system from the target image frame according to the straight line detection result in the edges of the target image frame includes:
performing cluster analysis on the straight line segments in the straight line detection result according to the straight line detection result in the edges of the target image frame, and determining the directions of the three groups of parallel straight line segments containing the largest numbers of parallel straight line segments as the three coordinate axis directions of the virtual space world coordinate system;
and determining three vanishing points respectively corresponding to the three coordinate axis directions in the virtual space world coordinate system according to the parallel straight line segment groups in the three coordinate axis directions.
Parallel straight lines in space intersect, after perspective transformation, at one point, namely a vanishing point; a straight line passing through a vanishing point is a vanishing line.
The straight line segments in the straight line detection result are clustered by iterative clustering based on a random sample consensus (RANSAC) algorithm; since parallel vanishing lines in the same direction intersect at the same vanishing point, this yields the vanishing line sets and the vanishing point pixel coordinates in the three directions X, Y and Z of the vanishing point world coordinate system.
The basic steps of the random sample consensus algorithm are as follows: 1) select a random subset of the data and estimate the model parameters from it; 2) test the remaining data with this model and take the points that fit as inliers; 3) if there are enough inliers, the model is considered reasonable and is re-estimated from all inliers; 4) iterate to obtain a better model.
Based on the random sample consensus algorithm, the process of iteratively clustering the straight line detection result is as follows:
1) Two straight lines, denoted Line_a and Line_b, are randomly selected from all straight line segments of the straight line detection result of the target image frame, and their intersection point is calculated. In homogeneous coordinates each straight line is represented by the cross product of its two end points:
L1 = [x_a1, y_a1, 1] × [x_a2, y_a2, 1], wherein [x_a1, y_a1] and [x_a2, y_a2] are the coordinates of the two end points of the straight line Line_a in the pixel coordinate system;
L2 = [x_b1, y_b1, 1] × [x_b2, y_b2, 1], wherein [x_b1, y_b1] and [x_b2, y_b2] are the coordinates of the two end points of the straight line Line_b in the pixel coordinate system.
Here, as an alternative embodiment, the intersection point of the two straight lines Line_a and Line_b may be calculated according to the following formula:
Point1 = L1 × L2;
2) Another straight line segment Line_c is selected, wherein [x_c1, y_c1] and [x_c2, y_c2] are the coordinates of its two end points in the pixel coordinate system. An end point of Line_c is connected with the intersection point Point1 to obtain a connecting line, and the included angle θ between the connecting line and Line_c is calculated:
θ = arccos( (V1 · V2) / (|V1| · |V2|) )
wherein V1 is the vector form of the connecting line between the end point of the straight line segment Line_c and the intersection point Point1, and V2 is the direction vector of Line_c.
If θ is greater than a preset threshold, the straight line Line_c is screened out; if θ is not greater than the preset threshold, it indicates that Line_c may be spatially parallel to the initially selected straight lines Line_a and Line_b.
The above steps are repeated to determine a group of parallel straight line segments spatially parallel to the initially selected straight lines Line_a and Line_b, and the number of parallel lines in this group, vote_count, is counted.
3) Repeat step 1) and step 2) for a preset number of iterations, selecting two different initial straight lines in each repetition, so as to determine the parallel straight line segment group of each iteration and the number of parallel lines, vote_count, in that group;
4) select the group of parallel straight line segments with the largest vote_count over all iterations; the direction of this parallel straight line segment group is the direction of one coordinate axis of the virtual space world coordinate system.
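A minimal sketch of this voting procedure follows, assuming each straight line segment is given as an (x1, y1, x2, y2) end-point tuple; the function name, iteration count and angle threshold are illustrative assumptions.

import numpy as np

def to_homogeneous_line(seg):
    # Line through the two end points, as the cross product of their homogeneous coordinates.
    x1, y1, x2, y2 = seg
    return np.cross([x1, y1, 1.0], [x2, y2, 1.0])

def vote_for_direction(segments, iterations=500, angle_thresh_deg=2.0):
    # Returns the largest group of segments whose extensions pass
    # (approximately) through a common vanishing point.
    best_group = []
    for _ in range(iterations):
        # step 1): randomly select Line_a and Line_b and compute Point1 = L1 x L2
        ia, ib = np.random.choice(len(segments), size=2, replace=False)
        point = np.cross(to_homogeneous_line(segments[ia]),
                         to_homogeneous_line(segments[ib]))
        if abs(point[2]) < 1e-9:
            continue  # the two lines are parallel in the image plane
        px, py = point[0] / point[2], point[1] / point[2]

        # step 2): vote for every Line_c whose direction agrees with the
        # connecting line from its end point to the intersection point
        group = []
        for seg in segments:
            x1, y1, x2, y2 = seg
            v1 = np.array([px - x1, py - y1])  # end point -> intersection point
            v2 = np.array([x2 - x1, y2 - y1])  # direction of Line_c
            cos_angle = abs(np.dot(v1, v2)) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
            angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
            if angle <= angle_thresh_deg:
                group.append(seg)

        # steps 3)-4): keep the hypothesis with the largest vote_count
        if len(group) > len(best_group):
            best_group = group
    return best_group

Calling vote_for_direction three times, and removing the winning group from the segment list after each call, yields the three groups of parallel straight line segments described below.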
Therefore, to determine the directions of the three coordinate axes of the virtual space world coordinate system, the above iterative clustering process needs to be performed three times on the straight line segments of the straight line detection result. Based on this, in the embodiment of the present application, the straight line segments parallel to an already determined coordinate axis direction are removed before each subsequent iteration, which reduces the amount of computation in the iteration process.
Specifically, in the embodiment of the present application, performing cluster analysis on the straight line segments in the straight line detection result, and determining that the directions of three groups of parallel straight line segments with the largest number of parallel straight line segments are three coordinate axis directions of a virtual space world coordinate system, includes:
performing first iterative clustering on the straight line segments in the straight line detection result, and determining that the direction of a first group of parallel straight line segments with the largest number of parallel straight line segments is the direction of a first coordinate axis under a virtual space world coordinate system;
removing a first group of parallel straight line segments from the straight line detection result, performing second iterative clustering on the remaining straight line segments in the straight line detection result, and determining that the direction of a second group of parallel straight line segments with the largest number of parallel straight line segments is the direction of a second coordinate axis under the virtual space world coordinate system;
and removing the first group of parallel straight line segments and the second group of parallel straight line segments from the straight line detection result, carrying out third iterative clustering on the remaining straight line segments in the straight line detection result, and determining that the direction of the third group of parallel straight line segments with the largest number of parallel straight line segments is the direction of a third coordinate axis under the virtual space world coordinate system.
Parallel vanishing lines in the same direction intersect at the same vanishing point, and the vanishing point pixel coordinates can be determined according to the three groups of parallel line segments.
The results of clustering the straight line segments in the straight line detection results are shown in fig. 6, 7, and 8. Fig. 6, 7, and 8 show results of straight-line segment cluster analysis in three directions, respectively.
Connecting the vanishing points in the three directions X, Y, Z of the virtual space world coordinate system with the optical center of the camera constructs the vanishing point world coordinate system: the line connecting the left vanishing point and the optical center is the X axis, with world coordinate direction [1, 0, 0]; the line connecting the right vanishing point and the optical center is the Y axis, with world coordinate direction [0, 1, 0]; and the line connecting the vertical vanishing point and the optical center is the Z axis, with world coordinate direction [0, 0, 1].
In step S102, the first rotation matrix is determined according to the vanishing point coordinates of the three vanishing points in the vanishing point world coordinate system and the pixel coordinates of the three vanishing points in the pixel coordinate system determined from the straight line detection result. The specific principle is as follows:
Taking the first position coordinates (u, v) of the left vanishing point in the pixel coordinate system as an example, by using the camera internal reference matrix K of the shooting device and the camera external reference of the shooting device (the rotation matrix R and the translation vector t), the conversion of the homogeneous pixel coordinates (u, v, 1) (i.e. the first position coordinates) between the pixel coordinate system and the vanishing point world coordinate system can be completed with the camera coordinate system as a transfer station of the coordinate conversion (the conversion between the pixel coordinate system and the vanishing point world coordinate system depends on the transfer through the camera coordinate system), according to the following formula, so as to obtain the first spatial position coordinates (X, Y, Z) of this vanishing point in the vanishing point world coordinate system:
s · [u, v, 1]^T = K · (R · [X, Y, Z]^T + t), with K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
wherein:
(cx, cy) is the camera principal point of the shooting device;
fx is the normalized focal length of the shooting device on the abscissa axis of the pixel coordinate system;
fy is the normalized focal length of the shooting device on the ordinate axis of the pixel coordinate system;
(x, y, z) are the position coordinates of the pixel coordinates (u, v, 1) in the camera coordinate system, i.e. (x, y, z)^T = K^(-1) · (u, v, 1)^T up to the scale factor s;
R is the first rotation matrix of the camera and t is the translation vector of the camera; in the embodiment of the present application, t = 0.
Based on this, let the first rotation matrix of the camera be R = [R1, R2, R3], with column vectors R1, R2 and R3. Since the three vanishing points lie in the coordinate axis directions [1, 0, 0], [0, 1, 0] and [0, 0, 1] of the vanishing point world coordinate system, the following equations are combined to solve the first rotation matrix R of the camera in the vanishing point world coordinate system:
R1 = VP_Left / ||VP_Left||, R2 = VP_Right / ||VP_Right||, R3 = VP_Bottom / ||VP_Bottom||
wherein VP_Left is the position coordinate of the left vanishing point in the camera coordinate system, VP_Right is the position coordinate of the right vanishing point in the camera coordinate system, and VP_Bottom is the position coordinate of the vertical vanishing point in the camera coordinate system.
In step S103, the vanishing point world coordinate system and the virtual space world coordinate system are as shown in fig. 9: the left coordinate system is the virtual space world coordinate system 901 and the right coordinate system is the vanishing point world coordinate system 902. Since the coordinate axes of the virtual space world coordinate system 901 and the vanishing point world coordinate system 902 are parallel but the directions of the coordinate axes may differ, an undetermined axis transformation adjustment relation exists between them. Therefore, the target axis transformation adjustment relation needs to be determined according to the similarity between the target image frame and a scene image of the building information model scene in the virtual space, and the first rotation matrix is subjected to coordinate transformation processing according to the target axis transformation adjustment relation to obtain the target rotation matrix.
Specifically, determining the target axis transformation adjustment relationship according to the similarity between the target image frame and a scene image in a building information model in a virtual space includes:
determining a plurality of candidate axis transformation adjustment relationships between the vanishing point world coordinate system and the virtual space world coordinate system according to a preset rule, and determining a plurality of candidate rotation matrices, wherein each candidate rotation matrix corresponds to one scene image captured from the building information model scene in the virtual space at a corresponding viewing angle;
and respectively calculating the similarity of each scene image and the target image frame, and selecting a candidate rotation matrix corresponding to the scene image with the highest similarity as a target rotation matrix.
When the multiple candidate rotation matrices are determined, the Z-axis relationship between the vanishing point world coordinate system and the BIM world coordinate system is fixed first, and four candidate axis transformation adjustment relationships can then be determined according to the right-hand rule of the three-dimensional coordinate system.
The four candidate axis transformation adjustment relationships are represented by the following axis transformation matrices trans1, trans2, trans3 and trans4.
trans1 = [[0, 1, 0],
[1, 0, 0],
[0, 0, -1]]
trans2 = [[-1, 0, 0],
[0, 1, 0],
[0, 0, -1]]
trans3 = [[1, 0, 0],
[0, -1, 0],
[0, 0, -1]]
trans4 = [[0, -1, 0],
[-1, 0, 0],
[0, 0, -1]]
The target rotation matrix of the camera under the virtual space world coordinate system is R = R0 × trans, where trans is the axis transformation matrix corresponding to the target axis transformation adjustment relationship among the candidate relationships, and R0 is the first rotation matrix.
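A minimal sketch of this selection step; render_scene and similarity are placeholder callables standing in for the BIM screenshot (which, per the text, goes through the BOS platform) and the mIoU comparison described below:

```python
import numpy as np

# The four candidate axis transformation matrices given above.
CANDIDATE_TRANS = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, -1]]),    # trans1
    np.array([[-1, 0, 0], [0, 1, 0], [0, 0, -1]]),   # trans2
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, -1]]),   # trans3
    np.array([[0, -1, 0], [-1, 0, 0], [0, 0, -1]]),  # trans4
]

def select_target_rotation(R0, render_scene, target_frame, similarity):
    """Apply each candidate axis transformation to the first rotation
    matrix R0, score the scene image rendered under that rotation against
    the target image frame, and keep the best-scoring candidate."""
    best_R, best_score = None, float("-inf")
    for trans in CANDIDATE_TRANS:
        R = R0 @ trans                                # candidate rotation
        score = similarity(render_scene(R), target_frame)
        if score > best_score:
            best_R, best_score = R, score
    return best_R, best_score
```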
Four display viewing angles of the building information model scene in the virtual space on the terminal device are determined based on the four candidate axis transformation adjustment relationships; scene images of the building information model scene are captured under the four display viewing angles respectively, and the similarity between each scene image and the target image frame is calculated based on the following similarity calculation method, including:
respectively determining the pixel category of each pixel in the target image frame and the pixel category of each pixel in each scene image according to multiple preset pixel categories to obtain a pixel set of the target image frame and a pixel set of each scene image under each pixel category;
and determining the similarity between each scene image and the target image frame according to the similarity between the pixel set of the target image frame and the pixel set of each scene image under the multiple pixel categories.
The preset multiple pixel categories are determined according to the categories of the entities in the entity building scene. For example, if the categories of entities in the entity building scene include wall surfaces, ground surfaces and furniture, the preset multiple pixel categories are likewise the three categories of wall surface, ground surface and furniture.
Specifically, the pixel category of each pixel in the target image frame is determined by the following image entity semantic segmentation method:
the semantic segmentation entities of various categories in the target image frame are segmented by a trained building scene semantic segmentation model, and the semantic segmentation entities of different categories are rendered into different colors at pixel granularity.
The building scene semantic segmentation model can be trained according to different building scenes, such as an indoor scene, an outdoor scene and the like.
The building scene semantic segmentation model is obtained by training a DeepLabv3+ neural network on the SUN RGB-D data set.
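A hedged illustration of obtaining per-pixel categories with an off-the-shelf model; torchvision ships DeepLabv3 (not DeepLabv3+), and its pretrained weights are not trained on SUN RGB-D, so this stands in for the fine-tuned building scene model described above (the image path is a placeholder):

```python
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

# Off-the-shelf stand-in: a faithful reproduction would fine-tune on
# SUN RGB-D with wall / ground / furniture labels as described above.
model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("target_frame.jpg").convert("RGB")   # placeholder path
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"]  # (1, C, H, W) logits
labels = out.argmax(1).squeeze(0)                     # per-pixel class ids
```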
Fig. 10 shows a semantic segmentation result of the target image frame shown in fig. 5 in the embodiment of the present application.
The pixel categories of the scene images are determined by rendering components of different types; among the semantic segmentation entities and the entities in the building information model scene, entities of the same category are rendered in consistent colors, i.e. their pixel categories are consistent.
As shown in fig. 10, for example, the colors rendered in the present application include red 1001, green 1002, and blue 1003; wherein red 1001 represents a wall surface, green 1002 represents a floor surface, and blue 1003 represents furniture.
Specifically, components of different types are rendered through an interface on the BOS platform.
As shown in fig. 11, fig. 12, fig. 13, and fig. 14, rendering results of four scene images of the building information model scene and a similarity between each scene image and a target image frame under four axis transformation adjustment relationships are respectively shown.
In fig. 11, 12, 13 and 14, reference numeral 1001 denotes a wall surface rendered red, reference numeral 1002 denotes a floor surface rendered green, and reference numeral 1003 denotes furniture rendered blue.
The average intersection over union of the pixel sets of each category is calculated with the semantic segmentation evaluation index mIoU as the similarity evaluation criterion. In this embodiment of the present application, the mIoU between the scene image and the target image frame is 0.075 for the scene image shown in fig. 11, 0.121 for fig. 12, 0.131 for fig. 13, and 0.519 for fig. 14.
It can thus be determined that, among figs. 11 to 14, the scene image shown in fig. 14 has the highest similarity with the target image frame; therefore the axis transformation adjustment relationship corresponding to the scene image shown in fig. 14 is determined to be correct, thereby aligning the camera viewing angle with the BIM viewing angle and determining the target rotation matrix.
In the embodiment of the application, the semantic segmentation evaluation index mIoU is calculated as:

$$ \mathrm{mIoU} = \frac{1}{k} \sum_{i=1}^{k} \frac{p_{ii}}{\sum_{j=1}^{k} p_{ij} + \sum_{j=1}^{k} p_{ji} - p_{ii}} $$

wherein i denotes the true value and j the predicted value; $p_{ij}$ is the number of pixels whose true value is i but which are predicted as j; $p_{ii}$ is the number whose true value is i and which are predicted as i; $p_{ji}$ is the number whose true value is j but which are predicted as i; and k is the number of categories.
In the embodiment of the application, calculating the similarity between each scene image and the target image frame thus means calculating the average, over categories such as wall surface, ground and furniture, of the ratio of the intersection to the union of the pixel set of the target image frame and the pixel set of the scene image; here, the pixel set of the target image frame and the pixel set of the scene image serve as the true value and the predicted value respectively, and k is the preset number of pixel categories.
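The mIoU above can be computed from a confusion matrix; a minimal numpy sketch consistent with the formula (class ids 0 to k-1; the function name is illustrative):

```python
import numpy as np

def mean_iou(pred, true, k):
    """mIoU over k pixel categories: for each class i,
    IoU_i = p_ii / (sum_j p_ij + sum_j p_ji - p_ii); mIoU is their mean."""
    pred = np.asarray(pred, dtype=np.int64).ravel()
    true = np.asarray(true, dtype=np.int64).ravel()
    # conf[i, j] = number of pixels with true class i predicted as class j
    conf = np.bincount(true * k + pred, minlength=k * k).reshape(k, k)
    inter = np.diag(conf).astype(float)
    union = conf.sum(axis=1) + conf.sum(axis=0) - inter
    return float(np.mean(inter / np.maximum(union, 1)))  # guard empty classes

# e.g. k = 3 for the wall / ground / furniture categories above
```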
In this embodiment of the application, in order to improve the accuracy of the line detection result, before determining three vanishing points corresponding to three coordinate axis directions in the virtual space world coordinate system from the target image frame according to the line detection result in the edge of the target image frame, the method further includes:
correcting the target image frame according to the camera internal parameters of the camera to obtain a corrected target image frame; and performing image enhancement processing on the corrected target image frame to enhance the image difference at two sides of the inner edge of the image so as to obtain an enhanced target image frame.
And the enhanced target image frame is used for carrying out edge extraction and straight line detection to obtain a detection result.
The camera internal parameters of the camera comprise an internal reference matrix and a distortion vector; the internal reference matrix and the distortion vector are determined by a checkerboard calibration method.
The checkerboard calibration method obtains the camera internal reference matrix and the distortion vector as follows: several groups of checkerboard pictures are shot at different angles, the checkerboard corner points are detected with the Harris operator, and the camera internal reference matrix K and the distortion vector are finally solved through the homography relation between the checkerboard plane and the image plane.
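A sketch of this calibration step using OpenCV's standard pipeline; note that findChessboardCorners with sub-pixel refinement stands in for the Harris-based corner detection mentioned above, and the pattern size, square size and image folder are assumptions:

```python
import glob
import cv2
import numpy as np

# Inner-corner grid of the printed checkerboard (assumed 9x6) and its
# square size in millimetres; adjust to the actual board used.
PATTERN, SQUARE = (9, 6), 25.0
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts = [], []
for path in glob.glob("checkerboards/*.jpg"):      # placeholder folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, PATTERN)
    if ok:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the internal reference matrix, dist the distortion vector.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```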
Image distortion is caused by the optical distortion of the camera lens: the projection of a spatial straight line onto the image is no longer maintained as a straight line, which affects the calibration of the external parameters and the accuracy of subsequent related calculations. Distortion correction is therefore performed on the original image using the camera internal parameters, so that vanishing lines and vanishing points can be extracted more accurately from the target image frame.
The image enhancement processing mainly comprises improving the image contrast and sharpening the image. Image contrast is the difference between the highest and the lowest gray level of the image, reflecting the gradation between bright and dark areas. Image sharpening enhances the edges and gray-level transition portions of the image so as to highlight its edges and contours. Contrast enhancement and sharpening are applied to the corrected target image frame to obtain the enhanced target image frame, so that edge contours can be better extracted.
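A possible realization of the correction and enhancement steps with OpenCV; the specific contrast method (CLAHE on the luminance channel) and the sharpening kernel are choices of this sketch, not fixed by the text:

```python
import cv2
import numpy as np

def correct_and_enhance(frame, K, dist):
    """Undistort with the calibrated intrinsics, then raise contrast and
    sharpen so that edges survive the straight line detection better."""
    undistorted = cv2.undistort(frame, K, dist)
    # Contrast: CLAHE on the L channel of the LAB representation.
    lab = cv2.cvtColor(undistorted, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    enhanced = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    # Sharpen with an unsharp-mask style kernel.
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
    return cv2.filter2D(enhanced, -1, kernel)
```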
The camera external reference automatic calibration can effectively acquire the target rotation matrix of the camera in the world coordinate system of the virtual space, realizes the alignment of the view angles, and is a precondition for the visual positioning of image objects.
Four groups of calibration points are selected from the entity building scene shown in fig. 5; their pixel coordinates and their world coordinates in the BIM are obtained as ground-truth labels, the four groups of calibration points are remapped to the virtual space world coordinate system through the calibrated internal and external camera parameters, and the errors between the predicted and true values are calculated. As shown in Table 1, the average error of manual camera extrinsic calibration is about 22 mm. The precision of automatic camera extrinsic calibration is influenced by the straight line detection and the random sample consensus algorithm, and the method works better when the scene contains more straight line segments and fewer of them are occluded; the best average error among the three groups of automatic calibration experiments is 75.8 mm.
TABLE 1 Camera external reference calibration error analysis
Based on the same inventive concept, the embodiment of the present application further provides a system of an external camera reference automatic calibration method corresponding to the above-mentioned external camera reference automatic calibration method, and as the principle of solving the problem of the external camera reference automatic calibration system in the embodiment of the present application is similar to the external camera reference automatic calibration method in the above-mentioned embodiment of the present application, the implementation of the external camera reference automatic calibration system can refer to the implementation of the above-mentioned external camera reference automatic calibration method, and repeated details are not repeated.
Specifically, fig. 2 shows an automatic camera external parameter calibration system for building digital twins according to an embodiment of the present application. Referring to fig. 2, the camera external parameter automatic calibration system at least comprises a terminal device 200 and a shooting device 201, wherein the terminal device 200 is used for calibrating a target rotation matrix expressed by a camera in a target image frame under a virtual space world coordinate system; the target image frame is used for representing a scene image in an entity building scene shot at a rotated visual angle by a camera in a shooting device arranged at a fixed position, and the entity building scene is mapped with a building information model scene in a virtual space; the target rotation matrix is used for representing the relative direction between the coordinate axis of the virtual space world coordinate system and the coordinate axis of the camera coordinate system; the terminal device 200 is configured to:
when the rotation angle of the camera visual angle meets the preset condition, according to a straight line detection result in the edge of a target image frame, three vanishing points respectively corresponding to three coordinate axis directions in the virtual space world coordinate system are determined from the target image frame, and a vanishing point world coordinate system is established according to the three vanishing points and the origin of the camera coordinate system; three coordinate axes in the vanishing point world coordinate system and the virtual space world coordinate system are respectively parallel;
determining a first rotation matrix according to vanishing point coordinates of the three vanishing points in the vanishing point world coordinate system and pixel coordinates of the three vanishing points in the pixel coordinate system determined from the straight line detection result; the first rotation matrix is used for representing the relative direction between the coordinate axis of the vanishing point world coordinate system and the coordinate axis of the camera coordinate system;
and according to a target axis transformation adjustment relationship between the vanishing point world coordinate system and the virtual space world coordinate system, carrying out coordinate transformation processing on the first rotation matrix to obtain the target rotation matrix, wherein the target axis transformation adjustment relationship is determined according to the similarity between the target image frame and a scene image of a building information model scene in the virtual space.
In an optional embodiment, the terminal device 200 is further configured to: when the target axis transformation adjustment relationship is determined according to the similarity between the target image frame and the scene image in the building information model in the virtual space, determining a plurality of candidate axis transformation adjustment relationships in the vanishing point world coordinate system and the virtual space world coordinate system according to a preset rule, and determining a plurality of candidate rotation matrixes, wherein each candidate rotation matrix corresponds to one scene image intercepted by the building information model scene in the virtual space under one view angle;
and respectively calculating the similarity of each scene image and the target image frame, and selecting a candidate rotation matrix corresponding to the scene image with the highest similarity as a target rotation matrix.
In an optional embodiment, the terminal device 200 is further configured to: when the similarity between each scene image and the target image frame is calculated, respectively determining the pixel type of each pixel in the target image frame and the pixel type of each pixel in each scene image according to multiple preset pixel types to obtain a pixel set of the target image frame and a pixel set of each scene image under each pixel type;
and determining the similarity between each scene image and the target image frame according to the similarity between the pixel set of the target image frame and the pixel set of each scene image under the multiple pixel categories.
In an optional embodiment, the terminal device 200 is specifically configured to: extracting edges in the target image frame according to a preset edge detection algorithm; and carrying out linear detection on the edge in the target image frame to determine a linear detection result in the edge.
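A minimal OpenCV sketch of this step; the text does not fix the edge and line detectors, so Canny and the probabilistic Hough transform here are assumptions, with illustrative thresholds:

```python
import cv2
import numpy as np

def detect_line_segments(frame):
    """Canny edge extraction followed by the probabilistic Hough
    transform, one common realization of the edge detection and
    straight line detection described above."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=80, minLineLength=40, maxLineGap=5)
    if segments is None:
        return edges, np.empty((0, 4))
    return edges, segments.reshape(-1, 4)  # rows of (x1, y1, x2, y2)
```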
In an optional embodiment, the terminal device 200 is further configured to: before three vanishing points respectively corresponding to three coordinate axis directions in a virtual space world coordinate system are determined from the target image frame according to a straight line detection result in the inner edge of the target image frame, correcting the target image frame according to camera internal parameters of the camera to obtain a corrected target image frame;
and performing image enhancement processing on the corrected target image frame to enhance the image difference of two sides of the inner edge of the image and obtain the enhanced target image frame.
In an optional embodiment, according to a detection result of a straight line in an edge of a target image frame, the terminal device 200 determines three vanishing points corresponding to three coordinate axis directions in the virtual space world coordinate system from the target image frame, and is specifically configured to:
according to the straight line detection result in the inner edge of the target image frame, performing cluster analysis on straight line segments in the straight line detection result, and determining that the directions of three groups of parallel straight line segments with the largest number of parallel straight line segments are three coordinate axis directions of a virtual space world coordinate system;
and determining three vanishing points respectively corresponding to the three coordinate axis directions in the virtual space world coordinate system according to the parallel straight line segment groups in the three coordinate axis directions.
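For one group of parallel straight line segments, the vanishing point can be estimated in a least-squares sense; a sketch of one common estimator (the text does not fix the estimator, and in practice this would typically sit inside the random sample consensus loop mentioned earlier):

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares vanishing point of one parallel group: each segment
    (x1, y1, x2, y2) yields a homogeneous image line l = p1 x p2, and the
    vanishing point is the right singular vector minimizing |L v|."""
    lines = []
    for x1, y1, x2, y2 in segments:
        lines.append(np.cross([x1, y1, 1.0], [x2, y2, 1.0]))
    L = np.asarray(lines, dtype=float)
    L /= np.linalg.norm(L, axis=1, keepdims=True)   # balance the rows
    _, _, Vt = np.linalg.svd(L)
    v = Vt[-1]
    return v / v[2] if abs(v[2]) > 1e-12 else v     # homogeneous pixel coords
```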
In an optional embodiment, the terminal device 200 performs cluster analysis on the straight line segments in the straight line detection result, and determines that the directions of three groups of parallel straight line segments with the largest number of parallel straight line segments are three coordinate axis directions of a virtual space world coordinate system, and is specifically configured to:
performing first iterative clustering on the straight line segments in the straight line detection result, and determining that the direction of a first group of parallel straight line segments with the largest number of parallel straight line segments is the direction of a first coordinate axis under a virtual space world coordinate system;
removing a first group of parallel straight line segments from the straight line detection result, performing second iterative clustering on the remaining straight line segments in the straight line detection result, and determining that the direction of a second group of parallel straight line segments with the largest number of parallel straight line segments is the direction of a second coordinate axis under the virtual space world coordinate system;
and removing the first group of parallel straight line segments and the second group of parallel straight line segments from the straight line detection result, carrying out third iterative clustering on the remaining straight line segments in the straight line detection result, and determining that the direction of the third group of parallel straight line segments with the largest number of parallel straight line segments is the direction of a third coordinate axis under the virtual space world coordinate system.
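A simplified sketch of the three rounds of iterative clustering, grouping segments by 2D image direction; the angle tolerance is an assumed parameter, and a full implementation would cluster by consistency with a common vanishing point rather than raw image direction:

```python
import numpy as np

def three_axis_groups(segments, tol_deg=3.0):
    """Three clustering rounds: repeatedly take the largest bundle of
    segments with (near-)equal image direction, remove it, and repeat,
    yielding one parallel group per coordinate axis direction.

    segments: (N, 4) array of (x1, y1, x2, y2)."""
    segments = np.asarray(segments, dtype=float)
    angles = np.degrees(np.arctan2(segments[:, 3] - segments[:, 1],
                                   segments[:, 2] - segments[:, 0])) % 180.0
    remaining = np.arange(len(segments))
    groups = []
    for _ in range(3):
        if remaining.size == 0:
            break
        best = None
        for a in angles[remaining]:
            diff = np.abs(angles[remaining] - a)
            members = remaining[np.minimum(diff, 180.0 - diff) < tol_deg]
            if best is None or members.size > best.size:
                best = members
        groups.append(segments[best])
        remaining = np.setdiff1d(remaining, best)
    return groups  # largest parallel group first
```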
As shown in fig. 15, an embodiment of the present application provides a computer device 1500 for executing the camera external parameter automatic calibration method in the present application, where the device includes a memory 1501, a processor 1502, and a computer program stored in the memory 1501 and executable on the processor 1502, where the memory 1501 and the processor 1502 are communicatively connected through a bus, and the processor 1502 implements the steps of the camera external parameter automatic calibration method when executing the computer program.
Specifically, the memory 1501 and the processor 1502 may be general memories and processors, which are not specifically limited herein, and when the processor 1502 runs a computer program stored in the memory 1501, the above-mentioned camera external reference automatic calibration method can be executed.
Corresponding to the external camera parameter automatic calibration method in the present application, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and the computer program is executed by a processor to perform the steps of the external camera parameter automatic calibration method.
Specifically, the storage medium can be a general-purpose storage medium, such as a removable disk, a hard disk, or the like, and when a computer program on the storage medium is executed, the above-mentioned camera external reference automatic calibration method can be executed.
In the embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. The above-described system embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and there may be other divisions in actual implementation, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of systems or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided in the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate the technical solutions of the present application rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that modifications or changes to the embodiments, or equivalent substitutions of some features, can still be made within the technical scope of the present disclosure; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments and are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (8)
1. The automatic camera external parameter calibration method for the building digital twin is characterized by being used for calibrating a target rotation matrix expressed by a camera in a target image frame under a virtual space world coordinate system; the target image frame is used for representing a scene image in an entity building scene shot by a camera arranged at a fixed position at a rotated visual angle, and the entity building scene is mapped with a building information model scene in a virtual space; the target rotation matrix is used for representing the relative direction between the coordinate axis of the virtual space world coordinate system and the coordinate axis of the camera coordinate system; the method comprises the following steps:
when the rotation angle of the camera visual angle meets the preset condition, according to a straight line detection result in the edge of a target image frame, three vanishing points respectively corresponding to three coordinate axis directions in the virtual space world coordinate system are determined from the target image frame, and a vanishing point world coordinate system is established according to the three vanishing points and the origin of the camera coordinate system; three coordinate axes in the vanishing point world coordinate system and the virtual space world coordinate system are respectively parallel;
determining a first rotation matrix according to vanishing point coordinates of the three vanishing points in the vanishing point world coordinate system and pixel coordinates of the three vanishing points in the pixel coordinate system determined from the straight line detection result; the first rotation matrix is used for representing the relative direction between the coordinate axis of the vanishing point world coordinate system and the coordinate axis of the camera coordinate system;
according to a target axis transformation adjustment relation between the vanishing point world coordinate system and the virtual space world coordinate system, carrying out coordinate transformation processing on the first rotation matrix to obtain a target rotation matrix, wherein the target axis transformation adjustment relation is determined according to the similarity of the target image frame and a scene image of a building information model scene in a virtual space;
determining the target axis transformation adjustment relationship according to the similarity between the target image frame and a scene image in a building information model in a virtual space, comprising:
determining a plurality of candidate axis transformation adjustment relations in the vanishing point world coordinate system and the virtual space world coordinate system according to a preset rule, and determining a plurality of candidate rotation matrixes, wherein each candidate rotation matrix corresponds to a scene image of a building information model scene in the virtual space, and the scene image is captured at a view angle;
respectively calculating the similarity of each scene image and the target image frame, and selecting a candidate rotation matrix corresponding to the scene image with the highest similarity as a target rotation matrix;
the calculating the similarity between each scene image and the target image frame comprises the following steps:
respectively determining the pixel category of each pixel in the target image frame and the pixel category of each pixel in each scene image according to multiple preset pixel categories to obtain a pixel set of the target image frame and a pixel set of each scene image under each pixel category;
and determining the similarity between each scene image and the target image frame according to the similarity between the pixel set of the target image frame and the pixel set of each scene image under the multiple pixel categories.
2. The automatic calibration method for the external camera parameters of the building digital twin as claimed in claim 1, wherein the straight line detection result in the edge of the target image frame is obtained by the following straight line detection method:
extracting edges in the target image frame according to a preset edge detection algorithm;
and carrying out linear detection on the edge in the target image frame to determine a linear detection result in the edge.
3. The automatic calibration method for the external camera parameters of the building digital twin as claimed in claim 1, wherein before determining three vanishing points corresponding to three coordinate axis directions in the virtual space world coordinate system from the target image frame according to the detection result of the straight line in the inner edge of the target image frame, the method further comprises:
correcting the target image frame according to the camera internal parameters of the camera to obtain a corrected target image frame;
and performing image enhancement processing on the corrected target image frame to enhance the image difference at two sides of the inner edge of the image so as to obtain an enhanced target image frame.
4. The automatic calibration method for the external camera parameters of the building digital twin as claimed in claim 1, wherein
according to a straight line detection result in the inner edge of a target image frame, three vanishing points respectively corresponding to three coordinate axis directions in the virtual space world coordinate system are determined from the target image frame, and the method comprises the following steps:
according to the straight line detection result in the inner edge of the target image frame, performing cluster analysis on straight line segments in the straight line detection result, and determining that the directions of three groups of parallel straight line segments with the largest number of parallel straight line segments are three coordinate axis directions of a virtual space world coordinate system;
and determining three vanishing points respectively corresponding to the three coordinate axis directions in the virtual space world coordinate system according to the parallel straight line segment groups in the three coordinate axis directions.
5. The automatic calibration method for the external camera parameters of the building digital twin as claimed in claim 1, wherein the cluster analysis is performed on the straight line segments in the straight line detection result, and the directions of three groups of parallel straight line segments with the largest number of parallel straight line segments are determined to be three coordinate axis directions of a virtual space world coordinate system, and the method comprises the following steps:
performing first iterative clustering on the straight line segments in the straight line detection result, and determining that the direction of a first group of parallel straight line segments with the largest number of parallel straight line segments is the direction of a first coordinate axis under a virtual space world coordinate system;
removing a first group of parallel straight line segments from the straight line detection result, performing second iterative clustering on the remaining straight line segments in the straight line detection result, and determining that the direction of a second group of parallel straight line segments with the largest number of parallel straight line segments is the direction of a second coordinate axis under the virtual space world coordinate system;
and removing the first group of parallel straight line segments and the second group of parallel straight line segments from the straight line detection result, carrying out third iterative clustering on the remaining straight line segments in the straight line detection result, and determining that the direction of the third group of parallel straight line segments with the largest number of parallel straight line segments is the direction of a third coordinate axis under the virtual space world coordinate system.
6. The automatic camera external parameter calibration system for the digital twin of a building is characterized by at least comprising terminal equipment and a shooting device, wherein the terminal equipment is used for calibrating a target rotation matrix expressed by a camera in a target image frame under a virtual space world coordinate system; the target image frame is used for representing a scene image in an entity building scene shot by a camera in a shooting device arranged at a fixed position at a rotated visual angle, and the entity building scene is mapped with a building information model scene in a virtual space; the target rotation matrix is used for representing the relative direction between the coordinate axis of the virtual space world coordinate system and the coordinate axis of the camera coordinate system; the terminal device is configured to:
when the rotation angle of the camera visual angle meets the preset condition, according to a straight line detection result in the edge of a target image frame, three vanishing points respectively corresponding to three coordinate axis directions in the virtual space world coordinate system are determined from the target image frame, and a vanishing point world coordinate system is established according to the three vanishing points and the origin of the camera coordinate system; three coordinate axes in the vanishing point world coordinate system and the virtual space world coordinate system are respectively parallel;
determining a first rotation matrix according to vanishing point coordinates of the three vanishing points in the vanishing point world coordinate system and pixel coordinates of the three vanishing points in the pixel coordinate system determined from the straight line detection result; the first rotation matrix is used for representing the relative direction between the coordinate axis of the vanishing point world coordinate system and the coordinate axis of the camera coordinate system;
according to a target axis transformation adjustment relation between the vanishing point world coordinate system and the virtual space world coordinate system, carrying out coordinate transformation processing on the first rotation matrix to obtain a target rotation matrix, wherein the target axis transformation adjustment relation is determined according to the similarity of the target image frame and a scene image of a building information model scene in a virtual space;
the terminal device is further configured to: when the target axis transformation adjustment relationship is determined according to the similarity between the target image frame and a scene image in a building information model in a virtual space, determining multiple candidate axis transformation adjustment relationships in the vanishing point world coordinate system and the virtual space world coordinate system according to a preset rule, and determining multiple candidate rotation matrixes, wherein each candidate rotation matrix corresponds to a scene image intercepted by a building information model scene in the virtual space under a view angle;
respectively calculating the similarity of each scene image and the target image frame, and selecting a candidate rotation matrix corresponding to the scene image with the highest similarity as a target rotation matrix;
the terminal device is further configured to: when the similarity between each scene image and the target image frame is calculated, respectively determining the pixel type of each pixel in the target image frame and the pixel type of each pixel in each scene image according to multiple preset pixel types to obtain a pixel set of the target image frame and a pixel set of each scene image under each pixel type;
and determining the similarity between each scene image and the target image frame according to the similarity between the pixel set of the target image frame and the pixel set of each scene image under the multiple pixel categories.
7. A computer device, comprising: a processor, a memory and a bus, the memory storing machine readable instructions executable by the processor, the processor and the memory communicating via the bus when a computer device is running, the machine readable instructions when executed by the processor performing the steps of the building digital twin oriented camera extrinsic automatic calibration method according to any one of claims 1 to 5.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, performs the steps of the building digital twin-oriented camera external parameter automatic calibration method according to any one of claims 1 to 5.