CN111199584A - Target object positioning virtual-real fusion method and device - Google Patents


Info

Publication number
CN111199584A
Authority
CN
China
Prior art keywords
camera
coordinate system
target object
world
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911404384.2A
Other languages
Chinese (zh)
Other versions
CN111199584B (en)
Inventor
何勇
谢瑜
钟义明
姚继锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingyun Construction Technology Hangzhou Co ltd
Qingdao Industrial Software Research Institute Qingdao Branch Of Software Research Institute Cas
Wuhan Urban Construction Engineering Co ltd
Original Assignee
Jingyun Construction Technology Hangzhou Co ltd
Qingdao Industrial Software Research Institute Qingdao Branch Of Software Research Institute Cas
Wuhan Urban Construction Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingyun Construction Technology Hangzhou Co ltd, Qingdao Industrial Software Research Institute Qingdao Branch Of Software Research Institute Cas, Wuhan Urban Construction Engineering Co ltd filed Critical Jingyun Construction Technology Hangzhou Co ltd
Priority to CN201911404384.2A priority Critical patent/CN111199584B/en
Publication of CN111199584A publication Critical patent/CN111199584A/en
Application granted granted Critical
Publication of CN111199584B publication Critical patent/CN111199584B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/296 Synchronisation thereof; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Library & Information Science (AREA)
  • Computer Hardware Design (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a target object positioning virtual-real fusion method and device. The method acquires the coordinates of the target object in a world coordinate system and the position and view-angle direction of each camera in that coordinate system; by introducing each camera's world matrix and its inverse, the coordinates of the target object in each camera's coordinate system are obtained through coordinate conversion; it is then judged whether those coordinates lie within the camera's three-dimensional field of view in the BIM three-dimensional model, and the surveillance videos of the cameras whose fields of view contain the target object are called up to obtain its displayed monitoring state. Bidirectional mapping is thereby realized between the physical entity corresponding to the surveillance video (namely the target object) and the digital space corresponding to the BIM three-dimensional model, so that surveillance video containing the target object can be retrieved and played and the actual working state of the target object obtained.

Description

Target object positioning virtual-real fusion method and device
Technical Field
The invention relates to the technical field of digital management of construction engineering, and in particular to a target object positioning virtual-real fusion method and device.
Background
Building Information Modeling (BIM) is a new tool for the architecture, engineering and construction industries. BIM-based three-dimensional visualization of a target object is widely adopted in current construction-site management: coordinate information of the target object in the BIM model is obtained through positioning technologies such as GIS, BeiDou GNSS, Bluetooth or WiFi ranging, or through intelligent video analysis, and is displayed in a three-dimensional visual manner.
The main disadvantage of the prior art is that it realizes only a one-way mapping from the real physical space to the three-dimensional digital virtual space represented by BIM: the position of the current target object can be seen in the BIM three-dimensional model, but the actual state of the target object cannot be effectively reflected.
Disclosure of Invention
Aiming at the technical problem that in the prior art the position of the current target object can be seen in the BIM three-dimensional model but its actual state cannot be effectively reflected, the invention provides a target object positioning virtual-real fusion method. The invention also provides a target object positioning virtual-real fusion device.
The technical scheme of the invention is as follows:
a target positioning virtual-real fusion method is used for acquiring the actual state of a target on a construction site, and is characterized by comprising the following steps:
an information acquisition step: deploying a plurality of surveillance cameras on a construction site, or using the site's existing surveillance cameras, and carrying out BIM modeling of the site to obtain a BIM three-dimensional model of the construction site; acquiring the position and view-angle direction, in a world coordinate system, of each camera in the BIM three-dimensional model; acquiring the coordinates of the target object in the world coordinate system; calculating the world matrix of each camera in the world coordinate system from each camera's position and view-angle direction; and acquiring the three-dimensional field of view of the BIM three-dimensional model corresponding to each camera;
a conversion step of converting the coordinates of the target object in the world coordinate system into coordinates in the camera coordinate system of each camera according to the world matrix of each camera in the world coordinate system;
a judging step: judging whether the coordinates of the target object in the camera coordinate system of each camera are within the three-dimensional field of view of the BIM three-dimensional model corresponding to that camera, so as to obtain all fields of view containing the coordinates of the target object;
and a calling step: calling and displaying the surveillance videos of the cameras corresponding to all the fields of view that contain the coordinates of the target object, so as to obtain the displayed monitoring state of the target object.
Further, in the conversion step, converting the coordinates of the target object in the world coordinate system into coordinates in the camera coordinate system of each camera according to the world matrix of each camera in the world coordinate system comprises:
and calculating an inverse matrix corresponding to the world matrix of each camera in the world coordinate system, and calculating the product of the inverse matrix corresponding to the world matrix of each camera in the world coordinate system and the coordinates of the target object in the world coordinate system to obtain the coordinates of the target object in the camera coordinate system of the corresponding camera within the view cone range of the corresponding camera.
Further, the world matrix is a 4 x 4 matrix; each world matrix includes the position and view-angle direction of its camera, the first three columns of the world matrix respectively represent the model's own right, up and forward vectors, and the fourth column of the world matrix represents the position of the camera.
Further, the target includes one or more constructors.
An object positioning virtual-real fusion device for acquiring an actual state of an object on a construction site, comprising:
an information acquisition module, used for acquiring the position and view-angle direction, in a world coordinate system, of each of a plurality of surveillance cameras deployed on a construction site, within the site BIM three-dimensional model obtained by BIM modeling of the site; acquiring the coordinates of the target object in the world coordinate system; calculating the world matrix of each camera in the world coordinate system from each camera's position and view-angle direction; and acquiring the three-dimensional field of view of the BIM three-dimensional model corresponding to each camera;
the conversion module is used for converting the coordinates of the target object in the world coordinate system into the coordinates in the camera coordinate system of each camera according to the world matrix of each camera in the world coordinate system;
the judging module is used for judging whether the coordinates of the target object in the camera coordinate system of each camera are in the three-dimensional view range of the BIM three-dimensional model corresponding to the corresponding camera, and obtaining all views containing the coordinates of the target object in the camera coordinate system;
and the calling module is used for calling and displaying the monitoring video of the cameras in all the view fields containing the coordinates of the target object in the camera coordinate system so as to obtain the display monitoring state of the target object.
Further, the conversion module is configured to calculate an inverse matrix corresponding to the world matrix of each camera in the world coordinate system, and calculate a product of the inverse matrix corresponding to the world matrix of each camera in the world coordinate system and the coordinate of the target object in the world coordinate system, so as to obtain the coordinate of the target object in the camera coordinate system of the corresponding camera within the view cone range of the corresponding camera.
Further, the world matrix is a 4 x 4 matrix; each world matrix includes the position and view-angle direction of its camera, the first three columns of the world matrix respectively represent the model's own right, up and forward vectors, and the fourth column of the world matrix represents the position of the camera.
Further, the target includes one or more constructors.
The invention has the following technical effects:
The invention provides a target object positioning virtual-real fusion method that performs constructor-positioning virtual-real fusion based on BIM and intelligent video analysis. The method represents all real cameras as completely as possible, in the computer's three-dimensional world coordinate system, inside the BIM three-dimensional model of the construction site, each camera having its own camera coordinate system and view cone range (i.e. three-dimensional field of view). The coordinates of the target object stored in the world coordinate system are converted into the camera coordinate system of each camera; it is judged whether those coordinates lie within each camera's view cone; the cameras whose view cones contain the target object are determined; and the real-world surveillance videos of those cameras are called up to display the monitoring state of the target object. The actual state of the target object can thus be effectively reflected, and bidirectional mapping is realized between the physical entity corresponding to the surveillance video (namely the target object) and the digital space corresponding to the BIM three-dimensional model; based on the surveillance-camera information and the positioned target-object information in the BIM three-dimensional model, the surveillance video containing the target object can be retrieved and played, thereby obtaining the actual working state of the target object.
The invention also provides a target object positioning virtual-real fusion device, which corresponds to the target object positioning virtual-real fusion method and can also be understood as a device for implementing that method. The device performs constructor-positioning virtual-real fusion based on BIM and intelligent video analysis and, in essence, uses the method to display the monitoring state of the target object. Its information acquisition module, conversion module, judgment module and calling module are connected in sequence and work cooperatively: the information acquisition module acquires the position and view-angle direction of each camera in the world coordinate system, the coordinates of the target object in the world coordinate system and the world matrix of each camera; the conversion module converts the coordinates of the target object in the world coordinate system into coordinates in the camera coordinate system of each camera; the judgment module judges whether the coordinates of the target object in each camera coordinate system lie within that camera's view cone; and the calling module calls the surveillance videos of the cameras whose view cones contain the target object, so as to display its monitoring state. The actual state of the target object can thus be effectively reflected, bidirectional mapping between the physical entity corresponding to the surveillance video and the digital space corresponding to the BIM three-dimensional model is realized, and, based on the surveillance-camera information and the positioned target-object information in the BIM three-dimensional model, the surveillance video containing the target object can be retrieved and played to obtain its actual working state.
Drawings
FIG. 1 is a flow chart of the target object positioning virtual-real fusion method according to the present invention.
FIG. 2 is a flow chart of a preferred embodiment of the target object positioning virtual-real fusion method according to the present invention.
FIG. 3 is a schematic diagram of the position of the camera and its three-dimensional field of view according to the present invention.
Fig. 4 is a block diagram of the target object positioning virtual-real fusion device according to the present invention.
Detailed Description
The technical scheme of the invention is explained in detail below with reference to the accompanying drawings.
The invention provides a target object positioning virtual-real fusion method for acquiring the actual state of a target object on a construction site. As shown in fig. 1, the method comprises: an information acquisition step, in which a plurality of surveillance cameras are deployed on the construction site (or the site's existing surveillance cameras are used) and BIM modeling of the site is carried out to obtain a BIM three-dimensional model of the construction site, the position and view-angle direction in a world coordinate system of each camera in the BIM three-dimensional model are acquired, the camera coordinate system of each camera and the coordinates of the target object in the world coordinate system are acquired, the world matrix of each camera in the world coordinate system is calculated from each camera's position and view-angle direction, and the three-dimensional field of view of the BIM three-dimensional model corresponding to each camera is acquired; a conversion step, in which the coordinates of the target object in the world coordinate system are converted into coordinates in the camera coordinate system of each camera according to the world matrix of each camera; a judging step, in which it is judged whether the coordinates of the target object in the camera coordinate system of each camera lie within the three-dimensional field of view of the BIM three-dimensional model corresponding to that camera, obtaining all fields of view containing the coordinates of the target object; and a calling step, in which the surveillance videos of the cameras corresponding to those fields of view are called up and displayed, so as to obtain the displayed monitoring state of the target object.
Specifically, "virtual" in the target positioning virtual-real fusion method refers to positioning of a target object in a BIM three-dimensional model, and "real" refers to positioning of the target object in a real world coordinate system, and an actual state of the target object refers to comprehensively and largely knowing an omnidirectional action state, a mental state, an alternating state or a wearing state of the target object, that is, the actual state of the target object is known through videos including various directions of the target object, for example, the actual state may be a position of the target object, a real-time body language of the target object, a real-time wearing state of the target object, and the like, for example, when the target object is a constructor, the actual state may be whether a constructor wears a safety helmet, whether the target object is in a safe state, whether the target object talks with a person, and the like, and specific expressions of the actual state are not specifically limited by the present invention. In addition, the installation place of the plurality of monitoring cameras deployed on the construction site or the existing plurality of monitoring cameras may be a high point on the construction site, such as on a tower crane device, a telegraph pole or a roof of a building, so as to obtain more monitoring ranges, which is not specifically limited in the present invention.
Based on the above embodiment, the target object positioning virtual-real fusion method provided by the invention performs constructor-positioning virtual-real fusion based on BIM and intelligent video analysis. All real cameras are represented as completely as possible, in the computer's three-dimensional world coordinate system, inside the BIM three-dimensional model of the construction site, and each camera has its own camera coordinate system and view cone range (i.e. three-dimensional field of view). The coordinates of the target object stored in the world coordinate system are converted into each camera's coordinate system, it is judged whether those coordinates lie within each camera's view cone, the cameras whose view cones contain the target object are determined, and the real-world surveillance videos of those cameras are called up to display the monitoring state of the target object. The actual state of the target object can thus be effectively reflected, bidirectional mapping is realized between the physical entity corresponding to the surveillance video (namely the target object) and the digital space corresponding to the BIM three-dimensional model, and, based on the surveillance-camera information and the positioned target-object information in the BIM three-dimensional model, the surveillance video containing the target object can be retrieved and played, thereby obtaining its actual working state.
As a preferred embodiment of the target object positioning virtual-real fusion method of the present invention, as shown in fig. 2, the method specifically includes the following steps:
the first step, the information acquisition step: deploying a plurality of surveillance cameras 1,2, 3,. once, n at a construction site; carrying out BIM three-dimensional modeling on a construction site to obtain detailed construction site three-dimensional model information; the positions and visual angle directions of all monitoring cameras are identified in a building site BIM three-dimensional model, and a world matrix M of each camera in a world coordinate system is obtained according to the positions and the visual angle directions1,M2,M3,...,MnAnd simultaneously obtaining the three-dimensional vision V of the BIM three-dimensional model corresponding to each camera1,V2,V3,...,VnThe visual field is a three-dimensional conical space region, as shown in fig. 3; and obtains the coordinates P of the target object (i.e. the constructor mentioned below) in the world coordinate system0(x0,y0,z0)。
In reality each camera has its own actual position, orientation, view angle, aspect ratio and other information, so the world coordinate matrix of each camera is different. The world matrix is a 4 x 4 matrix; each world matrix contains the position and view-angle direction of its camera, the first three columns represent the model's own right, up and forward vectors, and the fourth column represents the position of the camera. The target object includes one or more constructors; below, the target object is exemplified as a single constructor.
Specifically, the world coordinate matrix of each camera consists of four columns: the first three columns represent three directions, and the fourth column represents the position information of the camera in the world coordinate system. The last row of the world coordinate matrix is [ 0 0 0 1 ], where each 0 denotes a direction and the 1 denotes a position point, namely the position of the camera in the world coordinate system. The world coordinate matrix of each camera is obtained as follows: according to the view-angle direction of the camera in the three-dimensional world coordinate system, determine the Euler angles (α, β, γ) through which the camera must be rotated about the three coordinate axes; according to these Euler angles, calculate the first three columns of the world coordinate matrix, excluding the last row, by the preset formula (1), which composes the elementary rotation matrices about the x, y and z axes; according to the position information of the camera in the three-dimensional world coordinate system, obtain the fourth column of the world coordinate matrix, excluding the last row, namely the camera position (t_x, t_y, t_z)^T. The world coordinate matrix therefore has the structure

M = | R(α, β, γ)  t |
    | 0  0  0     1 |

where R(α, β, γ) is the 3 x 3 rotation block given by formula (1) and t = (t_x, t_y, t_z)^T. Through the above analysis, the fourth-order world coordinate matrices M_1, M_2, M_3, ..., M_n of the cameras are obtained.
According to this embodiment, the target object positioning virtual-real fusion method provided by the invention makes it possible to check the displayed monitoring state of the target object from the known positioning of the target object (i.e. a constructor) in the world coordinate system and the positions and view-angle directions of all cameras in that system, realizing bidirectional mapping between the real physical space corresponding to the surveillance video and the three-dimensional digital virtual space represented by BIM, so that the actual state of the constructor can be effectively reflected.
The second step, the conversion step: using the inverse matrices M_1^{-1}, M_2^{-1}, M_3^{-1}, ..., M_n^{-1} of the cameras' world matrices, convert the coordinates P_0(x_0, y_0, z_0) of the constructor in the world coordinate system into the coordinates of the constructor in the camera coordinate system of each camera, P_1(x_1, y_1, z_1), P_2(x_2, y_2, z_2), P_3(x_3, y_3, z_3), ..., P_n(x_n, y_n, z_n). The specific conversion is:

P_i = M_i^{-1} * P_0,  i = 1, 2, ..., n.
According to the invention, by introducing the inverse of the world matrix, the coordinates of the constructor in the world coordinate system are converted into coordinates in the camera coordinate system of each camera in a simple way; no complex conversion steps are required, the operation is simple and convenient, and the efficiency is high.
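As a sketch of this conversion (reusing `world_matrix` from the previous sketch; the numeric values are made up for illustration), the world coordinates are written homogeneously and multiplied by the inverse world matrix:

```python
def to_camera_coords(M, p_world):
    # P_i = M_i^{-1} * P_0 in homogeneous coordinates: append 1 to the
    # world-space point, multiply by the inverse world matrix, drop the 1.
    p_h = np.append(np.asarray(p_world, dtype=float), 1.0)
    return (np.linalg.inv(M) @ p_h)[:3]

# Illustrative values, not from the patent:
M1 = world_matrix((0.0, np.pi / 6, 0.0), (10.0, 2.0, 5.0))
p_cam = to_camera_coords(M1, (12.0, 1.5, 8.0))  # constructor as seen by camera 1
```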
The third step, the judging step: judge whether the coordinates P_i = M_i^{-1} * P_0, i = 1, 2, ..., n, of the constructor in the coordinate system having each camera as origin lie within the three-dimensional field of view V_i, i = 1, 2, ..., n, of the BIM three-dimensional model of the corresponding camera, and collect all fields of view V_i that contain these coordinates, thereby obtaining the one or more views K_1, K_2, K_3, ..., K_m containing the coordinates, where 1 ≤ m ≤ n.
Specifically, the views K_1, K_2, K_3, ..., K_m together show the actual working state of one constructor from multiple angles. When several constructors are involved, the actual state information of each of them is collected in the same way, which is not repeated here.
According to the invention, the views in which constructors appear within the three-dimensional field of view of each camera in the BIM three-dimensional model are found automatically, which saves the time of checking a plurality of cameras one by one in actual operation and improves working efficiency.
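The patent does not spell out the containment test for the conical field of view V_i. A common frustum check, sketched here under the assumption that each camera looks along its +z axis and is described by a vertical field-of-view angle, aspect ratio and near/far distances, is:

```python
def in_view_frustum(p_cam, fov_y, aspect, near, far):
    # True if a point in camera coordinates lies inside the view cone.
    # Assumes the camera looks along +z; fov_y is in radians.
    x, y, z = p_cam
    if not (near <= z <= far):
        return False
    half_h = z * np.tan(fov_y / 2.0)  # frustum half-height at depth z
    half_w = half_h * aspect          # frustum half-width at depth z
    return abs(x) <= half_w and abs(y) <= half_h
```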
The fourth step, the calling step: call up and display the m surveillance videos corresponding to the views K_1, K_2, K_3, ..., K_m.
Specifically: for the coordinates P_i = M_i^{-1} * P_0, i = 1, 2, ..., n, of the constructor that lie within the three-dimensional field of view V_i of the corresponding camera's BIM three-dimensional model, the views K_1, K_2, K_3, ..., K_m (1 ≤ m ≤ n) captured by those cameras are determined, and the corresponding m surveillance videos are called to obtain the displayed monitoring state of the constructor.
Based on the surveillance-camera information and the positioned constructor information in the BIM three-dimensional model, the method realizes retrieval and playing of the surveillance videos containing the constructor, so that the actual working state of the constructor is obtained conveniently, rapidly and efficiently. In other words, the method works backwards from the BIM three-dimensional model to the surveillance video of the constructor: the surveillance videos containing the constructor are obtained by analyzing the three-dimensional fields of view corresponding to the cameras in the BIM three-dimensional model together with the position of the constructor in that model.
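Putting the four steps together, a hedged end-to-end sketch follows; the layout of the `cameras` dictionaries and the `stream_url` field are illustrative, not prescribed by the patent:

```python
def cameras_seeing(p_world, cameras):
    # Collect every camera whose three-dimensional field of view contains
    # the target object's world coordinates.
    hits = []
    for cam in cameras:
        p_cam = to_camera_coords(cam["M"], p_world)
        if in_view_frustum(p_cam, cam["fov_y"], cam["aspect"],
                           cam["near"], cam["far"]):
            hits.append(cam)
    return hits

# The m matching surveillance videos would then be called up for display,
# e.g. by handing each cam["stream_url"] to a video player.
```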
The invention also provides a target object positioning virtual-real fusion device, which corresponds to the target object positioning virtual-real fusion method and can also be understood as a device for implementing that method. The device performs constructor-positioning virtual-real fusion based on BIM and intelligent video analysis; in essence it uses the method to display the monitoring state of the target object. The embodiments of the device, the effects achieved and the technical problems solved are the same as those of the method, so only a brief description is given here and repeated parts are not described again. As shown in fig. 4, the device is used for acquiring the actual state of a target object (i.e. a constructor) on a construction site and comprises an information acquisition module, a conversion module, a judgment module and a calling module connected in sequence; see also fig. 2, which may equally be understood as an operational schematic of the device.
The information acquisition module is used for acquiring the position and view-angle direction, in the world coordinate system, of each camera (1, 2, 3, ..., n) deployed on the construction site, within the site BIM three-dimensional model obtained by BIM modeling of the site; acquiring the coordinates of the target object in the world coordinate system; calculating the world matrix of each camera in the world coordinate system from each camera's position and view-angle direction; and acquiring the three-dimensional field of view (V_1, V_2, V_3, ..., V_n) of the BIM three-dimensional model corresponding to each camera.
And the conversion module is used for converting the coordinates of the target object in the world coordinate system into the coordinates in the camera coordinate system of each camera according to the world matrix of each camera in the world coordinate system. Preferably, in this embodiment, the conversion module is configured to calculate an inverse matrix corresponding to the world matrix of each camera in the world coordinate system, and calculate a product of the inverse matrix corresponding to the world matrix of each camera in the world coordinate system and the coordinates of the target object in the world coordinate system, so as to obtain the coordinates of the target object in the camera coordinate system of the corresponding camera within the view cone range of the corresponding camera.
Preferably, in this embodiment, the world matrix is a 4 × 4 matrix, each world matrix includes the position and the viewing direction of each camera, the first three columns of the world matrix respectively represent vectors of the model itself to the right, upward and forward, the fourth column of the world matrix represents the position of the camera, and the target object includes one or more constructors.
The judging module is used for judging whether the coordinates of the target object in the camera coordinate system of each camera, P_i = M_i^{-1} * P_0, i = 1, 2, ..., n, lie within the three-dimensional field of view of the BIM three-dimensional model corresponding to that camera, obtaining all fields of view V_i, i = 1, 2, ..., m (1 ≤ m ≤ n), that contain the coordinates of the target object.
The calling module is used for calling the surveillance videos of the cameras whose fields of view contain the coordinates of the target object, i.e. calling and displaying the m surveillance videos corresponding to the views K_1, K_2, K_3, ..., K_m, so as to obtain the displayed monitoring state of the target object.
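One way the four cooperating modules could be wired together, as a sketch only (the class and field names are illustrative and reuse the helpers sketched above):

```python
class FusionDevice:
    # Information acquisition module: the camera list carries each
    # camera's world matrix, frustum parameters and video stream handle.
    def __init__(self, cameras):
        self.cameras = cameras

    # Conversion + judgment modules: find the cameras that see the target.
    def locate(self, p_world):
        return cameras_seeing(p_world, self.cameras)

    # Calling module: return the surveillance streams to display.
    def display(self, p_world):
        return [cam["stream_url"] for cam in self.locate(p_world)]
```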
In the target object positioning virtual-real fusion device provided by the invention, the information acquisition module, conversion module, judgment module and calling module are connected in sequence and work cooperatively. The information acquisition module acquires the position and view-angle direction of each camera in the world coordinate system, the coordinates of the target object in the world coordinate system and the world matrix of each camera; the conversion module converts the coordinates of the target object in the world coordinate system into coordinates in the camera coordinate system of each camera; the judgment module judges whether the coordinates of the target object in each camera coordinate system lie within that camera's view cone; and the calling module calls the surveillance videos of the cameras whose view cones contain the target object to display its monitoring state. The actual state of the target object can thus be effectively reflected, bidirectional mapping is realized between the physical entity corresponding to the surveillance video and the digital space corresponding to the BIM three-dimensional model, and, based on the surveillance-camera information and the positioned target-object information in the BIM three-dimensional model, the surveillance video containing the target object can be retrieved and played so as to obtain its actual working state.
It should be noted that the above-mentioned embodiments enable a person skilled in the art to more fully understand the invention, without restricting it in any way. Therefore, although the present invention has been described in detail with reference to the drawings and examples, it will be understood by those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention.

Claims (8)

1. A target positioning virtual-real fusion method is used for acquiring the actual state of a target on a construction site, and is characterized by comprising the following steps:
an information acquisition step: deploying a plurality of surveillance cameras on a construction site, or using the site's existing surveillance cameras, and carrying out BIM modeling of the site to obtain a BIM three-dimensional model of the construction site; acquiring the position and view-angle direction, in a world coordinate system, of each camera in the BIM three-dimensional model; acquiring the coordinates of the target object in the world coordinate system; calculating the world matrix of each camera in the world coordinate system from each camera's position and view-angle direction; and acquiring the three-dimensional field of view of the BIM three-dimensional model corresponding to each camera;
a conversion step of converting the coordinates of the target object in the world coordinate system into coordinates in the camera coordinate system of each camera according to the world matrix of each camera in the world coordinate system;
a judging step: judging whether the coordinates of the target object in the camera coordinate system of each camera are within the three-dimensional field of view of the BIM three-dimensional model corresponding to that camera, so as to obtain all fields of view containing the coordinates of the target object;
and a calling step: calling and displaying the surveillance videos of the cameras corresponding to all the fields of view that contain the coordinates of the target object, so as to obtain the displayed monitoring state of the target object.
2. The method of claim 1, wherein in the converting step, the converting the coordinates of the object in the world coordinate system to coordinates in the camera coordinate system of each camera according to the world matrix of each camera in the world coordinate system comprises:
and calculating an inverse matrix corresponding to the world matrix of each camera in the world coordinate system, and calculating the product of the inverse matrix corresponding to the world matrix of each camera in the world coordinate system and the coordinates of the target object in the world coordinate system to obtain the coordinates of the target object in the camera coordinate system of the corresponding camera within the view cone range of the corresponding camera.
3. The method of claim 1 or 2, wherein the world matrices are 4 x 4 matrices, each of the world matrices includes a position and a viewing direction of each camera, the first three columns of the world matrices represent vectors of the model itself to the right, up, and forward, respectively, and the fourth column of the world matrices represents a position of a camera.
4. The method of claim 1 or 2, wherein the target comprises one or more constructors.
5. An object positioning virtual-real fusion device for acquiring an actual state of an object on a construction site, comprising:
an information acquisition module, used for acquiring the position and view-angle direction, in a world coordinate system, of each of a plurality of surveillance cameras deployed on a construction site, within the site BIM three-dimensional model obtained by BIM modeling of the site; acquiring the coordinates of the target object in the world coordinate system; calculating the world matrix of each camera in the world coordinate system from each camera's position and view-angle direction; and acquiring the three-dimensional field of view of the BIM three-dimensional model corresponding to each camera;
the conversion module is used for converting the coordinates of the target object in the world coordinate system into the coordinates in the camera coordinate system of each camera according to the world matrix of each camera in the world coordinate system;
the judging module is used for judging whether the coordinates of the target object in the camera coordinate system of each camera are in the three-dimensional view range of the BIM three-dimensional model corresponding to the corresponding camera, and obtaining all views containing the coordinates of the target object in the camera coordinate system;
and the calling module is used for calling and displaying the monitoring video of the cameras in all the view fields containing the coordinates of the target object in the camera coordinate system so as to obtain the display monitoring state of the target object.
6. The apparatus of claim 5, wherein the conversion module is configured to calculate an inverse matrix corresponding to the world matrix of each camera in the world coordinate system, and calculate a product of the inverse matrix corresponding to the world matrix of each camera in the world coordinate system and the coordinates of the target object in the world coordinate system, so as to obtain the coordinates of the target object in the camera coordinate system of the corresponding camera within the view cone of the corresponding camera.
7. The apparatus of claim 5 or 6, wherein the world matrices are 4 x 4 matrices, each of the world matrices includes the position and the viewing direction of each camera, the first three columns of the world matrices represent vectors of the model itself to the right, up, and forward, respectively, and the fourth column of the world matrices represents the position of the camera.
8. The apparatus of claim 5 or 6, wherein the target comprises one or more constructors.
CN201911404384.2A 2019-12-31 2019-12-31 Target object positioning virtual-real fusion method and device Active CN111199584B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911404384.2A CN111199584B (en) 2019-12-31 2019-12-31 Target object positioning virtual-real fusion method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911404384.2A CN111199584B (en) 2019-12-31 2019-12-31 Target object positioning virtual-real fusion method and device

Publications (2)

Publication Number Publication Date
CN111199584A true CN111199584A (en) 2020-05-26
CN111199584B CN111199584B (en) 2023-10-20

Family

ID=70746215

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911404384.2A Active CN111199584B (en) 2019-12-31 2019-12-31 Target object positioning virtual-real fusion method and device

Country Status (1)

Country Link
CN (1) CN111199584B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111696216A (en) * 2020-06-16 2020-09-22 浙江大华技术股份有限公司 Three-dimensional augmented reality panorama fusion method and system
CN111832105A (en) * 2020-06-24 2020-10-27 万翼科技有限公司 Model fusion method and related device
CN113660421A (en) * 2021-08-16 2021-11-16 北京中安瑞力科技有限公司 Linkage method and linkage system for positioning videos
CN118400510A (en) * 2024-06-26 2024-07-26 中科星图金能(南京)科技有限公司 Method for assisting park emergency command based on spatialization video

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011118282A1 (en) * 2010-03-24 2011-09-29 株式会社日立製作所 Server using world coordinate system database and terminal
DE102011100628A1 (en) * 2011-05-05 2012-11-08 Deutsches Zentrum für Luft- und Raumfahrt e.V. Method for determining parameter of camera i.e. mobile traffic monitoring camera, involves determining camera parameter based on image coordinates of structure, absolute Cartesian coordinates of structure and programmable mapping function
US20140192159A1 (en) * 2011-06-14 2014-07-10 Metrologic Instruments, Inc. Camera registration and video integration in 3d geometry model
WO2017077217A1 (en) * 2015-11-04 2017-05-11 Smart Pixels 3-d calibration of a video mapping system
CN110047142A (en) * 2019-03-19 2019-07-23 中国科学院深圳先进技术研究院 No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011118282A1 (en) * 2010-03-24 2011-09-29 株式会社日立製作所 Server using world coordinate system database and terminal
DE102011100628A1 (en) * 2011-05-05 2012-11-08 Deutsches Zentrum für Luft- und Raumfahrt e.V. Method for determining parameter of camera i.e. mobile traffic monitoring camera, involves determining camera parameter based on image coordinates of structure, absolute Cartesian coordinates of structure and programmable mapping function
US20140192159A1 (en) * 2011-06-14 2014-07-10 Metrologic Instruments, Inc. Camera registration and video integration in 3d geometry model
WO2017077217A1 (en) * 2015-11-04 2017-05-11 Smart Pixels 3-d calibration of a video mapping system
CN110047142A (en) * 2019-03-19 2019-07-23 中国科学院深圳先进技术研究院 No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
马伟; 宫乐; 冯浩; 殷晨波; 周俊静; 曹东辉: "Vision-based pose measurement of the working device of an excavator" (基于视觉的挖掘机工作装置位姿测量)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111696216A (en) * 2020-06-16 2020-09-22 浙江大华技术股份有限公司 Three-dimensional augmented reality panorama fusion method and system
CN111696216B (en) * 2020-06-16 2023-10-03 浙江大华技术股份有限公司 Three-dimensional augmented reality panorama fusion method and system
CN111832105A (en) * 2020-06-24 2020-10-27 万翼科技有限公司 Model fusion method and related device
CN113660421A (en) * 2021-08-16 2021-11-16 北京中安瑞力科技有限公司 Linkage method and linkage system for positioning videos
CN118400510A (en) * 2024-06-26 2024-07-26 中科星图金能(南京)科技有限公司 Method for assisting park emergency command based on spatialization video

Also Published As

Publication number Publication date
CN111199584B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN111199584B (en) Target object positioning virtual-real fusion method and device
US11394950B2 (en) Augmented reality-based remote guidance method and apparatus, terminal, and storage medium
US11710322B2 (en) Surveillance information generation apparatus, imaging direction estimation apparatus, surveillance information generation method, imaging direction estimation method, and program
US9760987B2 (en) Guiding method and information processing apparatus
CN108234927B (en) Video tracking method and system
JP6586834B2 (en) Work support method, work support program, and work support system
US20210027545A1 (en) Method and system for visualizing overlays in virtual environments
CN111192321B (en) Target three-dimensional positioning method and device
CN105828045A (en) Method and device for tracking target by using spatial information
CN101707671A (en) Panoramic camera and PTZ camera combined control method and panoramic camera and PTZ camera combined control device
CN111914819A (en) Multi-camera fusion crowd density prediction method and device, storage medium and terminal
CN111246181B (en) Robot monitoring method, system, equipment and storage medium
Assadzadeh et al. Excavator 3D pose estimation using deep learning and hybrid datasets
KR20120076175A (en) 3d street view system using identification information
CN107124581A (en) Video camera running status and suspected target real-time display system on the electronic map
US20180020203A1 (en) Information processing apparatus, method for panoramic image display, and non-transitory computer-readable storage medium
CN103260008B (en) A kind of image position is to the projection conversion method of physical location
CN104700409B (en) A method of according to monitoring objective adjust automatically preset positions of camera
CN116524143A (en) GIS map construction method
CN104157135A (en) Intelligent traffic system
CN113836337B (en) BIM display method, device, equipment and storage medium
Kamat et al. GPS and 3DOF tracking for georeferenced registration of construction graphics in outdoor augmented reality
CN113436317B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112860946B (en) Method and system for converting video image information into geographic information
KR101710860B1 (en) Method and apparatus for generating location information based on video image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant