WO2023178729A1 - A method and system for museum visit analysis based on BIM and video surveillance - Google Patents

A method and system for museum visit analysis based on BIM and video surveillance

Info

Publication number
WO2023178729A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2022/084962
Other languages
English (en)
French (fr)
Inventor
薛帆
叶嘉安
吴怡洁
杨仲泽
Original Assignee
香港大学深圳研究院
Application filed by 香港大学深圳研究院
Publication of WO2023178729A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/451 - Execution arrangements for user interfaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/003 - Navigation within 3D models or images
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • The present invention relates to the field of computer technology, and in particular to a method and system for museum visit analysis based on BIM and video surveillance.
  • Visitor management is an important part of the daily work of museums. At the same time, under the requirements of normalized epidemic prevention and control, strictly controlling visitor numbers and avoiding crowding are top priorities of current visit management, and museums often need to invest considerable manpower to ensure orderly visits. In addition, visitors' behavior is an important source of feedback and reference for museums in exhibition planning and exhibit layout. Against the background of the rapid development of information technology, smarter and more efficient visitor analysis and management can be achieved by means of three-dimensional digitization, building information modeling, and visual data understanding.
  • Building Information Modeling (BIM) is a data-driven tool applied in engineering design, construction, and management; it builds a virtual three-dimensional model of building information to support management and analysis functions inside the building.
  • Automatically recognizing and counting visits in the surveillance video stream makes it possible to quantitatively analyze exhibition effectiveness and to provide more precise visitor feedback for exhibition planning and exhibit layout.
  • To overcome the shortcomings of the prior art, the present invention provides a method and system for museum visit analysis based on BIM and video surveillance to solve the above technical problems.
  • The technical solution adopted by the present invention is a method of museum visit analysis based on BIM and video surveillance, the improvement being that it includes the following steps:
  • S1, the BIM model building module performs laser point cloud scanning of the museum interior, completes the museum BIM modeling, generates a voxel model, and records the camera pose fitting results into the BIM model;
  • S2, the video stream acquisition and calibration module calls the video stream, intercepts the corresponding video frames, calibrates the camera's internal parameters, and integrates the results into the camera intrinsic matrix K;
  • S3, the spatial registration module calculates the three-dimensional voxel coordinates corresponding to each pixel coordinate of the video stream based on the voxel model, the camera pose, and the camera intrinsics K, obtains the pixel-to-voxel correspondence, and completes the spatial registration of the surveillance video images with the BIM model;
  • S4, the audience detection and positioning module detects the human-body key points in the video frames, saves the pixel positions of the two foot nodes among the key-point results, accesses the pixel-to-voxel correspondence, determines the voxels where the audience's feet are located, and positions the audience indoors;
  • S5, the attention analysis module obtains the visit durations of all exhibition areas and exhibits from the audience positioning results, normalizes the visit-duration data, and computes attention statistics.
  • The method further includes step S6, in which the attention analysis module performs accessibility analysis on the exhibition areas and exhibits.
  • Step S6 includes the following steps: S61, calculate the shortest path from the museum entrance to the ground-voxel region of each exhibition area; calculate the shortest path from the museum entrance to each exhibit's center point; calculate the number of ground voxels corresponding to each exhibition area; calculate the volume of the bounding cuboid of each exhibit's voxels; and calculate the reciprocal of the shortest distance between each exhibit and the wall voxels; S62, compute the five indicators of step S61 using the A* algorithm; S63, normalize the five indicators to obtain the accessibility indicators of the exhibition areas and exhibits:
  • Exhibition area accessibility = (1 / exhibition area path length) × exhibition area size
  • Exhibit accessibility = (1 / exhibit path length) × exhibit size × exhibit centrality.
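  • The shortest-path indicators of S61 and S62 can be sketched with a small A* search over a 2D grid of walkable ground voxels (a minimal illustration; the toy floor plan is invented):

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 2D grid of walkable ground voxels.
    grid: set of walkable (x, y) cells; start/goal: (x, y) tuples.
    Returns the path length in voxel steps, or None if unreachable."""
    def h(p):  # Manhattan-distance heuristic, admissible for 4-connectivity
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]
    best = {start: 0}
    while open_heap:
        f, g, cur = heapq.heappop(open_heap)
        if cur == goal:
            return g
        if g > best.get(cur, float("inf")):
            continue  # stale heap entry
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in grid and g + 1 < best.get(nxt, float("inf")):
                best[nxt] = g + 1
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return None

# Toy museum floor: a 5x5 walkable area with a wall column blocking x = 2
floor = {(x, y) for x in range(5) for y in range(5)} - {(2, 1), (2, 2), (2, 3)}
print(astar(floor, (0, 2), (4, 2)))  # detour around the wall: 8
```

The returned step count, multiplied by the voxel side length, gives the path-length term in the accessibility formulas above.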
  • step S1 includes the following steps:
  • The axis alignment operation in step S14 rotates the point-cloud coordinate system around the z-axis.
  • The rotation angle is calculated as follows:
  • S142, compute the projection direction and length of each normal vector in the horizontal plane; if the length exceeds the threshold 0.5, the point is judged to belong to a vertical structure and is retained for the rotation-angle calculation; if it is below the threshold 0.5, the point is discarded and does not participate in the rotation-angle calculation;
  • Δθ_i is the difference between the fitted angle and the horizontal projection angle of a point's normal vector, and N is the number of points participating in the rotation-angle calculation;
  • step S2 includes the following steps:
  • In step S3, the three-dimensional voxel coordinates corresponding to each pixel coordinate of the video stream are calculated.
  • The camera coordinate system is established with the camera's optical center as the origin, the camera's forward direction as the z-axis, and the horizontal and vertical directions of the imaging plane as the x- and y-axes; K is the camera intrinsic matrix, and the relationship between the pixel coordinates P_i(u, v) and P_c is z_c·(u, v, 1)^T = K·P_c.
  • In the camera coordinate system, the photographed point has coordinates P_c(x_c, y_c, z_c), where z_c is the distance from the photographed point to the camera's optical center; P_c is related to the point's coordinates in the BIM model coordinate system by P_c = T·P_w, where T is the camera extrinsic matrix [R|t].
  • step S4 includes the following steps:
  • step S5 includes the following steps:
  • the above method also includes step S7.
  • the crowd density analysis and early warning module generates a ground thermal voxel model based on the audience positioning results, displays the voxel color according to the density, and completes the crowd density analysis and early warning;
  • the step S7 includes the following steps:
  • The present invention also provides a system for museum visit analysis based on BIM and video surveillance, including a BIM model building module, a video stream acquisition and calibration module, a spatial registration module, an audience detection and positioning module, and an attention analysis module.
  • the BIM model building module is used to scan the museum interior with laser point clouds, complete the museum BIM modeling, generate a voxel model, and record the camera pose fitting results into the BIM model;
  • the video stream acquisition and calibration module is used to call the video stream, intercept the corresponding video frames, perform camera intrinsic calibration, and obtain the intrinsics K of each camera model;
  • the spatial registration module is connected to the BIM model building module and the video stream acquisition and calibration module, and is used to calculate, based on the voxel model, the camera pose, and the camera intrinsics K, the three-dimensional voxel coordinates corresponding to each pixel coordinate of the video stream, obtain the pixel-to-voxel correspondence, and complete the spatial registration of the surveillance video images with the BIM model;
  • the audience detection and positioning module is connected to the video stream acquisition and calibration module and the spatial registration module, and is used to detect the human-body key points in the video frames, save the pixel positions of the two foot nodes among the key-point results, access the pixel-to-voxel correspondence, determine the voxels where the audience's feet are located, and position the audience indoors;
  • the attention analysis module is connected to the audience detection and positioning module; after obtaining the visit durations of all exhibition areas and exhibits from the audience positioning results, it normalizes the visit-duration data of the exhibition areas and exhibits and computes attention statistics.
  • The beneficial effects of the invention are: a museum BIM model is built from point clouds and the museum's existing three-dimensional models of exhibits and cameras; the BIM model is spatially registered with the pixels of the surveillance video; the pixels of the audience's feet are detected and the measured foot pixel coordinates are mapped into the three-dimensional coordinates of the BIM model to position the audience. Based on the positioning results, combined with the accessibility of the exhibits and exhibition areas, the numbers of visitors entering each exhibition area and viewing each exhibit within a given period are counted to analyze the audience's attention to each exhibit and exhibition area. Real-time crowd density monitoring and early warning can also be achieved: museum staff can set a crowd-density alarm threshold.
  • Once a voxel or voxel region maintains high crowd density for a certain duration, visitors can be appropriately guided during their tours to avoid crowding. By reusing the museum's existing surveillance video network, no new equipment needs to be installed and no additional equipment cost is incurred, enabling low-cost analysis of museum crowd density and of attention to exhibits and exhibition areas, with high practicality.
  • Figure 1 is a flow chart of a museum visit analysis method based on BIM and video surveillance according to the present invention.
  • Figure 2 is a schematic diagram of the correspondence between the bipedal coordinates and the three-dimensional coordinate system of the BIM model in the present invention.
  • Figure 3 is a schematic diagram of the correspondence between pixel coordinates and voxel coordinates in the video image in the present invention.
  • a method of museum visit analysis based on BIM and video surveillance of the present invention includes the following steps:
  • the BIM model building module performs laser point cloud scanning of the interior of the museum, completes the BIM modeling of the museum, generates a voxel model, and records the camera pose fitting results into the BIM model;
  • step S1 includes the following steps:
  • S12, use the RandLA-Net algorithm to perform three-dimensional semantic segmentation on each segmented point cloud, dividing out the different BIM model elements such as walls, floors, and steps.
  • RandLA-Net is a neural network model based on random point sampling and local feature aggregation for large-scale point cloud semantic segmentation tasks;
  • Open3D is the three-dimensional data open source algorithm library, and the Registration interface is the point cloud registration interface;
  • S14, to reduce accuracy loss in the subsequent point cloud and voxel processing, perform an axis alignment operation on the museum's global point cloud.
  • The axis alignment operation rotates the point-cloud coordinate system around the z-axis (the vertical direction) so that most vertical structures (walls, etc.) in the point cloud become parallel to the x- and y-axes of the new coordinate system.
  • The key to axis alignment is the calculation of the rotation angle.
  • The rotation angle is calculated as follows:
  • S141, call the EstimateNormal function (the normal vector estimation function) in the Open3D library to compute the normal vectors of all points in the point cloud, and normalize them so that each normal vector has unit length;
  • S142, compute the projection direction and length of each normal vector in the horizontal plane; if the length exceeds the threshold 0.5, the point is judged to belong to a vertical structure and is retained for the rotation-angle calculation; if it is below the threshold 0.5, the point is discarded and does not participate in the rotation-angle calculation; S143, establish the optimization objective function, where
  • Δθ_i is the difference between the fitted angle and the horizontal projection angle of a point's normal vector, and N is the number of points participating in the rotation-angle calculation; S144, solve with a derivative-free optimization method, calling the nlopt library to complete the solution and obtain the rotation angle; the nlopt
  • library is a nonlinear derivative-free optimization algorithm library.
  • In this embodiment, the voxel model is stored in three parts: (1) an independent voxel model: the file header records the origin coordinates and voxel side length of the voxel model, and each voxel is recorded as (vid, x, y, z, tid, pid, rid), giving its three-dimensional coordinates and attributes, where vid is the voxel id; x, y, and z are the voxel's integer three-dimensional coordinates; and the attributes include tid, the voxel type (wall: 0, ground: 1, steps: 2, exhibit: 3); pid, which for an exhibit voxel is the exhibit id in the corresponding digital exhibit information system; and rid, which for a ground voxel belonging to an exhibition area is the id of that exhibition area.
  • Independent voxel files are stored as text files and can be compressed into binary files on demand;
  • (2) digital-exhibit-associated storage: in the museum's existing digital exhibit information system, a voxel field is added to each exhibit, and the vids of the exhibit's voxels are recorded in that field as a set;
  • (3) exhibition-area-associated storage: in the museum's existing operations management database, an exhibition area table is added (or the original table is extended) with a voxel field, and the vids of the ground voxels of each exhibition area are recorded in that field as a set.
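  • The independent voxel file and the per-exhibit vid sets described above can be illustrated with a small parser. This is a sketch: the record field order (vid, x, y, z, tid, pid, rid) and type codes follow the text, but the header layout and the empty-field convention for inapplicable pid/rid are assumptions.

```python
# Voxel types from the text: wall=0, ground=1, steps=2, exhibit=3
VOXEL_TYPES = {0: "wall", 1: "ground", 2: "steps", 3: "exhibit"}

def parse_voxel_file(lines):
    """Parse an independent voxel file: a header line with the model origin
    and voxel side length, then one (vid,x,y,z,tid,pid,rid) record per line.
    Empty pid/rid fields mean the attribute does not apply to this voxel."""
    header = lines[0].split()
    origin = tuple(float(v) for v in header[:3])
    side = float(header[3])
    voxels = {}
    for line in lines[1:]:
        vid, x, y, z, tid, pid, rid = (line.split(",") + ["", ""])[:7]
        voxels[int(vid)] = {
            "xyz": (int(x), int(y), int(z)),
            "tid": int(tid),
            "pid": int(pid) if pid else None,  # exhibit id, exhibit voxels only
            "rid": int(rid) if rid else None,  # exhibition-area id, ground voxels only
        }
    return origin, side, voxels

def vids_by_exhibit(voxels):
    """Collect the voxel-id set per exhibit, as stored in the voxel field
    added to the digital exhibit information system."""
    out = {}
    for vid, v in voxels.items():
        if v["tid"] == 3 and v["pid"] is not None:
            out.setdefault(v["pid"], set()).add(vid)
    return out

records = [
    "0.0 0.0 0.0 0.05",  # origin x y z, voxel side length (m)
    "1,0,0,0,0,,",       # wall voxel
    "2,1,0,0,1,,7",      # ground voxel in exhibition area 7
    "3,1,1,1,3,42,",     # exhibit voxel of exhibit 42
    "4,1,2,1,3,42,",
]
origin, side, voxels = parse_voxel_file(records)
print(vids_by_exhibit(voxels))  # {42: {3, 4}}
```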
  • the model uses absolute dimensions.
  • Using the camera's three-dimensional model as a template, perform template matching and three-dimensional pose fitting in the point cloud.
  • The matching method is similar to the template matching of the digital exhibit models; the camera's three-dimensional position coordinates and rotation angles in the BIM coordinate system are computed, i.e. the camera's extrinsics T (hereinafter the camera extrinsics).
  • The three-dimensional position is represented by the vector t,
  • and the three-dimensional rotation by the matrix R; together they form the extrinsic matrix T = [R|t].
  • The video stream acquisition and calibration module calls the video stream, intercepts the corresponding video frames, and calibrates the camera's internal parameters.
  • The internal parameters include the focal length, the imaging-plane translation, distortion, and the like; the results are integrated into the camera intrinsic matrix K;
  • step S2 includes the following steps:
  • S23, use the findChessboardCorners and calibrateCamera functions in the opencv library to perform camera intrinsic calibration and obtain the intrinsics K of each camera model.
  • The intrinsics K are recorded in the BIM model to support subsequent real-time and batch calculations.
  • The opencv library is an open-source computer vision algorithm library; findChessboardCorners is the chessboard corner detection function and calibrateCamera is the camera parameter calibration function.
  • the spatial registration module calculates the three-dimensional voxel coordinates corresponding to each pixel coordinate of the video stream based on the voxel model, camera pose and camera internal parameter K, obtains the correspondence between pixels and voxels, and completes the monitoring video image and Spatial registration of BIM models;
  • In step S3, the three-dimensional voxel coordinates corresponding to each pixel coordinate of the video stream are calculated.
  • The camera coordinate system is established with the camera's optical center as the origin, the camera's forward direction as the z-axis, and the horizontal and vertical directions of the imaging plane as the x- and y-axes; K is the camera intrinsic matrix, and the relationship between the pixel coordinates P_i(u, v) and P_c is z_c·(u, v, 1)^T = K·P_c.
  • In the camera coordinate system, the photographed point has coordinates P_c(x_c, y_c, z_c), where z_c is the distance from the photographed point to the camera's optical center; P_c is related to the point's coordinates in the BIM model coordinate system by the extrinsics T.
  • z_c cannot be determined directly.
  • This solution uses the established three-dimensional voxel model to search, pixel by pixel over candidate values of z_c, for the nearest voxel that can be found in the BIM model, taking it as the three-dimensional spatial position corresponding to the pixel, i.e. registering the pixel to the BIM model. The voxel coordinates successfully solved for each pixel of a camera are recorded into the camera's attributes to support the subsequent indoor positioning of the audience by the audience detection and positioning module.
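  • The pixel-by-pixel search over z_c can be sketched as follows: back-project the pixel's viewing ray using the pinhole relation and march along it until an occupied voxel of the BIM model is hit. This is a minimal pure-Python illustration; the intrinsic values, the 1 m voxel size, and the identity camera pose (R = I, t = 0, so camera and BIM coordinates coincide) are assumptions made for the example, whereas the real system uses the calibrated K and the fitted extrinsics T = [R|t].

```python
def pixel_to_voxel(u, v, K, occupied, side=1.0, z_max=20.0):
    """Back-project pixel (u, v) along its viewing ray and return the first
    occupied voxel hit, i.e. the BIM voxel registered to this pixel.
    K = (fx, fy, cx, cy); camera pose assumed identity here (R=I, t=0),
    so camera coordinates equal BIM world coordinates."""
    fx, fy, cx, cy = K
    z_c = side
    while z_c < z_max:             # march along the ray; depth z_c is unknown a priori
        x_c = (u - cx) * z_c / fx  # pinhole model: u = fx*x_c/z_c + cx
        y_c = (v - cy) * z_c / fy
        voxel = (int(x_c // side), int(y_c // side), int(z_c // side))
        if voxel in occupied:
            return voxel
        z_c += side / 2            # step finer than one voxel to avoid skipping
    return None                    # ray leaves the model without a hit

K = (500.0, 500.0, 320.0, 240.0)  # hypothetical fx, fy, cx, cy
# Toy scene: a wall slab of occupied voxels 5-6 m in front of the camera
occupied = {(x, y, 5) for x in range(-5, 6) for y in range(-5, 6)}
print(pixel_to_voxel(320, 240, K, occupied))  # principal ray hits (0, 0, 5)
```

Recording the returned voxel for every pixel of a camera yields exactly the pixel-to-voxel table that the audience positioning step later looks up.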
  • The audience detection and positioning module detects the human-body key points in the video frames, saves the pixel positions of the two foot nodes among the key-point results, and accesses the pixel-to-voxel correspondence to determine the voxels where the audience's feet are located and position the audience indoors;
  • step S4 includes the following steps:
  • S41, use the Mask R-CNN architecture in the computer vision processing library Detectron to detect the human-body key points in the video frames.
  • This architecture handles occlusion well, so it also performs well when viewers block each other in the video frame.
  • The Mask R-CNN architecture is a mask-based instance segmentation algorithm built on convolutional neural networks;
  • the attention analysis module obtains the visit duration of all exhibition areas and exhibits based on the audience positioning results, normalizes the visit duration data of the exhibition areas and exhibits, and collects attention statistics;
  • step S5 includes the following steps:
  • a new label of whether to watch the exhibition is added to the data set of the audience detection and positioning module, and the training is performed simultaneously with the human body key point detection branch of the audience detection and positioning module;
  • The fact that an audience member stays in an exhibit's viewing area does not by itself mean that they are viewing the exhibit, so the member's behavior must also be recognized. A supervised learning method is therefore used to judge whether a member staying in the viewing area is actually viewing the corresponding exhibit; to train this judgment, the corresponding ground-truth data must be annotated. A deep-learning image processing method is used: a state-of-the-art classification convolutional neural network is fine-tuned on the annotated ground-truth data.
  • step S6 is also included.
  • the attention analysis module performs accessibility analysis on the exhibition area and exhibits.
  • step S6 includes the following steps:
  • Exhibition area accessibility = (1 / exhibition area path length) × exhibition area size; exhibit accessibility = (1 / exhibit path length) × exhibit size × exhibit centrality.
  • The solution of the present invention analyzes attention comprehensively, combining visitors' visit durations with the accessibility of the exhibition areas and exhibits, to give museum staff a more accurate curatorial reference.
  • Museum staff can analyze the visit duration and accessibility of exhibition areas and exhibits separately, and can also compute the "net attention" of exhibits and exhibition areas, i.e. normalized visit duration divided by accessibility, to view
  • the attention each exhibit and exhibition area receives once the influence of accessibility is excluded. This helps museum managers find exhibition areas or exhibits that are highly accessible yet draw a lukewarm response from the audience, or that are not very accessible yet still captivate it.
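  • The "net attention" described above can be sketched as follows. Share-of-total normalization is one simple choice (the text only says the data are normalized, not how), and the exhibit names and figures are invented for the example:

```python
def normalize(values):
    """Share-of-total normalization of a dict of raw scores."""
    total = sum(values.values())
    return {k: v / total for k, v in values.items()}

def net_attention(visit_seconds, accessibility):
    """Net attention = normalized visit duration / normalized accessibility,
    exposing exhibits whose popularity is not explained by how easy they
    are to reach. Exhibits with zero accessibility are skipped."""
    dur = normalize(visit_seconds)
    acc = normalize(accessibility)
    return {k: dur[k] / acc[k] for k in dur if acc[k] > 0}

visits = {"bronze_ding": 1800, "jade_seal": 5400, "mural": 3600}   # seconds
access = {"bronze_ding": 0.9, "jade_seal": 0.2, "mural": 0.6}      # accessibility
scores = net_attention(visits, access)
print(max(scores, key=scores.get))  # jade_seal: popular despite being hard to reach
```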
  • the crowd density analysis and early warning module generates a ground thermal voxel model based on the audience positioning result, displays the voxel color according to the density, and completes the crowd density analysis and early warning;
  • step S7 includes the following steps:
  • S72, provide managers with a three-dimensional visualization interface that displays voxel colors according to density;
  • the invention also provides a system for museum visit analysis based on BIM and video monitoring, including a BIM model building module, a video stream acquisition and calibration module, a spatial registration module, an audience detection and positioning module, and an attention analysis module.
  • The BIM model building module is used to scan the museum interior with laser point clouds, complete the museum BIM modeling, generate a voxel model, and record the camera pose fitting results into the BIM model. Further, considering that some museums already have digital three-dimensional models of their exhibits, this solution can match those existing models against the LiDAR point cloud and fit their three-dimensional spatial positions to determine each exhibit's three-dimensional pose in the museum, i.e. its position coordinates and angles.
  • These data, together with the corresponding exhibit model numbers, are stored in the BIM model. Further, based on the fitted exhibit models and poses, this solution can create a voxel model of each exhibit in the coordinate system of the museum BIM model. Further, considering that the layout of exhibition areas is generally flexible and changeable, in this solution staff or modelers delimit and label the ground voxel model; see the ground voxels of exhibition area 1 in Figure 3. Further, a distance threshold for viewing exhibits is set, and the viewing-area voxels corresponding to each exhibit are delimited among the ground voxels; see the ground-voxel division of exhibits A-D in Figure 3;
  • the video stream acquisition and calibration module is used to call the video stream, intercept the corresponding video frames, perform camera intrinsic calibration, and obtain the intrinsics K of each camera model;
  • the spatial registration module is connected to the BIM model building module and the video stream acquisition and calibration module, and is used to calculate, based on the voxel model, the camera pose, and the camera intrinsics K, the three-dimensional voxel coordinates corresponding to each pixel coordinate of the video stream, obtain the pixel-to-voxel correspondence, and complete the spatial registration of the surveillance video images with the BIM model;
  • the audience detection and positioning module is connected to the video stream acquisition and calibration module and the spatial registration module, and is used to detect the human-body key points in the video frames, save the pixel positions of the two foot nodes among the key-point results, access the pixel-to-voxel correspondence, determine the voxels where the audience's feet are located, and position the audience indoors;
  • the attention analysis module is connected to the audience detection and positioning module; after obtaining the visit durations of all exhibition areas and exhibits from the audience positioning results, it normalizes the visit-duration data of the exhibition areas and exhibits and computes attention statistics.
  • This invention builds a museum BIM model from point clouds and the museum's existing three-dimensional models of exhibits and cameras, spatially registers the BIM model with the pixels of the surveillance video, detects the pixels of the audience's feet, and maps the measured foot pixel
  • coordinates into the three-dimensional coordinates of the BIM model to position the audience. Based on the positioning results, combined with the accessibility of the exhibits and exhibition areas, the numbers of visitors entering each exhibition area and viewing each exhibit within a given period are counted to analyze the audience's attention to each exhibit and exhibition area; real-time crowd density monitoring and early warning can also be achieved, with museum staff setting a crowd-density alarm threshold.
  • Once a voxel or voxel region maintains high crowd density for a certain duration, visitors can be appropriately guided to avoid crowding. By reusing the museum's existing surveillance video network, no new equipment needs to be installed and no additional equipment cost is incurred, achieving low-cost analysis of museum crowd density and of attention to exhibits and exhibition areas, with high practicality.


Abstract

The present invention provides a method of museum visit analysis based on BIM and video surveillance: S1, perform laser point cloud scanning of the museum interior, complete the museum BIM modeling, generate a voxel model, and record the camera pose fitting results into the BIM model; S2, call the video stream, intercept the corresponding video frames, calibrate the camera's internal parameters, and integrate the results into the camera intrinsic matrix K; S3, based on the voxel model, the camera pose, and the camera intrinsics K, calculate the three-dimensional voxel coordinates corresponding to each pixel coordinate of the video stream, obtain the pixel-to-voxel correspondence, and complete the spatial registration of the surveillance video images with the BIM model; S4, detect the human-body key points in the video frames, save the pixel positions of the two foot nodes among the key-point results, access the pixel-to-voxel correspondence, and position the audience indoors; S5, obtain the visit durations of all exhibition areas and exhibits, and compute attention statistics.

Description

A method and system for museum visit analysis based on BIM and video surveillance
Technical Field
The present invention relates to the field of computer technology, and in particular to a method and system for museum visit analysis based on BIM and video surveillance.
Background Art
Visitor management is an important part of the daily work of museums. At the same time, under the requirements of normalized epidemic prevention and control, strictly controlling visitor numbers and avoiding crowding are top priorities of current visit management, and museums often need to invest considerable manpower to ensure orderly visits. In addition, visitors' behavior is an important source of feedback and reference for museums in exhibition planning and in the layout of exhibits within exhibition areas. Against the background of the rapid development of information technology, smarter and more efficient visitor analysis and management can be achieved by means of three-dimensional digitization, building information modeling, and visual data understanding.
Building Information Modeling (BIM) is a data-driven tool applied in engineering design, construction, and management. Its core is to build a virtual three-dimensional model of building information and, through digital technology, support various management and analysis functions inside the building. Surveillance cameras are standing security facilities in museums. In traditional security work, surveillance video is generally watched by staff who raise alerts; this requires dedicated manpower on the one hand, and on the other hand alerts may fail to be issued in time due to staff fatigue and similar problems. Automated surveillance video analysis and early warning can lighten the security burden on museum staff and provide an additional safeguard for visit management in the context of epidemic prevention and control. Moreover, besides serving security needs, surveillance video also records a large amount of footage of visitors; automatically recognizing and counting visits in the surveillance video streams makes it possible to quantitatively analyze exhibition effectiveness and to provide more precise visitor feedback for exhibition planning and exhibit layout. In view of this, the present invention provides a method and system for museum visit analysis based on BIM and video surveillance.
Summary of the Invention
To overcome the shortcomings of the prior art, the present invention provides a method and system for museum visit analysis based on BIM and video surveillance to solve the above technical problems.
The technical solution adopted by the present invention to solve its technical problem is a method of museum visit analysis based on BIM and video surveillance, the improvement being that it includes the following steps: S1, the BIM model building module performs laser point cloud scanning of the museum interior, completes the museum BIM modeling, generates a voxel model, and records the camera pose fitting results into the BIM model; S2, the video stream acquisition and calibration module calls the video stream, intercepts the corresponding video frames, calibrates the camera's internal parameters, and integrates the results into the camera intrinsic matrix K; S3, the spatial registration module calculates the three-dimensional voxel coordinates corresponding to each pixel coordinate of the video stream based on the voxel model, the camera pose, and the camera intrinsics K, obtains the pixel-to-voxel correspondence, and completes the spatial registration of the surveillance video images with the BIM model; S4, the audience detection and positioning module detects the human-body key points in the video frames, saves the pixel positions of the two foot nodes among the key-point results, accesses the pixel-to-voxel correspondence, determines the voxels where the audience's feet are located, and positions the audience indoors; S5, the attention analysis module obtains the visit durations of all exhibition areas and exhibits from the audience positioning results, normalizes the visit-duration data of the exhibition areas and exhibits, and computes attention statistics.
In the above method, the method further includes step S6, in which the attention analysis module performs accessibility analysis on the exhibition areas and exhibits. Step S6 includes the following steps: S61, calculate the shortest path from the museum entrance to the ground-voxel region of each exhibition area; calculate the shortest path from the museum entrance to each exhibit's center point; calculate the number of ground voxels corresponding to each exhibition area; calculate the volume of the bounding cuboid of each exhibit's voxels; and calculate the reciprocal of the shortest distance between each exhibit and the wall voxels; S62, compute the five indicators of step S61 using the A* algorithm; S63, normalize the five indicators to obtain the accessibility indicators of the exhibition areas and exhibits:
Exhibition area accessibility = (1 / exhibition area path length) × exhibition area size
Exhibit accessibility = (1 / exhibit path length) × exhibit size × exhibit centrality.
In the above method, step S1 includes the following steps:
S11, scan the museum interior with a mobile LiDAR scanning device, using a segmented scanning scheme, to acquire laser point clouds;
S12, use the RandLA-Net algorithm to perform three-dimensional semantic segmentation on each segmented point cloud and divide out the different BIM model elements;
S13, call Open3D's Registration interface to register the segmented point clouds into a unified spatial coordinate reference;
S14, perform an axis alignment operation on the global point cloud;
S15, use the museum's existing digital exhibit models as three-dimensional templates, perform template matching and three-dimensional spatial position fitting in the point cloud, determine the pose of each digital exhibit model in the point cloud, generate the exhibit's three-dimensional point cloud, and replace the scanned exhibit point cloud with the digital exhibit point cloud;
S16, based on the fitted exhibit three-dimensional models and poses, create the exhibits' voxel models in the coordinate system of the museum BIM model;
S17, model the camera types used by the museum in three dimensions, use the camera three-dimensional model as a template to perform template matching and three-dimensional pose fitting in the point cloud, and compute the camera's three-dimensional position coordinates and rotation angles in the BIM coordinate system, i.e. the camera extrinsics T; the three-dimensional position is represented by the vector t and the three-dimensional rotation by the matrix R, written together as the extrinsic matrix T = [R|t], and the camera pose fitting results are recorded into the BIM model.
In the above method, the axis alignment operation of step S14 rotates the point-cloud coordinate system around the z-axis; the rotation angle is calculated as follows:
S141, call the EstimateNormal function in the Open3D library to compute the normal vectors of all points in the point cloud, and normalize them so that each normal vector has unit length;
S142, compute the projection direction and length of each normal vector in the horizontal plane; if the length exceeds the threshold 0.5, the point is judged to belong to a vertical structure and is retained for the rotation-angle calculation; if it is below the threshold 0.5, the point is discarded and does not participate in the rotation-angle calculation;
S143, establish the optimization objective function (given as a formula image in the original publication, PCTCN2022084962-appb-000001), where
Δθ_i is the difference between the fitted angle and the horizontal projection angle of a point's normal vector, and N is the number of points participating in the rotation-angle calculation;
S144, solve with a derivative-free optimization method, calling the nlopt library to complete the solution and obtain the rotation angle.
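The rotation-angle estimation of steps S141 to S144 can be sketched in pure Python. As an assumption, the objective here folds each angle difference to the nearest multiple of 90° (a wall parallel to either the x- or y-axis is equally well aligned), which is one plausible reading of the Δθ_i definition, and a dense 1-D search stands in for the nlopt derivative-free solve; the synthetic normals are invented for the example.

```python
import math

def fold90(angle_deg):
    """Fold an angle difference into (-45, 45]: walls parallel to either
    the x or y axis are equivalent for axis alignment."""
    a = angle_deg % 90.0
    return a - 90.0 if a > 45.0 else a

def rotation_angle(normals, threshold=0.5):
    """Estimate the z-rotation that axis-aligns vertical structures.
    normals: unit normal vectors (nx, ny, nz) of the point cloud.
    Points whose horizontal projection is shorter than the threshold are
    treated as floor/ceiling and excluded, as in step S142."""
    angles = []
    for nx, ny, nz in normals:
        if math.hypot(nx, ny) > threshold:  # vertical structure (wall)
            angles.append(math.degrees(math.atan2(ny, nx)))
    # Minimize the sum of squared folded differences; a dense 1-D search
    # stands in for the nlopt derivative-free solve of S144
    def cost(theta):
        return sum(fold90(a - theta) ** 2 for a in angles)
    return min((t / 10.0 for t in range(0, 900)), key=cost)

# Synthetic walls rotated 12 degrees from the axes (normals in the horizontal plane)
walls = [(math.cos(math.radians(12 + k * 90)), math.sin(math.radians(12 + k * 90)), 0.0)
         for k in range(4) for _ in range(10)]
floor = [(0.0, 0.0, 1.0)] * 20  # horizontal surfaces, excluded by the threshold
print(rotation_angle(walls + floor))  # recovers the 12-degree misalignment
```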
In the above method, step S2 includes the following steps:
S21, determine the camera models used in the museum and select one camera of each model for calibration;
S22, call the video stream through the API provided by the camera manufacturer, place a Zhang Zhengyou calibration chessboard in front of each camera being calibrated at a chosen fixed position, and intercept the corresponding video frames from the stream;
S23, use the findChessboardCorners and calibrateCamera functions in the opencv library to perform camera intrinsic calibration and obtain the intrinsics K of each camera model.
In the above method, in step S3 the three-dimensional voxel coordinates corresponding to each pixel coordinate of the video stream are calculated; the relationship between the pixel coordinates P_i(u, v) and P_c is
z_c·(u, v, 1)^T = K·P_c (formula image PCTCN2022084962-appb-000002 in the original)
The camera coordinate system is established with the camera's optical center as the origin, the camera's forward direction as the z-axis, and the horizontal and vertical directions of the imaging plane as the x- and y-axes; K is the camera intrinsic matrix. In the camera coordinate system, the photographed point has coordinates P_c(x_c, y_c, z_c), where z_c is the distance from the photographed point to the camera's optical center. P_c and the point's coordinates P_w(x_w, y_w, z_w) in the BIM model coordinate system satisfy the spatial relationship P_c = T·P_w, where T is the camera extrinsic matrix, i.e. the rotation and translation [R|t] of the camera coordinate system relative to the BIM model coordinate system.
In the above method, step S4 includes the following steps:
S41, use the Mask R-CNN architecture in the computer vision processing library Detectron to detect the human-body key points in the video frames;
S42, build a dataset of video images, annotate the instance contours and human-body key points of the audience members in the dataset, and train on the pretrained models of the Detectron library;
S43, run detection at intervals, save the pixel positions of the two foot nodes among the human-body key-point results, access the pixel-to-voxel correspondence, determine the voxels where the audience's feet are located, and position the audience indoors.
In the above method, step S5 includes the following steps:
S51, add a branch judging whether a member is viewing an exhibit to the Mask R-CNN, add a corresponding viewing label to the dataset, and train this branch together with the human-body key point detection branch;
S52, when detecting a new video stream, output each audience member's foot-node pixels together with a judgment of whether the member is viewing an exhibit; members judged "not viewing an exhibit" are not counted among the viewers, while for members judged "viewing an exhibit", the voxels corresponding to the detected foot pixels are obtained through the pixel-to-voxel mapping;
S53, examine the voxel: if it is assigned to a particular exhibit's viewing area, count the detected member toward that exhibit's viewer count for the frame; if it is assigned to a particular exhibition area, count the member toward that area's viewer count for the frame. The visitor counts measured over the frames constitute the visit durations of the exhibition areas and exhibits; the duration data of the exhibition areas and exhibits are then normalized and attention statistics are computed.
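The counting logic of S52 and S53 can be sketched as follows; the frame format, exhibit ids, and frame rate are invented for the example, and the watching flag stands for the output of the viewing-judgment branch of S51.

```python
def visit_durations(frames, voxel_to_exhibit, fps=25):
    """Accumulate per-exhibit visit duration from per-frame detections.
    frames: list of frames; each frame is a list of (foot_voxel, watching)
    pairs for the detected viewers. Viewers labelled not-watching, or
    standing outside any exhibit viewing zone, are not counted (S52)."""
    frame_counts = {}
    for frame in frames:
        for foot_voxel, watching in frame:
            exhibit = voxel_to_exhibit.get(foot_voxel)
            if watching and exhibit is not None:
                frame_counts[exhibit] = frame_counts.get(exhibit, 0) + 1
    # person-frames -> person-seconds of viewing time per exhibit (S53)
    return {e: c / fps for e, c in frame_counts.items()}

# Viewing-zone voxels for two exhibits (illustrative ids)
zone = {(1, 1, 0): "A", (1, 2, 0): "A", (5, 5, 0): "B"}
frames = [
    [((1, 1, 0), True), ((5, 5, 0), True), ((9, 9, 0), True)],  # 3rd is in no zone
    [((1, 2, 0), True), ((5, 5, 0), False)],                    # 2nd just passing by
]
print(visit_durations(frames, zone, fps=1))  # {'A': 2.0, 'B': 1.0}
```

Normalizing the returned durations across all exhibits then gives the attention statistic of S53.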
In the above method, the method further includes step S7, in which the crowd density analysis and early warning module generates a ground heat voxel model from the audience positioning results, displays voxel colors according to density, and completes the crowd density analysis and early warning.
Step S7 includes the following steps:
S71, generate a ground heat voxel model from the ground voxels corresponding to the foot pixels;
S72, display voxel colors according to density through a three-dimensional visualization interface;
S73, set a density threshold; when the number of people within a voxel exceeds the threshold, a crowding alert pops up in the three-dimensional visualization interface, and clicking the alert moves the three-dimensional view to the voxels whose density exceeds the threshold.
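Steps S71 to S73 reduce to counting positioned viewers per ground voxel and flagging voxels above the alarm threshold. A minimal sketch (the positions and threshold are illustrative; the colour ramp and click-to-navigate view of S72 and S73 are left to the visualization layer):

```python
def density_alarm(foot_voxels, threshold):
    """Build a ground heat-voxel model from viewer positions (S71) and
    flag voxels whose occupant count exceeds the alarm threshold (S73).
    Returns (density per ground voxel, set of alarm voxels)."""
    density = {}
    for v in foot_voxels:
        density[v] = density.get(v, 0) + 1
    alarms = {v for v, n in density.items() if n > threshold}
    return density, alarms

positions = [(3, 4), (3, 4), (3, 4), (7, 2), (3, 4)]  # four viewers crowd (3, 4)
density, alarms = density_alarm(positions, threshold=3)
print(alarms)  # {(3, 4)}
```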
The present invention also provides a museum visit analysis system based on BIM and video surveillance, comprising
a BIM model construction module, a video stream acquisition and calibration module, a spatial registration module, a visitor detection and localization module, and an attention analysis module, wherein
the BIM model construction module performs laser point-cloud scanning of the museum interior, completes the museum BIM modeling, generates the voxel model, and records the camera pose fitting results in the BIM model;
the video stream acquisition and calibration module retrieves the video streams, captures the corresponding video frames, calibrates the camera intrinsics, and obtains the intrinsic matrix K of each camera model;
the spatial registration module, connected to both the BIM model construction module and the video stream acquisition and calibration module, computes from the voxel model, the camera poses, and the camera intrinsics K the 3D voxel coordinate corresponding to each pixel coordinate of the video stream, obtaining the pixel-to-voxel correspondence and completing the spatial registration of the surveillance video images with the BIM model;
the visitor detection and localization module, connected to the video stream acquisition and calibration module and the spatial registration module, detects human-body keypoints in the video frames, saves the pixel positions of the foot nodes from the keypoint results, looks up the pixel-to-voxel correspondence, determines the voxels under the visitors' feet, and localizes the visitors indoors;
the attention analysis module, connected to the visitor detection and localization module, obtains from the visitor localization results the viewing durations of all zones and exhibits, normalizes the duration data of the zones and exhibits, and computes the attention scores.
The beneficial effects of the present invention are as follows: a museum BIM model is built from the point cloud and the museum's existing exhibit and camera 3D models; the BIM model is spatially registered with the pixels of the surveillance video; the visitors' foot pixels are detected and mapped to 3D coordinates in the BIM model, localizing the visitors; based on the localization results, combined with the accessibility of the exhibits and zones, the numbers of visitors entering zones and viewing exhibits in a given period are counted, so as to analyze visitor attention to each exhibit and zone. Real-time crowd-density monitoring and warning is also achieved: museum staff can set a crowd-density alarm threshold, and once a voxel or voxel region maintains high crowd density for a certain duration, visitors can be appropriately guided to avoid crowding. Because the museum's existing surveillance video network is used, no new equipment needs to be installed and no additional equipment cost is incurred, achieving low-cost visitor-density analysis and exhibit/zone attention analysis with high practicality.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a flowchart of the museum visit analysis method based on BIM and video surveillance of the present invention.
Fig. 2 is a schematic diagram of the correspondence between the foot coordinates and the 3D coordinate system of the BIM model in the present invention.
Fig. 3 is a schematic diagram of the correspondence between pixel coordinates in the video image and voxel coordinates in the present invention.
DETAILED DESCRIPTION
The present invention is further described below with reference to the drawings and embodiments.
The concept, specific structure, and technical effects of the present invention are described clearly and completely below in conjunction with the embodiments and drawings, so that its objectives, features, and effects can be fully understood. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the scope of protection of the present invention. In addition, the coupling/connection relations mentioned in this patent do not refer solely to direct connection of components, but mean that a better coupling structure may be formed by adding or removing coupling accessories according to the specific implementation. The technical features of the present invention may be combined with one another provided they do not conflict.
Referring to Fig. 1, the museum visit analysis method based on BIM and video surveillance of the present invention comprises the following steps:
S1. The BIM model construction module performs laser point-cloud scanning of the museum interior, completes the museum BIM modeling, generates the voxel model, and records the camera pose fitting results in the BIM model;
Specifically, step S1 comprises the following steps:
S11. Scan the museum interior with a mobile LiDAR scanning device; to avoid localization drift over long trajectories, scan in segments and keep the horizontal area of each segment below 50 square meters, which also facilitates subsequent semantic segmentation;
S12. Perform 3D semantic segmentation of each segment's point cloud with the RandLA-Net algorithm, separating the different BIM model elements such as walls, floors, and steps; RandLA-Net is a neural network for large-scale point-cloud semantic segmentation based on random point sampling and local feature aggregation;
S13. Register the segment point clouds into a unified spatial coordinate frame via the Registration interface of Open3D; Open3D is an open-source 3D data algorithm library, and Registration is its point-cloud registration interface;
S14. To reduce precision loss in subsequent point-cloud and voxel processing, axis-align the museum's global point cloud. The axis-alignment operation rotates the point cloud's coordinate system about the z-axis (the vertical direction) so that the vast majority of vertical structures (walls, etc.) become parallel to the x- and y-axes of the new coordinate system. The key to axis alignment is computing the rotation angle, which proceeds as follows: S141. Call the EstimateNormal function (the normal-estimation function) of the Open3D library to compute normal vectors for all points in the cloud, and normalize them so that each normal has unit length; S142. Compute the direction and length of each normal's horizontal projection; if the length exceeds the threshold 0.5, the point is judged to belong to a vertical structure and is retained for the rotation-angle computation; if it is below 0.5, the point is discarded; S143. Set up the optimization objective:
min_θ (1/N) Σ_{i=1}^{N} Δθ_i²
where Δθ_i is the difference between the fitted angle θ and the horizontal projection angle of point i's normal, and N is the number of points participating in the rotation-angle computation; S144. Solve with a derivative-free optimization method, using the nlopt library (a nonlinear derivative-free optimization library), to obtain the rotation angle.
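The axis-alignment steps S141–S144 can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation: Open3D's normal estimation is replaced by precomputed normals, the nlopt solver is replaced by a grid search, and residuals are taken modulo 90° so that walls facing either axis count as aligned (these substitutions are assumptions for a self-contained example).

```python
import numpy as np

def axis_align_angle(normals, vertical_thresh=0.5, n_grid=9000):
    """Estimate the z-rotation that aligns wall normals with the x/y axes.

    normals: (N, 3) array of unit normals (step S141 output). Points whose
    horizontal projection is shorter than `vertical_thresh` are discarded
    (step S142); the remaining projection angles are fitted by minimizing
    the mean squared angular residual (step S143), here via a brute-force
    grid search instead of nlopt (step S144).
    """
    horiz = normals[:, :2]
    lengths = np.linalg.norm(horiz, axis=1)
    keep = lengths > vertical_thresh                     # S142: keep vertical structures
    angles = np.arctan2(horiz[keep, 1], horiz[keep, 0])  # horizontal projection angles

    best_theta, best_cost = 0.0, np.inf
    for theta in np.linspace(-np.pi / 4, np.pi / 4, n_grid):
        # residual of each normal to the nearest axis direction, modulo 90 degrees
        d = (angles - theta + np.pi / 4) % (np.pi / 2) - np.pi / 4
        cost = np.mean(d ** 2)                           # S143 objective
        if cost < best_cost:
            best_theta, best_cost = theta, cost
    return best_theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true = np.deg2rad(12.0)
    # synthetic wall normals rotated by `true`, plus floor normals that S142 rejects
    base = rng.choice([0.0, np.pi / 2, np.pi, -np.pi / 2], size=400)
    a = base + true + rng.normal(0, 0.01, size=400)
    walls = np.stack([np.cos(a), np.sin(a), np.zeros_like(a)], axis=1)
    floors = np.tile([0.0, 0.0, 1.0], (100, 1))
    print(np.rad2deg(axis_align_angle(np.vstack([walls, floors]))))
```

In the synthetic check, floor normals have zero horizontal projection and are filtered out by the 0.5 threshold, so only the rotated wall normals drive the fit.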
S15. After the axis alignment, treat the museum's existing digital exhibit models as 3D templates and perform template matching and 3D pose fitting in the point cloud: slide a 3D window over the point cloud and, in each window of matching size, compute the mean point error of the model at each pose (angle and position); if the mean point error falls below a threshold, the digital exhibit model's pose in the point cloud is considered determined. Once the pose is determined, generate the digital exhibit's 3D point cloud and replace the scanned exhibit point cloud with it;
S16. Based on the fitted exhibit 3D models and poses, create voxel models of the exhibits in the museum BIM model coordinate system. After the voxel model is generated, museum staff mark the ground voxels of each exhibition zone in voxel-model interaction software. In this embodiment, the voxel model is stored in three parts: (1) standalone voxel model: the file header records the voxel model's origin coordinates and voxel edge length, and each voxel is recorded as (vid, x, y, z, tid, pid, rid), where vid is the voxel id; x, y, and z are the voxel's 3D coordinates, all integers; tid denotes the voxel type (wall: 0, ground: 1, step: 2, exhibit: 3); pid, for an exhibit voxel, is the exhibit's id in the museum's digital exhibit information system; and rid, for a ground voxel belonging to a zone, is the id of that zone. The standalone voxel file is stored as a text file and may be compressed to binary as needed; (2) digital-exhibit-linked storage: in the museum's existing digital exhibit information system, add a voxel field and record in it, as a set, the voxel vids corresponding to each exhibit; (3) zone-linked storage: in the museum's existing operations management database, add a zone table (or extend the existing zone table) with a voxel field recording, as a set, the voxel vids corresponding to each zone;
S17. Build 3D models, at absolute scale, of the camera models used in the museum; using each camera 3D model as a template, perform template matching and 3D pose fitting in the point cloud, similarly to the template matching of the digital exhibit models, to compute the camera's 3D position and rotation angle in the BIM coordinate system, i.e. the camera extrinsics T (hereinafter the camera extrinsics), where the 3D position may be expressed as a 3D vector t and the 3D rotation as a 3×3 matrix R, the two being combined into the camera extrinsic matrix T = [R|t]; record the camera pose fitting results (extrinsics and camera model) in the BIM model.
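The (vid, x, y, z, tid, pid, rid) record layout of the standalone voxel file described above can be parsed as sketched below. The patent does not specify delimiters or how empty pid/rid fields are written; this sketch assumes comma-separated records with `-` marking an absent field.

```python
from dataclasses import dataclass
from typing import Optional

TYPE_NAMES = {0: "wall", 1: "ground", 2: "step", 3: "exhibit"}

@dataclass
class Voxel:
    vid: int            # voxel id
    x: int              # integer voxel coordinates in the BIM grid
    y: int
    z: int
    tid: int            # type: 0 wall, 1 ground, 2 step, 3 exhibit
    pid: Optional[int]  # exhibit id, only for exhibit voxels
    rid: Optional[int]  # zone id, only for ground voxels inside a zone

def parse_voxel_line(line: str) -> Voxel:
    """Parse one '(vid, x, y, z, tid, pid, rid)' record; fields written as
    '-' (an assumed convention for absent pid/rid) become None."""
    fields = line.strip().strip("()").replace(",", " ").split()
    vid, x, y, z, tid = (int(v) for v in fields[:5])
    pid = int(fields[5]) if fields[5] != "-" else None
    rid = int(fields[6]) if fields[6] != "-" else None
    return Voxel(vid, x, y, z, tid, pid, rid)

v = parse_voxel_line("(17, 4, 9, 0, 1, -, 2)")
print(TYPE_NAMES[v.tid], v.rid)  # a ground voxel belonging to zone 2
```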
S2. The video stream acquisition and calibration module retrieves the video streams, captures the corresponding video frames, and calibrates the camera intrinsic parameters, which include the focal length, the principal-point offset of the image plane, and the distortion; the results are assembled into the camera intrinsic matrix K;
Specifically, step S2 comprises the following steps:
S21. Identify the camera models used in the museum and select one camera of each model for calibration;
S22. Retrieve the video stream through the API (video-stream interface) provided by the camera vendor, place a Zhang Zhengyou calibration checkerboard in front of each camera being calibrated (i.e. this embodiment uses Zhang's calibration method) at several fixed positions within the camera's view, and capture the corresponding video frames from the stream;
S23. Calibrate the camera intrinsics with the findChessboardCorners and calibrateCamera functions of the opencv library to obtain the intrinsic matrix K of each camera model; K is recorded in the BIM model to support subsequent real-time and batch computation. The opencv library is an open-source computer vision library; findChessboardCorners detects checkerboard corners, and calibrateCamera estimates the camera parameters.
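Step S23 can be sketched with the two OpenCV functions the embodiment names. The board geometry below (9×6 inner corners, 25 mm squares) is an assumed example, not taken from the patent; the pure-NumPy helper builds the planar object points that `cv2.calibrateCamera` expects, and OpenCV is imported lazily so the helper works without it.

```python
import numpy as np

def chessboard_object_points(cols: int, rows: int, square_mm: float) -> np.ndarray:
    """World coordinates of the inner chessboard corners on the z = 0 plane,
    in the (cols*rows, 3) float32 layout cv2.calibrateCamera expects."""
    objp = np.zeros((cols * rows, 3), np.float32)
    objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square_mm
    return objp

def calibrate_from_frames(frames, cols=9, rows=6, square_mm=25.0):
    """Sketch of steps S22/S23: detect checkerboard corners in each captured
    frame, then solve for the intrinsic matrix K and distortion coefficients."""
    import cv2  # imported here so the NumPy helper above is usable without OpenCV
    objp = chessboard_object_points(cols, rows, square_mm)
    obj_pts, img_pts, size = [], [], None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        ok, corners = cv2.findChessboardCorners(gray, (cols, rows))
        if ok:
            obj_pts.append(objp)
            img_pts.append(corners)
    # returns RMS reprojection error, intrinsics K, distortion, per-view extrinsics
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist
```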
S3. The spatial registration module computes, from the voxel model, the camera poses, and the camera intrinsics K, the 3D voxel coordinate corresponding to each pixel coordinate of the video stream, obtaining the pixel-to-voxel correspondence and completing the spatial registration of the surveillance video images with the BIM model;
Specifically, in step S3, the 3D voxel coordinate corresponding to each pixel coordinate of the video stream is computed as follows. Referring to Fig. 2, the relation between pixel coordinate P_i(u, v) and P_c is:
z_c · [u, v, 1]^T = K · P_c
The camera coordinate system takes the camera's optical center as origin, the forward direction of the camera as the z-axis, and the horizontal and vertical directions of the image plane as the x- and y-axes; K is the camera intrinsic matrix. In the camera coordinate system, the imaged point has coordinates P_c(x_c, y_c, z_c), where z_c is the distance from the imaged point to the optical center. P_c and the point's coordinates P_w(x_w, y_w, z_w) in the BIM model coordinate system satisfy P_c = T·P_w, where T is the camera extrinsic matrix, i.e. the rotation and translation [R|t] of the camera coordinate system relative to the BIM model coordinate system. With only a monocular camera, z_c cannot be determined directly. This scheme leverages the established 3D voxel model: for each pixel, it searches over candidate z_c values for the nearest voxel found in the BIM model and takes that voxel as the pixel's 3D spatial position, thereby registering the pixel to the BIM model. Further, the voxel coordinates successfully solved for each pixel are recorded in the camera's attributes, supporting the indoor visitor localization performed by the visitor detection and localization module.
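The monocular depth search described above can be sketched as follows: for a pixel (u, v), candidate depths z_c are stepped along the viewing ray, each candidate point is transformed from camera to BIM coordinates by inverting the extrinsics, and the first candidate that lands in an occupied voxel is returned. A minimal NumPy sketch; the voxel size, depth range, and step are assumed parameters.

```python
import numpy as np

def pixel_to_voxel(u, v, K, R, t, occupied, voxel_size=0.1,
                   z_min=0.5, z_max=30.0, z_step=0.05):
    """Register pixel (u, v) to the voxel model.

    K: 3x3 intrinsics; R, t: extrinsics mapping BIM coords to camera coords
    (P_c = R @ P_w + t); `occupied` is a set of integer voxel indices.
    Returns the first occupied voxel hit along the viewing ray, or None.
    """
    ray_c = np.linalg.solve(K, np.array([u, v, 1.0]))  # ray direction with z_c = 1
    for z_c in np.arange(z_min, z_max, z_step):
        p_c = ray_c * z_c              # candidate point in camera coordinates
        p_w = R.T @ (p_c - t)          # invert P_c = R P_w + t
        idx = tuple(np.floor(p_w / voxel_size).astype(int))
        if idx in occupied:
            return idx
    return None

# toy check: camera at the BIM origin looking down +z (R = I, t = 0),
# a single occupied voxel 5 m straight ahead of the principal point
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
print(pixel_to_voxel(320, 240, K, np.eye(3), np.zeros(3), {(0, 0, 50)}))
```

In practice this search would be run once per pixel per camera, and the resulting pixel→voxel table cached in the camera's BIM attributes as the patent describes.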
S4. The visitor detection and localization module detects human-body keypoints in the video frames, saves the pixel positions of the foot nodes from the keypoint results, looks up the pixel-to-voxel correspondence, determines the voxels under the visitors' feet, and localizes the visitors indoors;
Specifically, step S4 comprises the following steps:
S41. Detect human-body keypoints in the video frames using the Mask R-CNN architecture of the computer vision library Detectron; this architecture handles occlusion well and can estimate occluded keypoint positions effectively even when visitors occlude one another in a frame. Mask R-CNN is an instance-segmentation algorithm based on masks and convolutional neural networks;
S42. So that the Mask R-CNN model still achieves good detection results from the museum cameras' viewpoints, build a dataset of 500 video frames, annotate the visitors in the dataset with instance contours and human-body keypoints, and train on top of the pretrained models of the Detectron library;
S43. Run detection once every 5 s, save the pixel positions of the two foot nodes from the keypoint results, look up the pixel-to-voxel correspondence, determine the voxels under the visitor's feet, and localize the visitor indoors.
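With the per-camera pixel→voxel table from step S3 precomputed, the localization in step S43 reduces to a lookup. A sketch under assumed data shapes: `pixel_to_voxel_table` is a hypothetical dict keyed by (u, v), and each detection is a dict holding the two foot-node pixels.

```python
def locate_visitors(detections, pixel_to_voxel_table):
    """detections: per-visitor keypoint results, each holding the pixel
    positions of the left and right foot nodes (step S43 output).
    Returns one ground-voxel index per visitor, or None when neither foot
    pixel was successfully registered in step S3."""
    positions = []
    for det in detections:
        voxels = [pixel_to_voxel_table.get(tuple(foot))
                  for foot in (det["left_foot"], det["right_foot"])]
        voxels = [v for v in voxels if v is not None]
        # use either foot's voxel; prefer the left foot when both resolved
        positions.append(voxels[0] if voxels else None)
    return positions

table = {(100, 400): (3, 7, 0), (104, 402): (3, 8, 0)}
dets = [{"left_foot": (100, 400), "right_foot": (104, 402)},
        {"left_foot": (9, 9), "right_foot": (8, 8)}]
print(locate_visitors(dets, table))  # [(3, 7, 0), None]
```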
S5. The attention analysis module obtains from the visitor localization results the viewing durations of all zones and exhibits, normalizes the duration data of the zones and exhibits, and computes the attention scores;
Specifically, step S5 comprises the following steps:
S51. Add to the three existing branches of the Mask R-CNN an is-viewing-exhibit branch structured like the classification branch; apart from outputting a scalar rather than a vector, its structure is identical to the original classification branch.
To train the is-viewing branch jointly with the visitor detection and localization module, add is-viewing annotations to that module's dataset and train the branch together with its human-body keypoint detection branch;
S52. After the model is trained, when detecting a new video stream, output the visitor's foot-node pixels while judging whether the visitor is viewing an exhibit; if judged "not viewing", the visitor is not counted among exhibit viewers; if judged "viewing", obtain the voxel corresponding to the detected foot pixels through the pixel-to-voxel mapping;
S53. Referring to Fig. 3, classify the voxel: if it is designated as the viewing area of a particular exhibit, count the detected visitor toward that exhibit's viewer count for the frame; if it is designated as a particular exhibition zone, count the visitor toward that zone's viewer count for the frame. The viewing duration of an exhibit or zone over a period is accumulated from the per-frame visitor counts of that exhibit or zone; once the viewing durations of all zones and exhibits are obtained, normalize the duration data of the zones and of the exhibits separately, so that the final attention scores can be computed.
A visitor standing in an exhibit's viewing area does not necessarily mean the visitor is viewing that exhibit, so visitor behavior must also be recognized; a supervised learning method can therefore be used to judge whether a visitor dwelling in a zone is actually viewing the corresponding exhibit. Further, ground-truth data must be labeled for this supervised learning of viewing behavior; further, a deep-learning-based image processing method is adopted, fine-tuning the parameters of a state-of-the-art classification convolutional neural network with the labeled ground-truth data.
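The per-frame counting and normalization of step S53 can be sketched as follows. Frames are assumed to be sampled at the fixed 5 s interval of step S43, so each counted appearance contributes one interval to the viewing duration; min-max normalization is an assumed choice, since the patent only says the durations are normalized.

```python
from collections import Counter

FRAME_INTERVAL_S = 5  # detection interval from step S43

def attention_scores(frame_observations):
    """frame_observations: iterable of frames, each a list of
    (target_id, is_viewing) pairs, where target_id is the exhibit or zone
    whose voxel area the visitor's feet fell in.
    Returns viewing durations in seconds and min-max-normalized attention."""
    seconds = Counter()
    for frame in frame_observations:
        for target_id, is_viewing in frame:
            if is_viewing:                       # S52: "not viewing" is not counted
                seconds[target_id] += FRAME_INTERVAL_S
    lo, hi = min(seconds.values()), max(seconds.values())
    span = (hi - lo) or 1
    normalized = {t: (s - lo) / span for t, s in seconds.items()}
    return dict(seconds), normalized

frames = [[("exhibit_A", True), ("exhibit_B", True)],
          [("exhibit_A", True), ("exhibit_B", False)],
          [("exhibit_A", True)]]
durations, attention = attention_scores(frames)
print(durations)   # {'exhibit_A': 15, 'exhibit_B': 5}
print(attention)   # {'exhibit_A': 1.0, 'exhibit_B': 0.0}
```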
Further, the method comprises step S6: the attention analysis module performs accessibility analysis of the zones and exhibits.
Specifically, step S6 comprises the following steps:
S61. Determine five indicators: (1) zone path length: the shortest path from the museum entrances to the zone's ground-voxel region; (2) exhibit path length: the shortest path from the museum entrances to the exhibit's center point; (3) zone size: the number of ground voxels belonging to the zone; (4) exhibit size: the volume of the bounding box of the exhibit's voxels; (5) exhibit centrality: the reciprocal of the shortest distance (path) from the exhibit to the wall voxels, so that the farther an exhibit is from the walls, the stronger its centrality;
S62. Compute the five indicators of step S61 using the A* algorithm;
S63. Normalize the five indicators to obtain the accessibility indices of the zones and exhibits:
zone accessibility = (1 / zone path length) × zone size
exhibit accessibility = (1 / exhibit path length) × exhibit size × exhibit centrality.
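The shortest-path indicators of S61–S62 can be computed with a standard A* search over the ground-voxel grid. A minimal 2D sketch with 4-connectivity and a Manhattan heuristic; the connectivity and heuristic are assumptions, since the patent only names the A* algorithm.

```python
import heapq

def astar_path_length(passable, start, goal):
    """A* over a set of passable (x, y) ground-voxel cells; returns the
    number of steps from start to goal, or None if unreachable."""
    def h(c):  # Manhattan distance, admissible on a 4-connected grid
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, cur = heapq.heappop(open_heap)
        if cur == goal:
            return g
        if g > best_g.get(cur, float("inf")):
            continue  # stale heap entry
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in passable and g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return None

# 3x3 open floor with the center cell blocked (e.g. an exhibit plinth)
floor = {(x, y) for x in range(3) for y in range(3)} - {(1, 1)}
print(astar_path_length(floor, (0, 0), (2, 2)))  # 4
```

The resulting path lengths feed directly into the accessibility formulas above (1 / path length, times size and, for exhibits, centrality).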
From the visitor detection and localization results, the numbers of visitors recorded in the voxel regions of the zones and exhibits at different timestamps are logged, and whether a visitor dwelling in front of an exhibit is actually viewing it is judged. In addition, the existing spatial layout of exhibits and zones in a museum yields different spatial accessibility, and differences in accessibility strongly affect how likely exhibits and zones are to be visited. The scheme of the present invention therefore analyzes attention comprehensively by combining visitor viewing time with exhibit/zone accessibility, providing museum staff with a relatively accurate curatorial reference. Staff can analyze the viewing durations and accessibility of the zones and exhibits separately, or compute a "net attention" for exhibits and zones, i.e. the normalized viewing duration divided by accessibility, to see each exhibit's and zone's attention with the influence of accessibility removed. This helps museum managers discover zones or exhibits that are highly accessible yet draw a lukewarm visitor response, or exhibits that are not highly accessible yet still attract visitors.
Further, the method comprises step S7: the crowd-density analysis and warning module generates a ground heat voxel model from the visitor localization results, colors the voxels by density, and performs crowd-density analysis and warning;
Specifically, step S7 comprises the following steps:
S71. Generate the ground heat voxel model from the ground voxels corresponding to the foot pixels; when a foot lands in a voxel, that voxel's person count for the video frame is incremented by 1;
S72. Provide managers with a 3D visualization interface that displays the voxel colors by density;
S73. Set a density threshold; when the number of people in a voxel exceeds the set threshold, a crowding alert pops up in the 3D visualization interface, and clicking the alert navigates the 3D view to the voxel whose density exceeds the threshold, from which staff can judge whether to guide the visitors at that location along other routes.
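Steps S71–S73 amount to thresholding per-voxel foot counts; a minimal sketch, where the threshold value and the returned alert structure are illustrative assumptions rather than values from the patent.

```python
from collections import Counter

def density_alerts(foot_voxels, threshold=4):
    """foot_voxels: ground-voxel index for each detected foot in the current
    frame (step S71: a foot landing in a voxel increments its count).
    Returns (per-voxel counts, voxels whose count exceeds the threshold);
    a UI would pop up a crowding alert and focus the 3D view on each
    returned voxel (step S73)."""
    counts = Counter(foot_voxels)
    crowded = [v for v, n in counts.items() if n > threshold]
    return counts, crowded

frame_feet = [(2, 5, 0)] * 9 + [(7, 1, 0)] * 2   # nine feet crowd one voxel
counts, crowded = density_alerts(frame_feet, threshold=4)
print(crowded)  # [(2, 5, 0)]
```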
The present invention also provides a museum visit analysis system based on BIM and video surveillance, comprising a BIM model construction module, a video stream acquisition and calibration module, a spatial registration module, a visitor detection and localization module, and an attention analysis module, wherein
the BIM model construction module performs laser point-cloud scanning of the museum interior, completes the museum BIM modeling, generates the voxel model, and records the camera pose fitting results in the BIM model. Further, since some museums already have ready-made 3D exhibit models, this scheme can match the pilot museum's existing digital 3D exhibit models in the LiDAR point cloud and fit their 3D spatial positions, determining each exhibit's 3D pose in the museum, i.e. its position coordinates and angle; this data, together with the corresponding exhibit model id, is stored in the BIM model. Further, this scheme can create each exhibit's voxel model in the museum BIM model coordinate system from the fitted exhibit 3D models and poses. Further, since zone layouts are generally flexible and changeable, staff or modelers mark the zones by lasso selection on the ground voxel model; see the ground voxels delineated for zone 1 in Fig. 3. Further, a viewing-distance threshold is set, and the viewing-area voxels of each exhibit are delineated among the ground voxels; see the ground-voxel delineation of exhibits A-D in Fig. 3;
the video stream acquisition and calibration module retrieves the video streams, captures the corresponding video frames, calibrates the camera intrinsics, and obtains the intrinsic matrix K of each camera model;
the spatial registration module, connected to both the BIM model construction module and the video stream acquisition and calibration module, computes from the voxel model, the camera poses, and the camera intrinsics K the 3D voxel coordinate corresponding to each pixel coordinate of the video stream, obtaining the pixel-to-voxel correspondence and completing the spatial registration of the surveillance video images with the BIM model;
the visitor detection and localization module, connected to the video stream acquisition and calibration module and the spatial registration module, detects human-body keypoints in the video frames, saves the pixel positions of the foot nodes from the keypoint results, looks up the pixel-to-voxel correspondence, determines the voxels under the visitors' feet, and localizes the visitors indoors;
the attention analysis module, connected to the visitor detection and localization module, obtains from the visitor localization results the viewing durations of all zones and exhibits, normalizes the duration data of the zones and exhibits, and computes the attention scores.
The present invention builds a museum BIM model from the point cloud and the museum's existing exhibit and camera 3D models; spatially registers the BIM model with the pixels of the surveillance video; detects the visitors' foot pixels and maps the measured foot-pixel coordinates to the 3D spatial coordinates of the BIM model, localizing the visitors; and, based on the localization results combined with the accessibility of the exhibits and zones, counts the numbers of visitors entering zones and viewing exhibits in a given period, so as to analyze visitor attention to each exhibit and zone. Real-time crowd-density monitoring and warning is also achieved: museum staff can set a crowd-density alarm threshold, and once a voxel or voxel region maintains high crowd density for a certain duration, visitors can be appropriately guided to avoid crowding. Because the museum's existing surveillance video network is used, no new equipment needs to be installed and no additional equipment cost is incurred, achieving low-cost visitor-density analysis and exhibit/zone attention analysis with high practicality.
The preferred embodiments of the present invention have been described in detail above, but the invention is not limited to these embodiments; those skilled in the art may make various equivalent modifications or substitutions without departing from the spirit of the invention, and such equivalent modifications or substitutions all fall within the scope defined by the claims of this application.

Claims (10)

  1. A museum visit analysis method based on BIM and video surveillance, characterized by comprising the following steps:
    S1. a BIM model construction module performs laser point-cloud scanning of the museum interior, completes the museum BIM modeling, generates a voxel model, and records camera pose fitting results in the BIM model;
    S2. a video stream acquisition and calibration module retrieves video streams, captures corresponding video frames, calibrates the camera intrinsic parameters, and assembles the results into a camera intrinsic matrix K;
    S3. a spatial registration module computes, from the voxel model, the camera poses, and the camera intrinsics K, the 3D voxel coordinate corresponding to each pixel coordinate of the video stream, obtaining the pixel-to-voxel correspondence and completing the spatial registration of the surveillance video images with the BIM model;
    S4. a visitor detection and localization module detects human-body keypoints in the video frames, saves the pixel positions of the foot nodes from the keypoint results, looks up the pixel-to-voxel correspondence, determines the voxels under the visitors' feet, and localizes the visitors indoors;
    S5. an attention analysis module obtains from the visitor localization results the viewing durations of all exhibition zones and exhibits, normalizes the duration data of the zones and exhibits, and computes attention scores.
  2. The museum visit analysis method based on BIM and video surveillance of claim 1, characterized in that the method further comprises step S6: the attention analysis module performs accessibility analysis of the zones and exhibits, step S6 comprising the following steps: S61. computing the shortest path from the museum entrances to a zone's ground-voxel region; computing the shortest path from the museum entrances to an exhibit's center point; computing the number of ground voxels belonging to a zone; computing the volume of the bounding box of an exhibit's voxels; computing the reciprocal of the shortest distance from an exhibit to the wall voxels; S62. computing the five indicators of step S61 using the A* algorithm; S63. normalizing the five indicators to obtain the accessibility indices of the zones and exhibits:
    zone accessibility = (1 / zone path length) × zone size
    exhibit accessibility = (1 / exhibit path length) × exhibit size × exhibit centrality.
  3. The museum visit analysis method based on BIM and video surveillance of claim 1, characterized in that step S1 comprises the following steps:
    S11. scanning the museum interior in segments with a mobile LiDAR scanning device;
    S12. performing 3D semantic segmentation of each segment's point cloud with the RandLA-Net algorithm to separate the different BIM model elements;
    S13. registering the segment point clouds into a unified spatial coordinate frame via the Registration interface of Open3D;
    S14. axis-aligning the global point cloud;
    S15. using the museum's existing digital exhibit models as 3D templates, performing template matching and 3D pose fitting in the point cloud to determine the digital exhibit models' poses in the point cloud, generating the digital exhibits' 3D point clouds, and replacing the scanned exhibit point clouds with the digital exhibit point clouds;
    S16. creating, from the fitted exhibit 3D models and poses, voxel models of the exhibits in the museum BIM model coordinate system;
    S17. building 3D models of the camera models used in the museum; using each camera 3D model as a template, performing template matching and 3D pose fitting in the point cloud to compute the camera's 3D position and rotation angle in the BIM coordinate system, i.e. the camera extrinsics T, the 3D position being expressed as a 3D vector t and the 3D rotation as a 3×3 matrix R, the two being combined into the camera extrinsic matrix T = [R|t]; and recording the camera pose fitting results in the BIM model.
  4. The museum visit analysis method based on BIM and video surveillance of claim 3, characterized in that the axis-alignment operation in step S14 rotates the point cloud's coordinate system about the z-axis, the rotation angle being computed as follows:
    S141. calling the EstimateNormal function of the Open3D library to compute normal vectors for all points in the cloud, and normalizing them so that each normal has unit length;
    S142. computing the direction and length of each normal's horizontal projection; if the length exceeds the threshold 0.5, the point is judged to belong to a vertical structure and is retained for the rotation-angle computation; if it is below 0.5, the point is discarded and does not participate in the computation;
    S143. setting up the optimization objective:
    min_θ (1/N) Σ_{i=1}^{N} Δθ_i²
    where Δθ_i is the difference between the fitted angle θ and the horizontal projection angle of point i's normal, and N is the number of points participating in the rotation-angle computation;
    S144. solving with a derivative-free optimization method, using the nlopt library, to obtain the rotation angle.
  5. The museum visit analysis method based on BIM and video surveillance of claim 4, characterized in that step S2 comprises the following steps:
    S21. identifying the camera models used in the museum and selecting one camera of each model for calibration;
    S22. retrieving the video stream through the API provided by the camera vendor, placing a Zhang Zhengyou calibration checkerboard in front of each camera being calibrated at selected fixed positions within the camera's view, and capturing the corresponding video frames from the stream;
    S23. calibrating the camera intrinsics with the findChessboardCorners and calibrateCamera functions of the opencv library to obtain the intrinsic matrix K of each camera model.
  6. The museum visit analysis method based on BIM and video surveillance of claim 5, characterized in that step S3 computes the 3D voxel coordinate corresponding to each pixel coordinate of the video stream, the relation between pixel coordinate P_i(u, v) and P_c being:
    z_c · [u, v, 1]^T = K · P_c
    wherein the camera coordinate system takes the camera's optical center as origin, the forward direction of the camera as the z-axis, and the horizontal and vertical directions of the image plane as the x- and y-axes, and K is the camera intrinsic matrix; in the camera coordinate system, the imaged point has coordinates P_c(x_c, y_c, z_c), z_c being the distance from the imaged point to the optical center, and P_c and the point's coordinates P_w(x_w, y_w, z_w) in the BIM model coordinate system satisfy P_c = T·P_w, where T is the camera extrinsic matrix, i.e. the rotation and translation [R|t] of the camera coordinate system relative to the BIM model coordinate system.
  7. The museum visit analysis method based on BIM and video surveillance of claim 6, characterized in that step S4 comprises the following steps:
    S41. detecting human-body keypoints in the video frames using the Mask R-CNN of the computer vision library Detectron;
    S42. building a dataset of video images, annotating the visitors in the dataset with instance contours and human-body keypoints, and training on top of the pretrained models of the Detectron library;
    S43. running detection at intervals, saving the pixel positions of the two foot nodes from the keypoint results, looking up the pixel-to-voxel correspondence, determining the voxels under the visitor's feet, and localizing the visitor indoors.
  8. The museum visit analysis method based on BIM and video surveillance of claim 7, characterized in that step S5 comprises the following steps:
    S51. adding an is-viewing-exhibit branch to the Mask R-CNN, adding is-viewing annotations to the dataset, and training the branch jointly with the human-body keypoint detection branch;
    S52. when detecting a new video stream, outputting the visitor's foot-node pixels while judging whether the visitor is viewing an exhibit; if judged "not viewing", the visitor is not counted among exhibit viewers; if judged "viewing", obtaining the voxel corresponding to the detected foot pixels through the pixel-to-voxel mapping;
    S53. classifying the voxel: if it is designated as the viewing area of a particular exhibit, counting the detected visitor toward that exhibit's viewer count for the frame; if it is designated as a particular exhibition zone, counting the detected visitor toward that zone's viewer count for the frame; the visitor counts measured across frames yield the viewing durations of the zones and exhibits, and the duration data of the zones and exhibits is normalized to compute the attention scores.
  9. The museum visit analysis method based on BIM and video surveillance of claim 8, characterized in that the method further comprises step S7: a crowd-density analysis and warning module generates a ground heat voxel model from the visitor localization results, colors the voxels by density, and performs crowd-density analysis and warning;
    step S7 comprising the following steps:
    S71. generating the ground heat voxel model from the ground voxels corresponding to the foot pixels;
    S72. displaying voxel colors by density in a 3D visualization interface;
    S73. setting a density threshold; when the number of people in a voxel exceeds the set threshold, a crowding alert pops up in the 3D visualization interface, and clicking the alert navigates the 3D view to the voxel whose density exceeds the threshold.
  10. A museum visit analysis system based on BIM and video surveillance, characterized by comprising a BIM model construction module, a video stream acquisition and calibration module, a spatial registration module, a visitor detection and localization module, and an attention analysis module, wherein
    the BIM model construction module performs laser point-cloud scanning of the museum interior, completes the museum BIM modeling, generates the voxel model, and records the camera pose fitting results in the BIM model;
    the video stream acquisition and calibration module retrieves the video streams, captures the corresponding video frames, calibrates the camera intrinsics, and obtains the intrinsic matrix K of each camera model;
    the spatial registration module, connected to both the BIM model construction module and the video stream acquisition and calibration module, computes from the voxel model, the camera poses, and the camera intrinsics K the 3D voxel coordinate corresponding to each pixel coordinate of the video stream, obtaining the pixel-to-voxel correspondence and completing the spatial registration of the surveillance video images with the BIM model;
    the visitor detection and localization module, connected to the video stream acquisition and calibration module and the spatial registration module, detects human-body keypoints in the video frames, saves the pixel positions of the foot nodes from the keypoint results, looks up the pixel-to-voxel correspondence, determines the voxels under the visitors' feet, and localizes the visitors indoors;
    the attention analysis module, connected to the visitor detection and localization module, obtains from the visitor localization results the viewing durations of all zones and exhibits, normalizes the duration data of the zones and exhibits, and computes the attention scores.
PCT/CN2022/084962 2022-03-24 2022-04-02 Museum visit analysis method and system based on BIM and video surveillance WO2023178729A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210302636.6 2022-03-24
CN202210302636.6A CN114820924A (zh) 2022-03-24 2022-03-24 Museum visit analysis method and system based on BIM and video surveillance

Publications (1)

Publication Number Publication Date
WO2023178729A1 true WO2023178729A1 (zh) 2023-09-28

Family

ID=82530871

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/084962 WO2023178729A1 (zh) 2022-03-24 2022-04-02 Museum visit analysis method and system based on BIM and video surveillance

Country Status (2)

Country Link
CN (1) CN114820924A (zh)
WO (1) WO2023178729A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115362A (zh) * 2023-10-20 2023-11-24 成都量芯集成科技有限公司 3D reconstruction method for indoor structured scenes
CN117596367A (zh) * 2024-01-19 2024-02-23 安徽协创物联网技术有限公司 Low-power video surveillance camera and control method therefor

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN115901621A (zh) * 2022-10-26 2023-04-04 中铁二十局集团第六工程有限公司 Digital identification method and system for concrete defects on the exterior surfaces of high-rise buildings

Citations (5)

Publication number Priority date Publication date Assignee Title
CN111192321A (zh) * 2019-12-31 2020-05-22 武汉市城建工程有限公司 Target object 3D localization method and apparatus
CN111967443A (zh) * 2020-09-04 2020-11-20 邵传宏 Region-of-interest analysis method for archives based on image processing and BIM
CN113538373A (zh) * 2021-07-14 2021-10-22 中国交通信息科技集团有限公司 Automatic construction-progress detection method based on 3D point clouds
US20220005332A1 (en) * 2018-10-29 2022-01-06 Hexagon Technology Center Gmbh Facility surveillance systems and methods
CN114137564A (zh) * 2021-11-30 2022-03-04 建科公共设施运营管理有限公司 Automatic indoor-object identification and localization method and apparatus

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN110703665A (zh) * 2019-11-06 2020-01-17 青岛滨海学院 Museum indoor explanation robot and working method
WO2022040970A1 (zh) * 2020-08-26 2022-03-03 南京翱翔信息物理融合创新研究院有限公司 Method, system and apparatus for synchronously achieving 3D reconstruction and AR virtual-real registration
CN112085534B (zh) * 2020-09-11 2023-01-06 中德(珠海)人工智能研究院有限公司 Attention analysis method, system and storage medium

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US20220005332A1 (en) * 2018-10-29 2022-01-06 Hexagon Technology Center Gmbh Facility surveillance systems and methods
CN111192321A (zh) * 2019-12-31 2020-05-22 武汉市城建工程有限公司 Target object 3D localization method and apparatus
CN111967443A (zh) * 2020-09-04 2020-11-20 邵传宏 Region-of-interest analysis method for archives based on image processing and BIM
CN113538373A (zh) * 2021-07-14 2021-10-22 中国交通信息科技集团有限公司 Automatic construction-progress detection method based on 3D point clouds
CN114137564A (zh) * 2021-11-30 2022-03-04 建科公共设施运营管理有限公司 Automatic indoor-object identification and localization method and apparatus

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN117115362A (zh) * 2023-10-20 2023-11-24 成都量芯集成科技有限公司 3D reconstruction method for indoor structured scenes
CN117115362B (zh) * 2023-10-20 2024-04-26 成都量芯集成科技有限公司 3D reconstruction method for indoor structured scenes
CN117596367A (zh) * 2024-01-19 2024-02-23 安徽协创物联网技术有限公司 Low-power video surveillance camera and control method therefor

Also Published As

Publication number Publication date
CN114820924A (zh) 2022-07-29

Similar Documents

Publication Publication Date Title
WO2023178729A1 (zh) Museum visit analysis method and system based on BIM and video surveillance
Zollmann et al. Augmented reality for construction site monitoring and documentation
CN111462135A (zh) Semantic mapping method based on visual SLAM and 2D semantic segmentation
JP4185052B2 (ja) Augmented virtual environment
WO2020228766A1 (zh) Target tracking method and system based on real-scene modeling and intelligent recognition, and medium
CN104330074B (zh) Intelligent surveying and mapping platform and implementation method thereof
CN103198488B (zh) Fast real-time pose estimation method for PTZ surveillance cameras
WO2023093217A1 (zh) Data annotation method and apparatus, computer device, storage medium, and program
CN106204656A (zh) Target localization and tracking system and method based on video and 3D spatial information
CN110400352A (zh) Camera calibration using feature identification
CN111192321B (zh) Target object 3D localization method and apparatus
CN107481291B (zh) Traffic-surveillance model calibration method and system based on physical coordinates of marked dashed lines
CN115375779B (zh) Method and system for camera AR live-scene annotation
CN112802208B (zh) 3D visualization method and apparatus inside an airport terminal
CN115035162A (zh) Surveillance-video person localization and tracking method and system based on visual SLAM
CN114399552B (zh) Behavior recognition and localization method for indoor monitored environments
Shalaby et al. Algorithms and applications of structure from motion (SFM): A survey
JP2013051477A (ja) Video surveillance apparatus, video surveillance method, and program
CN112991534A (zh) Indoor semantic map construction method and system based on multi-granularity object models
CN107862713A (zh) Real-time camera-deflection detection and warning method and module for polled venues
CN113627005B (zh) Intelligent visual surveillance method
CN114202819A (zh) Robot-based substation inspection method and system, and computer
CN117711130A (zh) Factory safety-production supervision method, system and electronic device based on 3D modeling
CN112509110A (zh) Automatic image-dataset acquisition and annotation framework for land-based adversarial agents
CN109740458B (zh) Physical-appearance feature measurement method and system based on video processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22932790

Country of ref document: EP

Kind code of ref document: A1