CN115908691A - Layout data display system and layout data display method - Google Patents

Layout data display system and layout data display method

Info

Publication number: CN115908691A
Application number: CN202210579786.1A
Authority: CN (China)
Prior art keywords: layout data, viewpoint position, display system, data display, building model
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 鸟海渉, 藤原正康, 羽鸟贵大, 斋藤太地
Current Assignee: Hitachi Ltd
Original Assignee: Hitachi Ltd
Application filed by Hitachi Ltd
Priority date: 2021-08-18
Filing date: 2022-05-25

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention provides a layout data display system and a layout data display method that can efficiently generate a screen bringing a desired observation area of a building model into view. The layout data display system displays layout data (111) of a building model and includes a viewpoint position determination unit (122) that determines the viewpoint position used when the layout data (111) of the building model is displayed. The viewpoint position determination unit (122) selects the viewpoint position from candidate coordinates (701, 702) based on the number of times a line segment connecting a point (400) on the observation area set in the layout data (111) to a candidate coordinate (701, 702) of the viewpoint position interferes with objects arranged in the layout data (111).

Description

Layout data display system and layout data display method
Technical Field
The present invention relates to a layout data display system and a layout data display method for displaying layout data of a building or the like.
Background
In recent years, techniques such as Building Information Modeling (BIM), which promote decision-making by system users by integrating all the information of a building into a three-dimensional model and displaying that model on a screen, have been spreading. In these techniques, attribute data is embedded in each three-dimensional model that forms a component of the building. Patent document 1 is one example.
For example, patent document 1 describes "a BIM system that simulates the environment of an integrated BIM model while changing, on a layout screen, an environmental object that affects the environments inside and outside the BIM model." Based on the simulation result, this BIM system generates a layout screen that includes the daylighting and lighting environment around the elevator installation position.
Documents of the prior art
Patent literature
Patent document 1: JP 2014-123229 A
However, in planning an elevator installation, whether the waiting space is sufficient is often a point of discussion. To discuss the waiting space, it is desirable to generate a screen that brings the congestion area, that is, the waiting space in the three-dimensional model of the building, into view.
In general, such a screen can be obtained by rendering objects unnecessary for the display semi-transparent or hidden in the three-dimensional model of the entire building, or by manually adjusting the viewpoint, distance, and angle of a camera. However, when the three-dimensional model has many components, or when there are multiple waiting spaces, making these adjustments every time is undesirable in terms of man-hours. The prior art likewise does not mention a method of generating such a screen efficiently.
In light of the above, a method is needed for efficiently generating a screen that brings a desired observation region, such as a waiting space, into view.
Disclosure of Invention
In order to solve the above problem, a layout data display system according to an aspect of the present invention displays layout data of a building model and includes a viewpoint position determination unit that determines the viewpoint position used when the layout data of the building model is displayed. The viewpoint position determination unit selects the viewpoint position from candidate coordinates based on the number of times a line segment connecting a point on the observation area set in the layout data to a candidate coordinate of the viewpoint position interferes with objects arranged in the layout data.
Advantageous Effects of Invention
According to at least one aspect of the present invention, a screen for bringing a region desired to be observed into view can be efficiently generated from layout data of a building model.
Problems, structures, and effects other than those described above will be apparent from the following description of the embodiments.
Drawings
Fig. 1 is a block diagram showing a configuration example of a screen generating device as a layout data display system according to embodiment 1 of the present invention.
Fig. 2 is a diagram showing an example of the building model data and the congestion range.
Fig. 3 is a diagram illustrating camera information.
Fig. 4 is a diagram illustrating a method of determining the gaze point.
Fig. 5 is a diagram illustrating a method of determining the camera distance.
Fig. 6 is a flowchart showing an example of a procedure of the process of determining the camera angle.
Fig. 7 is a diagram illustrating interference confirmation processing when determining a camera angle.
Fig. 8 is a diagram showing an example of a viewpoint selection screen.
Fig. 9 is a diagram showing an example of a display screen in which a human object is arranged in the building model data.
Fig. 10 is a block diagram showing a configuration example of the screen generating device according to embodiment 2 of the present invention.
Fig. 11 is a diagram illustrating cut surface information.
Fig. 12 is a diagram showing an example of a display screen in which the building model data is cut by the cut surface.
Description of reference numerals
100, 100A screen generating device (layout data display system), 110 storage unit, 111 building model data, 112 congestion range, 113 camera information, 120 arithmetic unit, 121 congestion range acquisition unit, 122 camera information determination unit, 122a gaze point coordinate determination unit, 122b camera distance determination unit, 122c camera angle determination unit, 123 screen generation unit, 130 input/output unit, 300 camera object (viewpoint), 301 gaze point, 302 camera angle, 303 camera distance, 400 visualization point, 501 visualization point, 502 margin, 503 visualization sphere, 504 view angle, 505 distance, 506 view angle range, 701, 702 camera object (viewpoint candidate), 800 viewpoint selection screen, 810, 820 screen, 900 screen, 910 human object, 1001 cut surface information, 1002 cut surface generation unit, 1010 cutting area, 1111 elevator hall, 1112 passage, 1200 screen, 1201 to 1203 floor object, 1210 elevator hall, 1220 passage
Detailed Description
Examples of modes for carrying out the invention are described below with reference to the accompanying drawings. In the present specification and the drawings, components having substantially the same function or configuration are denoted by the same reference numerals, and redundant description thereof is omitted.
< embodiment 1 >
First, a layout data display system and a layout data display method for displaying layout data (building model data) of a building model according to embodiment 1 of the present invention will be described.
[ Structure of Screen Generation device ]
Fig. 1 is a block diagram showing a configuration example of a screen generating device serving as the layout data display system according to embodiment 1 of the present invention. The illustrated screen generating apparatus 100 includes a storage unit 110, an arithmetic unit 120, an input/output unit 130, and a bus 140. The storage unit 110, the arithmetic unit 120, the input/output unit 130, and the bus 140 constitute a computer system (an example of a computer).
The storage unit 110 is composed of a main storage device such as a DRAM (Dynamic Random Access Memory) or an SRAM (Static Random Access Memory) and an auxiliary storage device such as a hard disk drive or flash memory. The storage unit 110 holds building model data 111, a congestion range 112, and camera information 113. These data and information are detailed later. Further, the programs (computer programs) of the software that realizes each function of the screen generation device 100 according to the present embodiment are stored in the storage unit 110. The storage unit 110 serves as an example of a non-transitory computer-readable recording medium storing a program executed by a computer.
The arithmetic unit 120 is composed mainly of a Central Processing Unit (CPU) and executes a plurality of processes. The CPU reads and executes the programs stored in the storage unit 110 to realize predetermined processing. This processing is broadly divided into a congestion range acquisition unit 121, a camera information determination unit 122, and a screen generation unit 123. Another processor such as an MPU may be used instead of the CPU.
The congestion range acquisition unit 121 executes processing that acquires information on a congestion range in the building model (for example, the congestion range 112 in fig. 2, described later). The details of the congestion range acquisition unit 121 will be described later. In the present embodiment, the congestion range is described as the example of the area displayed on the screen (the observation area), but the observation area is not limited to this example.
The camera information determination unit 122 (an example of the viewpoint position determination unit) executes processing that determines the information related to the viewpoint position (hereinafter, "camera information") used when the observation region (for example, the visualization sphere 503 in fig. 5) of the layout data of the building model (the building model data 111) is displayed. The viewpoint is information such as "from where to look" and "what to look at" in the three-dimensional space of the building model data 111. The camera information determination section 122 is broadly divided into a gaze point coordinate determination section 122a, a camera distance determination section 122b, and a camera angle determination section 122c. Details of these functions of the camera information determination unit 122 will be described later.
The screen generating unit 123 generates a display screen by applying a projection transformation to the observation area of the layout data of the building model (building model data 111) based on the determined viewpoint position. The range of the building model data 111 displayed on the display screen (the observation area) does not necessarily coincide with the aforementioned visualization sphere 503.
The input/output unit 130 includes an input unit 131 and an output unit 132. The input unit 131 is an input device, such as a mouse or keyboard, operated by the user. The output unit 132 is an output device, such as a display or printer, that presents the screen. The input/output unit 130 also functions as a communication interface, for example a NIC (Network Interface Card). The communication interface can transmit and receive various data to and from external devices via a communication network such as a LAN, the Internet, or a dedicated line.
The bus 140 is a common line for performing data communication between the functional blocks of the screen generating apparatus 100. The storage unit 110, the calculation unit 120, and the input/output unit 130 are configured to be able to communicate with each other via a bus 140.
The computer system constituting the screen generation device 100 may be a computer system in which a plurality of computer systems are connected via communication. For example, the storage unit 110, the calculation unit 120, and the input/output unit 130 may be implemented by different computer systems, and communication means for connecting the computer systems may be the bus 140.
[ Explanation of data ]
Next, data used in the screen generation apparatus 100 will be described. First, the building model data 111 (fig. 1) will be described with reference to fig. 2.
Fig. 2 is a diagram showing an example of the building model data 111 and the congestion range 112. In fig. 2, the horizontal direction is defined as the X axis, the vertical direction as the Y axis, and the direction perpendicular to the XY plane as the Z axis. The X axis runs in the opening and closing direction of the hall doors 203L and 203R. The Z axis faces the hall doors 203L and 203R head-on (orthogonally). Fig. 2 shows an example of a screen in which an area of the building model data 111 including the congestion range 112 is viewed from a certain viewpoint.
The building model data 111 is model data including at least shape information of a building model (building). As representative Building model data, BIM (Building Information Modeling) data can be cited. For example, the building model data 111 includes data on objects such as steps, windows, columns, ceilings, escalators, and various spaces, in addition to the floor 201, the wall 202, and elevators (e.g., the hoistway 204 and the hall doors 203L and 203R). In the case where the hall doors 203L and 203R are not distinguished, they are described as hall doors 203.
The congestion range 112 (fig. 1) is object data having information on at least a congested range in the building model between the observation area including the gaze point 301 and the viewpoint position. The "congested range" specifically refers to the area of the waiting queue for a lift installation such as an elevator or an escalator. Fig. 2 shows an example in which the congestion range 112 is set in an elevator hall. The congestion range 112 may be three-dimensional data, as shown by the dotted line in fig. 2, or two-dimensional data (a plane) that omits the definition in the height direction. The congestion range 112 may be included as an attribute in the building model data 111. The congestion range 112 may also include information indicating the size and position of the expected congested area, as well as the cause of congestion (an elevator, an escalator, or the like), the predicted waiting time, the expected number of people in the congested area, and so on.
Next, the camera information 113 (fig. 1) will be described with reference to fig. 3.
Fig. 3 is a diagram illustrating the camera information 113. The camera information 113 is information for projecting three-dimensional objects onto a two-dimensional screen. Specifically, the camera information 113 includes at least three pieces of information: a gaze point 301, a camera distance 303, and a camera angle 302. When the two-dimensional screen is generated by perspective projection, information on the view angle of the camera object 300 can be held in addition to the above. The camera object 300 is an object placed in the building model data 111 to determine the viewpoint position.
In fig. 3, the camera object 300 (viewpoint position) is set so that the hall door 203 installed on the hoistway 204 of the building model data 111 can be seen. A front wall 202F stands between the gaze point 301 and the camera object 300, and a rear wall 202R stands behind the hoistway 204. The distance from the gaze point 301 to the camera object 300 (viewpoint position) is the camera distance 303.
The horizontal and vertical angles of the line segment connecting the gaze point 301 and the camera object 300 (viewpoint position) form the camera angle 302. In fig. 3, the camera angle 302 appears as a single angle because the figure is drawn in a plane, but it actually holds two angles: a camera angle around the Y axis (around the vertical axis) representing the horizontal angle, and a camera angle around the X axis (around a horizontal axis) representing the vertical angle. The camera angle 302 in fig. 3 represents the camera angle in the vertical direction. By applying projection processing to the three-dimensional objects based on these pieces of information, a two-dimensional screen is uniquely generated.
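To make the relationship above concrete: the viewpoint position follows uniquely from the gaze point 301, the camera distance 303, and the two camera angles. The sketch below is illustrative only and is not code from the patent; the structure name CameraInfo, the function camera_position, and the degree-based angle convention are assumptions, with the Y axis taken as vertical to match fig. 2.

```python
import math
from dataclasses import dataclass

@dataclass
class CameraInfo:
    gaze_point: tuple   # (x, y, z) of the gaze point 301
    distance: float     # camera distance 303
    angle_h: float      # camera angle around the Y axis (horizontal), in degrees
    angle_v: float      # camera angle around the X axis (vertical), in degrees

def camera_position(cam: CameraInfo) -> tuple:
    """Place the camera object 300 on a sphere of radius `distance`
    around the gaze point 301, according to the two camera angles."""
    h = math.radians(cam.angle_h)
    v = math.radians(cam.angle_v)
    gx, gy, gz = cam.gaze_point
    x = gx + cam.distance * math.cos(v) * math.sin(h)
    y = gy + cam.distance * math.sin(v)   # Y is the vertical axis, as in fig. 2
    z = gz + cam.distance * math.cos(v) * math.cos(h)
    return (x, y, z)
```

With angle_v = 0 the gaze point is viewed from the horizontal direction, and with angle_v close to 90 it is viewed from almost directly above, matching the discussion of camera angles later in this description.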
[ Explanation of processing ]
Next, processing performed by each functional block of the arithmetic unit 120 of the screen generating apparatus 100 will be described.
(Congestion range acquisition unit)
First, the process of the congestion range acquisition unit 121 will be described.
The congestion range acquisition unit 121 predicts the congestion range 112 of the building model represented by the building model data 111 and acquires it as a congestion range object. One way for the congestion range acquisition unit 121 to obtain the congestion range 112 of the building model is traffic calculation of the lift installation (an elevator in the present embodiment). First, the congestion range acquisition unit 121 calculates the predicted number of waiting passengers of the elevator from the difference between the transportation capacity over a predetermined time (for example, 5 minutes), obtained by elevator traffic calculation, and the predicted traffic demand of the corresponding time period. Next, by laying out the floor areas occupied by that predicted number of waiting passengers, starting from a queue-head position input in advance, the outline of the occupied area can be estimated as the space required by the predicted waiting passengers (the congestion range 112). The congestion range acquisition unit 121 stores the estimated congestion range 112 in the storage unit 110.
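As a rough numerical illustration of this estimation (not code from the patent; the function names, the 5-minute window, the per-person footprint of 0.25 m2, and the fixed queue width are all assumed values):

```python
def predicted_waiting_count(transport_capacity_5min: float,
                            predicted_demand_5min: float) -> int:
    """Passengers expected to be left waiting: predicted demand minus what
    the elevators can carry over the same predetermined time (e.g., 5 min)."""
    return max(0, round(predicted_demand_5min - transport_capacity_5min))

def queue_footprint(waiting_count: int,
                    area_per_person: float = 0.25,   # m^2 per person, assumed
                    queue_width: float = 1.2) -> tuple:
    """Approximate the congestion range 112 as a rectangle growing back
    from the queue-head position input in advance."""
    total_area = waiting_count * area_per_person
    queue_length = total_area / queue_width
    return (queue_width, queue_length)   # (width, depth) of the rectangle

# Example: capacity 60 persons / 5 min, demand 85 persons / 5 min
n = predicted_waiting_count(60, 85)   # -> 25 passengers waiting
print(queue_footprint(n))             # -> (1.2, about 5.2 m deep)
```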
The congestion range 112 may also be predicted using the result of a traffic simulation. In this case, the congestion range acquisition unit 121 may set, as the congestion range 112, the range of the waiting queue calculated by the traffic simulator together with regions where the density of people exceeds a threshold.
Instead of obtaining the predicted congestion range by traffic calculation or people flow simulation, the congestion range acquisition unit 121 may obtain the congestion range 112 by analyzing image data captured by monitoring cameras installed in the actual building corresponding to the building model. In this case, real-time traffic can be reflected in the congestion range 112. This concludes the description of the congestion range acquisition unit 121.
By obtaining the congestion range 112 of the target lift installation through elevator traffic calculation, traffic simulation, or analysis of actual camera images as described above, and reflecting it on the display screen, a screen that is effective for explanations to the building owner and others can be created.
(Camera information determination unit)
Next, the processing of the camera information determination unit 122 will be described with reference to fig. 4 to 6.
The camera information determination unit 122 determines the gaze point 301, camera distance 303, and camera angle 302 used when projecting the three-dimensional objects onto the screen.
Determination of the gaze point coordinates
A method by which the gaze point coordinate determination unit 122a determines the gaze point coordinates will be described with reference to fig. 4.
Fig. 4 is a diagram illustrating the method of determining the gaze point coordinates. First, points that should fall within the field of view of the screen finally obtained by projection are defined as visualization points 400, and these visualization points 400 are determined. The types of the visualization points 400 may be predetermined or selectable by the user. For example, when creating a screen that shows the congestion around an elevator, the position of the hall door 203 (the elevator) in the building model data 111 and the end points of the elevator's congestion range 112 may be set as the visualization points 400, as shown in fig. 4. An end point of the congestion range 112 is, for example, an end point (e.g., a vertex) of a line segment bounding the area (space) representing the congestion range 112. In the present embodiment, the visualization points 400 include at least the end points of the congestion range 112.
Fig. 4 shows an example with a shuttle elevator that runs express service between a reference floor (normally the 1st floor; floor 201) and a floor above it (floor 211), and an elevator whose service floors begin at that upper floor (floor 211). The upper floor (floor 211) is called a sky lobby or the like. For the shuttle elevator on the right, the hoistway 204-1 ends at the upper floor (floor 211). For the elevator on the left, the hoistway 204-2 extends upward from the upper floor (floor 211). In fig. 4, a plurality of visualization points 400 are set on both the lower floor (floor 201) and the upper floor (floor 211).
When determining the visualization points 400, only those on the elevator (hall door 203) and the congestion range 112 of the departure floor may be extracted using the elevator's departure floor information. This is effective when it is desired to display a congested scene for a time period, such as commuting hours, in which congestion tends to arise at the departure floor.
When determining the visualization points 400, the congestion ranges 112 of a plurality of elevator groups (banks) may also be set as visualization points 400 at the same time. This leads to the final creation of a viewpoint that takes in multiple elevator groups at once. Combinations of elevator groups that may be displayed simultaneously can be input to the screen generation apparatus 100 in advance. Alternatively, the screen generation device 100 (e.g., the gaze point coordinate determination unit 122a) may automatically combine groups so that groups sharing service floors are displayed together.
When determining the coordinates of the gaze point 301, the gaze point coordinate determination unit 122a may, for example, calculate the barycentric (centroid) coordinates of the set of visualization points 400. The gaze point 301 may also be determined by other means, such as user input to the screen generating apparatus 100.
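A minimal sketch of that centroid calculation, assuming the visualization points 400 are given as (x, y, z) tuples (the function name and sample coordinates are illustrative):

```python
def gaze_point_from_visualization_points(points):
    """Barycentric (centroid) coordinates of the set of visualization
    points 400, used as the gaze point 301."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

# Example: hall-door position and congestion-range end points (assumed values)
pts = [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (4.0, 0.0, 6.0), (0.0, 0.0, 6.0)]
print(gaze_point_from_visualization_points(pts))  # -> (2.0, 0.0, 3.0)
```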
Determination of camera distance
Next, a method by which the camera distance determination unit 122b determines the camera distance 303 will be described with reference to fig. 5. The building model data 111 to be displayed on the screen contains a plurality of the visualization points 400 described above.
Fig. 5 is a diagram illustrating the method of determining the camera distance. In this method, the view angle 504 of the camera object 300 is determined in advance. First, the visualization point 501 farthest from the previously determined gaze point 301 is found among the visualization points 400, and a visualization sphere 503 (an example of the observation region) is formed whose radius r is the distance between the gaze point 301 and the visualization point 501 plus a margin 502. The visualization sphere 503 is three-dimensional.
Then, the distance 505 at which the visualization sphere 503 is exactly tangent to the view angle range 506 can be taken as the camera distance (the camera distance 303 in fig. 3). With the view angle 504 denoted F and the radius of the visualization sphere 503 denoted r, this distance 505 is obtained as r/sin(F/2). This yields a camera distance at which all the visualization points 400 fall within the field of view. An area including at least the visualization sphere 503 is displayed on the two-dimensional screen.
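The distance 505 follows directly from the stated formula. A short sketch with assumed numbers (the function name and the example values are illustrative):

```python
import math

def camera_distance(view_angle_deg: float, sphere_radius: float) -> float:
    """Distance 505 at which the visualization sphere 503 (radius r =
    farthest-point distance plus margin 502) exactly fits inside the
    view angle range 506: d = r / sin(F / 2)."""
    return sphere_radius / math.sin(math.radians(view_angle_deg) / 2.0)

# Example: view angle F = 60 degrees, radius r = 5 m -> d = 5 / sin(30 deg) = 10 m
print(camera_distance(60.0, 5.0))   # 10.000000000000002
```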
Determination of camera angle
Next, a method of determining the camera angle 302 by the camera angle determination unit 122c will be described with reference to fig. 6.
Fig. 6 is a flowchart showing an example of the procedure for determining the camera angle 302.
Determining the camera angle 302 amounts to determining the viewpoint position for the building model data 111.
First, the camera angle determination unit 122c generates viewpoint candidates v by varying the camera angle at the determined camera distance 303, and repeats the processing of S602 to S604 for each viewpoint candidate v (S601). In process S601, the camera angle determination unit 122c first generates one viewpoint candidate v with a set camera angle around the Y axis and a set camera angle around the X axis. Here, the viewpoint means the position of the camera object 300, which is uniquely determined once the gaze point 301, camera distance 303, and camera angle 302 are fixed.
The step width for each camera angle around the Y axis and around the X axis may be set in advance and stored in the storage unit 110, or may be made user-configurable. For example, a step width of 5 degrees may be used.
Further, the maximum and minimum values of each camera angle around the Y axis and around the X axis may be input by the user. In particular, by setting the maximum and minimum of the camera angle around the X axis (the vertical direction) appropriately, generation of viewpoints with excessively steep camera angles can be prevented in advance.
For example, the minimum of the camera angle around the X axis is 0 degrees and the maximum is 90 degrees in absolute value. A camera angle of 0 degrees around the X axis means that the gaze point 301 is viewed from the horizontal direction. At an excessively steep camera angle, the gaze point 301 is viewed from directly above or nearly directly above, and the building model data 111 shown on the screen becomes hard to read. Also, since what lies directly below the gaze point 301 in the building model data 111 is the floor, a viewpoint from directly below or nearly directly below is not normally adopted. It is therefore desirable that, when determining the camera angle 302, the camera angle determination unit 122c use a weighted evaluation function whose evaluation value decreases as the vertical camera angle approaches 0 degrees or 90 degrees.
Conversely, when the camera angle around the Y axis (the horizontal direction) is at its minimum (0 degrees), the gaze point 301 is viewed exactly side-on and is hard to read, so a camera angle of, for example, 10 degrees or more is desirable. Likewise, it is desirable that, when determining the camera angle 302, the camera angle determination unit 122c use a weighted evaluation function whose evaluation value is higher the closer the horizontal camera angle is to the positive direction of the drawing of the building model data 111. The positive direction of the drawing is defined for the two-dimensional drawing and registered as attribute information in the building model data 111. In general, the upward direction of the plan view is set as the positive direction. An orientation mark may be used to express the relationship between the positive direction of the plan view and the compass direction.
Next, the camera angle determination unit 122c performs process S602. In process S602, for the set P of visualization points p (the visualization points 400 in fig. 5) generated when the camera distance 303 was determined, the camera angle determination unit 122c repeats process S603 once per visualization point p (S602).
In process S603, the camera angle determination unit 122c counts, for each visualization point p in the set P, the number of times the line segment connecting the visualization point p and the viewpoint candidate v interferes with other objects, and adds that count to the total interference count of the viewpoint candidate v (S603).
Fig. 7 is a diagram illustrating the interference confirmation processing performed when determining the camera angle 302. In fig. 7, the camera angle determination unit 122c connects the visualization point 400 to the camera object 701 of one viewpoint candidate with a straight line, and counts the number of times n that the line segment between these two points interferes with other objects in the building model data 111. It then connects the visualization point 400 to the camera object 702 of the other viewpoint candidate and counts the interference in the same way. In the example of fig. 7, the line segment connecting the visualization point 400 and the camera object 702 interferes with the front wall 202F at the interference point In. Therefore, by selecting the camera object 701, located further up, as the viewpoint position, the visualization point 400 is prevented from being occluded by the front wall 202F.
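The interference count requires a segment-object intersection test. The patent does not specify one; as an illustrative assumption, if each object is approximated by an axis-aligned bounding box, the standard slab method can stand in for the check of process S603:

```python
def segment_intersects_aabb(p0, p1, box_min, box_max) -> bool:
    """Slab test: does the line segment p0 -> p1 pass through the
    axis-aligned box [box_min, box_max]? Used here as a stand-in
    for the interference check of process S603."""
    t_enter, t_exit = 0.0, 1.0
    for i in range(3):
        d = p1[i] - p0[i]
        if abs(d) < 1e-12:                       # segment parallel to this slab
            if p0[i] < box_min[i] or p0[i] > box_max[i]:
                return False
            continue
        t0 = (box_min[i] - p0[i]) / d
        t1 = (box_max[i] - p0[i]) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_enter, t_exit = max(t_enter, t0), min(t_exit, t1)
        if t_enter > t_exit:
            return False
    return True
```

Counting one hit per box crossed corresponds to the per-object counting mentioned below; counting per polygon would require testing each face instead.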
The interference count may include every intersection between the line segment and the polygons constituting an object, or may be counted as one per object.
The counted values are summed to give the total interference count associated with the current viewpoint candidate v. Since process S602 repeats this for every visualization point p, the total interference count accumulates the contributions of all visualization points p.
When the total interference count exceeds a predetermined threshold, a predetermined, sufficiently large value may be added to the total and the interference confirmation processing for that viewpoint candidate v may be ended early. When the visualization points p and objects are numerous and the interference check is time-consuming, this early termination can be expected to shorten the processing time.
As for the objects subjected to the interference check, a dedicated attribute may be given in the building model data 111 in advance, or the interference check may be performed only on objects having a specific attribute in the building model data 111. Such processing avoids checking interference against objects that are not displayed in the first place, such as ceilings and air ducts, and against objects outside the display target layer.
Next, when the processing for each visualization point p in the set P is complete, the camera angle determination unit 122c ends the loop over the set P (S604).
When the processing for all the viewpoint candidates v generated by varying the camera angle is complete, the camera angle determination unit 122c ends the outer loop (S605).
Finally, the camera angle determination unit 122c searches for and acquires the viewpoint candidate v with the best evaluation value, using a weighted evaluation function whose value is lower the larger the total interference count recorded for each viewpoint candidate v (S606). This yields a viewpoint from which fewer visualization points are occluded by other objects. The screen generation unit 123, described later, can therefore readily create a screen that brings the congestion range 112 of the elevator into view.
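Putting the flowchart of fig. 6 together, the following sketch enumerates candidates (S601), accumulates interference counts with early termination (S602 to S604), and keeps the best-scoring candidate (S605, S606). It reuses the CameraInfo/camera_position and segment_intersects_aabb sketches above; the 5-degree step, threshold, penalty, and weights are illustrative assumptions, not values from the patent.

```python
def total_interference(candidate_pos, vis_points, obstacle_boxes,
                       threshold=50, penalty=10**6):
    """S602 to S604: sum interference counts over all visualization points p
    for one viewpoint candidate v, terminating early past a threshold."""
    total = 0
    for p in vis_points:
        for bmin, bmax in obstacle_boxes:
            if segment_intersects_aabb(p, candidate_pos, bmin, bmax):
                total += 1                  # counted once per object here
        if total > threshold:
            return total + penalty          # abandon this candidate early
    return total

def best_camera_angle(base: CameraInfo, vis_points, obstacle_boxes,
                      step_deg=5, w_interf=1.0, w_angle=0.1):
    """S601, S605, S606: vary the camera angles at the fixed camera
    distance, score each candidate with a weighted evaluation function,
    and return the candidate with the best score."""
    best, best_score = None, float("-inf")
    for angle_v in range(step_deg, 90, step_deg):      # avoid 0 and 90 degrees
        for angle_h in range(0, 360, step_deg):
            cam = CameraInfo(base.gaze_point, base.distance, angle_h, angle_v)
            n = total_interference(camera_position(cam), vis_points, obstacle_boxes)
            # Higher is better: penalize occlusion and steep or shallow angles.
            score = -w_interf * n - w_angle * abs(angle_v - 45)
            if score > best_score:
                best, best_score = cam, score
    return best
```

The abs(angle_v - 45) term corresponds to the optional evaluation term, described next, that favors vertical angles near 45 degrees.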
Evaluation terms other than the total interference count may be added to the weighted evaluation function. For example, a term whose value decreases the farther the line-of-sight direction from the viewpoint to the gaze point 301 deviates from the positive direction of the drawing may be added, making it easier to obtain a viewpoint facing the positive direction of the drawing of the building model data 111. This produces a screen that is also easy to convey to people who are following the discussion with the drawings.
To avoid generating viewpoints with steep camera angles as much as possible, a term whose value is lower the farther the camera angle around the X axis (the vertical direction) is from, for example, 45 degrees may also be added to the weighted evaluation function.
The following method may also be used: when determining the camera angle with weighted evaluation functions, the candidates are evaluated with a plurality of evaluation functions having different weights, the best viewpoint candidate is determined per evaluation function, and the final selection is left to the user. In this case, the user may select the final camera angle on a viewpoint selection screen, for example as shown in fig. 8.
Fig. 8 is a diagram showing an example of the viewpoint selection screen. On the viewpoint selection screen 800 shown in the figure, two screens 810 and 820 are displayed side by side. The viewpoint position of the left screen 810 is substantially the same as that of fig. 2. The viewpoint position of the screen 820 is an example with a larger camera angle around the Y axis. A message prompting a choice, "Please select a viewpoint", is shown on the viewpoint selection screen 800. After selecting screen 810 or 820 through the input unit 131, the user confirms the screen to be displayed by pressing the "OK" button. A selection can be undone with the cancel button. In fig. 8, screen 810 is selected, as indicated by the thick border.
The determined gaze point 301, camera distance 303, and camera angle 302 are recorded together in the storage unit 110 as the camera information 113.
This concludes the processing of the camera information determination unit 122.
(Screen generation unit)
The process of the screen generating unit 123 will be described last.
The screen generating unit 123 applies a projection transformation to the building model data 111 based on the camera information 113, generates the display screen, and outputs it to the output unit 132. The projection processing may use, for example, the ordinary perspective projection method. When generating the screen, the screen generating unit 123 may place human objects 910 in the building model data 111 at the positions of people obtained from a simulation or from image data of an actual monitoring camera, and generate the display screen in that state. Fig. 9 shows an example of a display screen in which human objects 910 are arranged in the building model data 111. On the screen 900, the human objects 910 are displayed using image data from a monitoring camera (not shown) in the actual building. Screen 900 shows that the elevator hall is currently more crowded than at planning time (e.g., screens 810 and 820 of fig. 8).
When the screen generation unit 123 generates the screen, if another object interferes with a line segment connecting a visualization point determined by the camera information determination unit 122 to the camera, the interfering object may be rendered semi-transparent or hidden.
When determining the transparency of the semi-transparent display, the screen generating unit 123 may increase the transparency as the interference count increases. Adjusting the transparency according to the interference count keeps the accumulated transparency comparable between portions where object interference is heavy and portions where it is light. For example, even through a heavily interfering portion of the model, a person or any other object on the far side can still be shown semi-transparently.
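One possible realization of this transparency adjustment (the mapping and its constants are assumptions for illustration, not taken from the patent):

```python
def object_alpha(interference_count: int, k: float = 0.5,
                 alpha_min: float = 0.1) -> float:
    """Opacity of an interfering object: the more often it blocks lines
    from visualization points to the camera, the more transparent it is
    drawn, down to a floor of alpha_min."""
    return max(alpha_min, 1.0 / (1.0 + k * interference_count))

# Objects hit 0, 2, and 8 times get alpha 1.0, 0.5, and 0.2 respectively.
```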
At screen generation time, the screen generation unit 123 may also hide objects unrelated to the passage of people, such as ceilings and air ducts, using the attribute information registered in advance for each object in the building model data 111. The screen generating unit 123 may likewise hide objects in layers other than the display target layer in which the visualization points 400 are placed.
Embodiment 1 of the present invention has been described above. As described, the layout data display system (screen generating apparatus 100) according to embodiment 1 displays layout data of a building model (building model data 111) and includes a viewpoint position determining unit (camera information determination unit 122) that determines the viewpoint position (camera object 300) used when the layout data of the building model is displayed. The viewpoint position determination unit selects the viewpoint position from candidate coordinates based on the number of times a line segment connecting a point (visualization point 400) on the observation area set in the layout data to a candidate coordinate of the viewpoint position (viewpoint candidates v; camera objects 701 and 702) interferes with an object (for example, the front wall 202F) arranged in the layout data.
With the layout data display system according to embodiment 1 configured in this way, the viewpoint position is selected from the candidate coordinates based on how often the line segments from points on the observation area to the candidate coordinates interfere with objects arranged in the layout data, so a viewpoint position with little object interference can be selected. A screen that brings the observation region into the field of view can thus be generated automatically from the layout data of the building model, shortening the man-hours the user spends searching for a viewpoint.
For example, when an elevator is installed in the building model, a screen in which the congestion area produced by the elevator enters the field of view as the observation area can be generated automatically and efficiently. Therefore, in elevator installation planning and operation using the layout data of the building model, the man-hours the user would otherwise spend manually searching for a viewpoint position can be reduced.
In the layout data display system (screen generating apparatus 100) according to embodiment 1 described above, the viewpoint position determining unit (camera information determination unit 122) determines the distance (camera distance 303) from the gaze point to the viewpoint position, based on the input view angle (view angle 504) and the gaze point (gaze point 301) in the observation area, so that the observation area (visualization sphere 503) contains the visualization point group formed by the plurality of visualization points (visualization points 400) extracted from the layout data (building model data 111).
Also, in the layout data display system (screen generating apparatus 100) according to embodiment 1, the viewpoint position determining unit (camera information determination unit 122) calculates the number of times the line segments connecting the plurality of visualization points (visualization points 400) in the observation area to the candidate coordinates of the viewpoint position (camera objects 701 and 702) interfere with objects in the layout data (building model data 111), selects as the viewpoint position the candidate coordinates whose line segments have the smaller interference count, and determines the camera angle (camera angle 302) representing the horizontal and vertical angles of the line segment connecting the gaze point (gaze point 301) and the viewpoint position.
< embodiment 2 >
In embodiment 2, an example is shown in which a screen is generated automatically in which portions unrelated to the passage of people are cut away from the building model data 111 and hidden. Embodiment 2 will be described with a focus on the differences from embodiment 1.
Fig. 10 is a block diagram showing a configuration example of the screen generating device according to embodiment 2 of the present invention. The illustrated screen generation device 100A includes a storage unit 110, an arithmetic unit 120, an input/output unit 130, and a bus 140. The contents of the input/output unit 130 and the bus 140 are the same as those of embodiment 1.
The storage unit 110 holds cut surface information 1001 in addition to the building model data 111, the congestion range 112, and the camera information 113.
The processing performed by the arithmetic unit 120 is broadly divided into a congestion range acquisition unit 121, a cut surface generation unit 1002, a camera information determination unit 122, and a screen generation unit 123.
[ Explanation of data ]
Next, data used in the screen generating apparatus 100A will be described. Only the differences from embodiment 1 will be described here.
Fig. 11 is a diagram illustrating the cut surface information 1001. As shown in fig. 11, the cut surface information 1001 is information representing the region used to cut and display the building model data 111, and contains information on the cross sections (position, direction, and angle) along which the building model data 111 is to be cut. The region may be closed, or it may be a partially open half space. In fig. 11, the building model data 111 (a building) is cut by the cutting area 1010 based on the cut surface information 1001. The cutting area 1010 cuts through the building model data 111 from its lowest floor to its top floor. The cutting area 1010 contains the objects of the elevator hall 1111 of each floor and of the passages 1112 connected to the elevator halls 1111. The elevator hall 1111 is the portion in front of the hoistway.
[ Explanation of processing ]
Next, a process performed by each functional block of the arithmetic unit 120 of the screen generating apparatus 100A will be described.
(Congestion range acquisition unit)
The congestion range acquisition unit 121 may perform the same processing as in embodiment 1.
(Cut surface generation unit)
The cut surface generation unit 1002 generates the cut surface information 1001 for cutting the building model data 111 in order to improve the visibility of the congestion range 112. Generating the cut surface information 1001 corresponds to determining the region that will ultimately remain as display data. One method of generating the cut surface information 1001 is, for example, to obtain the outline of the regions having a passage attribute by referring to the attributes of the space objects in the building model data 111.
Another method of generating the cut surface information 1001 is to run a people flow simulation on the building model data 111 and form the region so that it contains the areas people pass through. This is achieved, for example, by taking the shape of the people's passage area: the outline of the passage area can be obtained by recording the routes along which people move and laying out the areas the people occupy along those routes.
Alternatively, the cut surface information 1001 may be generated by obtaining the outline of the set of visualization points 400.
The processing of the cut surface generating unit 1002 is described above.
(Camera information determination unit)
As in embodiment 1, the camera information determination unit 122 determines the gaze point 301, camera distance 303, and camera angle 302 needed to compose the screen. The gaze point 301 and the camera distance 303 may be determined by the same methods as in embodiment 1.
As described in embodiment 1, the camera angle is determined by computing, for each viewpoint candidate v, the total number of times the line segments connecting the visualization points 400 and the camera object interfere with other objects, and by using a weighted evaluation function whose value increases as the total interference count decreases. However, when counting interference, hits whose interference point on an object lies in the region removed by the cut surface information 1001 (outside the cutting area 1010 in fig. 11) need not be counted. Alternatively, before the interference check, the building model data 111 may be cut by the cut surface information 1001 to construct new three-dimensional data. This makes it possible to determine a camera angle that takes the state of the cut building model data 111 into account.
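The only change from the embodiment 1 interference count is a filter on the intersection points. A sketch, with the additional assumption that the cutting area 1010 is modeled as an axis-aligned box:

```python
def point_in_cut_region(pt, cut_min, cut_max) -> bool:
    """True if pt lies inside the cutting area 1010 (modeled here, as an
    assumption, as an axis-aligned box [cut_min, cut_max])."""
    return all(cut_min[i] <= pt[i] <= cut_max[i] for i in range(3))

def counted_interference(hit_points, cut_min, cut_max) -> int:
    """Count only intersection points that survive the cut: hits in the
    removed region (outside the cutting area 1010) are not counted."""
    return sum(1 for pt in hit_points
               if point_in_cut_region(pt, cut_min, cut_max))
```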
The camera information determination unit 122 has been described above.
(Screen generation unit)
The screen generating unit 123 projects each object of the building model data 111 based on the camera information 113, and generates and outputs the screen. When each object is projected, portions lying in the region removed by the cut surface information 1001 (outside the cutting area 1010 in fig. 11) may be excluded from the projection. This processing produces a screen 1200 in which each object is cut, as shown in fig. 12.
Fig. 12 is a diagram showing an example of a display screen in which the building model data 111 has been cut by the cut surfaces. The screen 1200 of fig. 12 shows a floor object 1201 with the passage attribute on the 1st floor, a floor object 1202 with the passage attribute on the 2nd floor, and a floor object 1203 with the passage attribute on the 3rd floor. Each of the floor objects 1201 to 1203 consists of an elevator hall 1210 and a passage 1220 connected to it. The elevator hall 1210 on the 1st floor shows the hall doors 203L and 203R and the cars 205L and 205R, both of which are stopped at the 1st floor. Hall doors 203L and 203R are likewise shown in the elevator hall 1210 on the 2nd floor and in the elevator hall 1210 on the 3rd floor. The other objects of the building model data 111 are hidden by the cut surface information 1001.
Embodiment 2 of the present invention has been described above. As described, the layout data display system (screen generating device 100A) according to embodiment 2 includes a cut surface generation unit (cut surface generation unit 1002) that generates cut surface information (cut surface information 1001) representing the cut surfaces of the layout data in order to display the observation area of the layout data (building model data 111). Based on the cut surface information, the screen generation unit (screen generation unit 123) projectively transforms, into the observation region, the objects located inside the region (cutting area 1010) bounded by the cut surfaces of the layout data.
With the layout data display system according to embodiment 2 configured in this way, a screen in which portions unrelated to the passage of people are cut away from the layout data of the building model and hidden can be generated automatically. This reduces the man-hours the user needs to set up display, cutting, and the like for the objects. Furthermore, by removing regions that obstruct the view of the congestion range 112, the visibility of the congestion range 112 can be improved.
In embodiments 1 and 2 above, an elevator was used as the example, but another lift installation, such as an escalator, may be used.
Furthermore, the present invention is not limited to embodiments 1 and 2 described above, and various other applications and modifications can of course be adopted without departing from the gist of the present invention set out in the claims. For example, the above embodiments describe the configuration of the layout data display system (screen generating apparatuses 100 and 100A) in detail and concretely in order to explain the present invention clearly, but the invention is not necessarily limited to configurations including all the components described. Part of the configuration of one embodiment may be replaced with components of another embodiment, and components of another embodiment may be added to the configuration of one embodiment. It is also possible to add, replace, or delete other components with respect to part of the configuration of each embodiment.
In addition, some or all of the above configurations, functions, processing units, and the like may be realized in hardware, for example by designing them as integrated circuits. As hardware, an FPGA (Field Programmable Gate Array), an ASIC (Application Specific Integrated Circuit), or the like can be used.
Each component of the screen generating apparatuses 100 and 100A according to embodiments 1 and 2 may be implemented on any hardware, as long as the pieces of hardware can exchange information with one another via a network. The processing performed by one processing unit of the arithmetic unit may be realized by a single piece of hardware or by distributed processing across multiple pieces of hardware.
In this specification, terms such as "parallel" and "orthogonal" do not denote only strict parallelism and orthogonality; they also include "substantially parallel" and "substantially orthogonal", within the range in which the relevant functions can still be exhibited.

Claims (14)

1. A layout data display system for displaying layout data of a building model, comprising:
a viewpoint position determination unit that determines a viewpoint position when the layout data of the building model is displayed,
wherein the viewpoint position determination unit selects the viewpoint position from candidate coordinates based on the number of times a line segment connecting a point on the observation area set in the layout data to a candidate coordinate of the viewpoint position interferes with objects arranged in the layout data.
2. The layout data display system of claim 1,
wherein the viewpoint position determination unit determines the distance from the gaze point to the viewpoint position, based on an input view angle and the gaze point in the observation area, so that a visualization point group composed of a plurality of visualization points extracted from the layout data is contained in the observation area.
3. The layout data display system of claim 2,
wherein the viewpoint position determination unit calculates the number of times line segments connecting the plurality of visualization points in the observation area to the candidate coordinates of the viewpoint position interfere with objects in the layout data, selects as the viewpoint position the candidate coordinates whose line segments have the smaller interference count, and determines a camera angle representing the horizontal and vertical angles of the line segment connecting the gaze point and the viewpoint position.
4. The layout data display system of claim 3,
wherein the viewpoint position determination unit uses, when determining the camera angle, a weighted evaluation function whose evaluation value decreases as the camera angle in the vertical direction approaches 0 degrees or 90 degrees.
5. The layout data display system of claim 3,
the viewpoint position determining unit uses a weighted evaluation function in which the evaluation value is higher as the camera angle in the horizontal direction is closer to the positive direction of the drawing of the layout data when determining the camera angle.
6. The layout data display system of claim 3,
the layout data display system includes:
and a screen generating unit that generates a display screen by projectively converting the observation region of the layout data based on the determined viewpoint position.
7. The layout data display system of claim 6,
wherein the screen generating unit renders the corresponding object semi-transparent or non-displayed when an object interfering with the line segment connecting the observation area and the viewpoint position exists.
8. The layout data display system of claim 7,
the screen generating unit changes the transparency of the corresponding object in accordance with the number of times of interference between the object and the line segment connecting the observation area and the viewpoint position.
9. The layout data display system of claim 6,
the layout data display system includes:
a cut surface generating section for generating cut surface information representing a cut surface of the layout data so as to display the observation region of the layout data,
the screen generating unit performs projective transformation of the object located inside the region formed by the cut surfaces of the layout data into the observation region, based on the cut surface information.
10. The layout data display system according to claim 2 or 3,
wherein the layout data includes a congestion range, which is object data containing information on at least a congested range in the building model between the viewpoint position and the observation area, and
the visualization points include at least an end point of the congestion range.
11. The layout data display system of claim 10,
the layout data display system includes:
a congestion range acquisition unit that acquires the congestion range in the building model,
wherein the congestion range acquisition unit calculates a predicted number of waiting passengers from the difference between the transportation capacity of an elevator in the building model, calculated by elevator traffic calculation, and the predicted traffic demand of the building model, and estimates the congestion range based on the area occupied by the people corresponding to the predicted number of waiting passengers.
12. The layout data display system of claim 10,
the layout data display system includes:
a congestion range acquisition unit that acquires the congestion range in the building model,
the congestion range acquisition unit sets, as the congestion range, a range of the waiting queue obtained by the traffic simulation using the building model and a region in which the density of people exceeds a threshold.
13. The layout data display system of claim 10,
the layout data display system includes:
a congestion range acquisition unit that acquires the congestion range in the building model,
the congestion area acquisition unit acquires the congestion area by analyzing imaging data of a camera installed in an actual building corresponding to the building model.
14. A layout data display method performed by a layout data display system for displaying layout data of a building model,
the layout data display system includes:
a viewpoint position determination unit that determines a viewpoint position used when the layout data of the building model is displayed,
wherein the viewpoint position determination unit selects the viewpoint position from candidate coordinates based on the number of times a line segment connecting a point on the observation area set in the layout data to a candidate coordinate of the viewpoint position interferes with objects arranged in the layout data.
CN202210579786.1A 2021-08-18 2022-05-25 Layout data display system and layout data display method Pending CN115908691A (en)

Applications Claiming Priority (2)

Application Number: JP2021-133221, Priority Date: 2021-08-18
Application Number: JP2021133221A, Publication: JP2023027883A, Priority Date: 2021-08-18, Filing Date: 2021-08-18, Title: Layout data display system and layout data display method

Publications (1)

Publication Number: CN115908691A, Publication Date: 2023-04-04

Family ID: 85331789

Family Applications (1)

Application Number: CN202210579786.1A, Title: Layout data display system and layout data display method, Status: Pending

Country Status (2)

JP (1): JP2023027883A
CN (1): CN115908691A

Also Published As

Publication Number: JP2023027883A, Publication Date: 2023-03-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination