JP4975673B2 - Camera installation simulator program - Google Patents


Info

Publication number
JP4975673B2
Authority
JP
Japan
Prior art keywords
camera
top view
indicator
displaying
view image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2008086032A
Other languages
Japanese (ja)
Other versions
JP2009239821A (en)
Inventor
Yuzo Ogawa (小川 雄三)
Original Assignee
TOA Corporation (ティーオーエー株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TOA Corporation (ティーオーエー株式会社)
Priority to JP2008086032A
Publication of JP2009239821A
Application granted
Publication of JP4975673B2
Application status: Active
Anticipated expiration

Abstract

PROBLEM TO BE SOLVED: To simulate, before any camera is installed, which areas will become blind spots due to walls when cameras are installed in a location that has walls.

SOLUTION: Wall indicators representing walls are displayed at desired positions on a top view of a given area. Camera indicators representing the cameras are displayed on the same image. The shooting range of each camera from its installation position is determined. If the shooting range includes some or all of a wall indicator, the blind area 47a caused by the wall is determined and excluded, and a horizontal visual field indicator 18, which is a projection of the shooting range, is displayed on the top view.

COPYRIGHT: (C)2010, JPO&INPIT

Description

  The present invention relates to a program for simulating the shooting range of a camera, that is, how images will be captured when the camera is installed in a predetermined area.

  Conventionally, Patent Document 1 discloses a technique related to the above simulation. In this technique, a layout diagram of the monitored location, the installation position of the camera, the center position of a video frame indicating on the layout diagram the range of the video obtained by the camera, and the size of the video frame are given as data; the layout diagram is displayed on the display screen, and the camera position and the video frame are displayed on it.

JP 2006-148406 A

  The above technique displays, for a camera that has already been installed, a video frame indicating which range on the layout diagram is being captured, so it only shows the rough positional relationship between the layout drawing and the range captured by the camera. It is intended to make operation efficient after the camera is installed, and cannot be used to know in advance what the image will look like.

  That is, in the past it was impossible to know how the image would appear before the camera was installed. In particular, when a camera is installed inside a building, there are many shielding objects such as walls and partitions, and it was impossible to know in advance which areas would be hidden. For this reason, the position, angle, and angle of view of the camera had to be adjusted by trial and error at installation time, and in the worst case it could turn out that the desired shooting could not be performed from the chosen position. Moreover, for surveillance cameras the installation position is often high, so this work can involve great difficulty.

  Therefore, an object of the present invention is to provide a simulation program that displays, on a two-dimensional image and before the camera is actually installed, which range can be photographed when the camera is installed, and that displays any area that becomes a blind spot due to a shielding object as excluded from that range.

The camera installation simulator program of one aspect of the present invention simulates the arrangement of cameras in a predetermined area and the shooting range of each camera. The computer executing the program causes a display device to display a top view image representing part or all of the predetermined area, based on top view data of that area. The computer also arranges and displays a partition body indicator, representing a partition body, in a desired range of the top view image. A partition body is something that partitions a space, such as a wall or partition inside a building, or an outer wall that separates the building from the outside. Besides displaying a single partition body indicator, a plurality of partition body indicators can be displayed, either contiguously or separately. The computer also arranges and displays a camera indicator representing the camera on the top view image. The computer further allows arbitrary setting of: a conversion coefficient between the top view image and the actual size of the object it represents; the arrangement position of the camera indicator on the top view image; the height of the camera; its horizontal and vertical installation angles and angle of view; the position of the partition body indicator; and the height of the partition body. Based on this setting information and the top view data, the computer displays a visual field indicator corresponding to the shooting range of the camera on the top view image. When part or all of a partition body is included in the shooting range, the visual field indicator is displayed so that the blind spot range produced by the partition body is distinguished from the portion of the shooting range outside the blind spot range.
For example, the blind spot range may be excluded from the horizontal visual field indicator, or a blind spot range indicator showing the blind spot range may be displayed within the horizontal visual field indicator.
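The setting information enumerated above can be grouped into a simple structure. The following Python sketch is illustrative only (the patent describes the parameters, not an API; all names, types, and example values are assumptions):

```python
from dataclasses import dataclass

@dataclass
class SimulationSettings:
    # conversion coefficient between top-view pixels and actual size
    meters_per_pixel: float
    # camera indicator placement on the top view (u, v) and camera height
    camera_uv: tuple
    camera_height_m: float
    # horizontal and vertical installation angles, in degrees
    pan_deg: float
    tilt_deg: float
    # horizontal and vertical angles of view, in degrees
    hfov_deg: float
    vfov_deg: float

@dataclass
class PartitionSetting:
    # both ends of the partition body indicator on the top view
    p1_uv: tuple
    p2_uv: tuple
    # height of the partition body
    height_m: float

# example values, chosen only for illustration
s = SimulationSettings(0.05, (120, 80), 2.7, 30.0, 20.0, 60.0, 45.0)
w = PartitionSetting((100, 60), (140, 60), 2.0)
```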

In the process of displaying the visual field indicator, the blind spot range caused by the camera's height, horizontal and vertical installation angles, and angle of view can further be displayed so as to be distinguished from the portion outside the blind spot range. Alternatively, the computer can further display an elevation image in addition to the top view image, and superimpose a camera indicator on the elevation image according to the arbitrarily set camera height. In this case, based on the setting information, a visual field indicator corresponding to the shooting range of the camera is also displayed on the elevation image, with the blind spot range caused by the camera's height, horizontal and vertical installation angles, and angle of view distinguished from the portion outside the blind spot range.

The computer can further set the conversion coefficient between the top view image and the actual size of the corresponding object by displaying a scale indicator on the top view image, overlapping it with a part of the image whose real length is known, and setting the length that the scale indicator is to represent. Alternatively, the computer may further display a plurality of cameras in list form near the top view image and the elevation image. In this case, in response to an operation of dragging and dropping a camera from the list onto the top view image, a camera indicator representing that camera is arranged and displayed on the top view image, and a camera indicator is also superimposed on the elevation image according to the arbitrarily set camera height.

  The blind spot range produced by a partition body can be defined as the region bounded by the line segments that run from the camera position in the area, pass through the upper end of the partition body, and meet the surface corresponding to the top view image, together with the lower end of the partition body. This region may lie entirely within the shooting range, or only partly within it.
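The geometry just defined can be sketched directly: a line from the camera through the wall's upper end is extended until it touches the floor, and the region between that contact line and the wall's lower end is the blind spot. The following Python sketch assumes the camera is above the wall top; function and argument names are illustrative, not from the patent:

```python
def wall_blind_spot(cam, wall_p1, wall_p2, wall_h):
    """Blind-spot region cast on the floor by a wall, as a quadrilateral.

    cam = (cx, cy, ch): camera position with height ch.  The wall is the
    vertical rectangle standing on the floor segment wall_p1 -> wall_p2
    with height wall_h.  Requires ch > wall_h; otherwise the blind spot
    extends without bound and None is returned.
    """
    cx, cy, ch = cam
    if ch <= wall_h:
        return None  # camera not above the wall top: shadow never ends
    # ray parameter where a ray from the camera through the wall's
    # upper end reaches the floor plane Zw = 0
    t = ch / (ch - wall_h)
    shadow = [(cx + t * (x - cx), cy + t * (y - cy))
              for x, y in (wall_p1, wall_p2)]
    # floor segment under the wall plus the two projected contact points
    return [wall_p1, wall_p2, shadow[1], shadow[0]]
```

For example, a camera at height 4 m looking past a 2 m wall projects each wall endpoint twice as far from the point beneath the camera as the wall itself.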

  When part or all of a partition body is included in the shooting range, the computer executing the camera installation simulator program can also generate, based on the setting information and the top view data, a three-dimensional graphic that is a virtual captured image of the camera including the partition body, and display it on the display device.

  With this configuration, the range that becomes a blind spot due to the partition body can be confirmed on the top view, and the virtual image makes it possible to know even more realistically which range the partition body hides.

  In addition, it is possible to arrange and display a shooting target indicator representing a shooting target on the top view image, and to set the light transmittance of the partition body. In this case, when the shooting range includes part or all of both the shooting target and the partition body, and the partition body overlaps in front of the shooting target in the three-dimensional graphic, the graphic can be generated and displayed so that the shooting target is visible through the partition body according to the set light transmittance.

  With this configuration, the blind spot range produced by a partition body can be known on the top view, and when the partition body is made of a light-transmitting material, it is also possible to simulate how the shooting target appears through the partition body according to its light transmittance.

  As described above, according to the present invention, before a camera is actually installed, the user can easily see on the top view image what range can be photographed and in what state, and in particular, when an object such as a wall or partition is present, which range becomes a blind spot that the camera does not capture. The user can also easily see the blind spot range in the virtual captured video. Moreover, as long as two-dimensional data, which is relatively easy to obtain, is available, only the relatively simple work of setting the placement positions and heights of the partition bodies is needed to learn which ranges the camera does or does not capture.

  A camera installation simulation program according to an embodiment of the present invention is executed by a computer, for example a personal computer, and displays an image such as that shown in FIG. 1 on the screen of the computer's display unit, for example a liquid crystal display. Although not shown, the personal computer includes a CPU as calculation means, a memory and hard disk as storage means, a keyboard as instruction means, and a mouse or tablet as pointing means. The program is stored on a recording medium such as a CD-ROM or DVD-ROM and transferred from the medium to the memory or hard disk; it can also be downloaded from a server via a network.

  A window 2 serving as a base is displayed on the screen of the liquid crystal display. A rectangular top view window 4 is displayed at the upper left of the window 2. The top view window 4 shows a top view 6 representing a predetermined area, for example a region where a camera is to be installed, such as the inside or outside of a building. The image data for displaying the top view 6 is two-dimensional image data, read in by selecting a file-reading tab (not shown) displayed on the base window 2 with the mouse or the like. Various image formats are supported: for example, image data of a floor of the building or of an architectural drawing received from the building's manager can be imported, or a top view printed on paper can be scanned and captured as image data. Bitmap or JPEG format, for example, can be used. The top view window 4 displays only part of the top view 6; to display other parts, the user operates the slider 4a at the bottom of the window or drags the pointer over the window. Further, by setting a desired magnification in a magnification change window (not shown), part of the top view is displayed at that magnification; when the magnification is set to 1, the entire top view 6 is displayed in the top view window 4.

  A camera selection window 8 is displayed diagonally below the top view window 4. It lists, in table form, the model numbers of a plurality of cameras, for example video cameras, the place where each camera is used (indoor or outdoor), and the minimum and maximum angles of view. The type of camera can be identified from its model number and place of use.

  When a model number corresponding to a desired camera is selected from the list with the mouse or the like and dragged and dropped onto the top view 6, a camera indicator, a symbol representing the camera, for example a camera top view indicator 10, is displayed superimposed on the top view 6. At the same time, another camera indicator, for example a camera elevation indicator 16, is displayed in the elevation view 14 in the elevation window 12 arranged below the top view window 4.

  Simultaneously with this display, a horizontal visual field indicator 18 extending from the camera top view indicator 10 is displayed on the top view 6, and a vertical visual field indicator 20 extending from the camera elevation indicator 16 is displayed on the elevation view 14. The horizontal visual field indicator 18 represents the horizontal field of view obtained by projecting the camera's shooting range onto a horizontal plane, and the vertical visual field indicator 20 represents the vertical field of view obtained by projecting the shooting range onto the vertical plane through which the optical axis passes.

  A scale indicator, for example a scaler 22, is displayed in the top view window 4, for example as a solid line. The scaler 22 can be moved and stretched by mouse operation. The user overlaps the scaler 22 with a portion of known length shown in the top view window 4, for example the length of the building, and sets how many meters the scaler 22 represents. That is, if the known length is, say, 10 m, setting the scaler 22 superimposed on that portion to 10 m establishes the scale, which is the conversion coefficient between the top view image and actual size. Likewise, how many meters the height scale indicator 26, drawn as a broken line above the reference surface drawn in the elevation view 14 (for example the ground indicator 24 representing the ground surface), represents can be set in advance by the user with the mouse or the like. Further, in the elevation view 14, a camera position indicator 30 is displayed vertically from the camera elevation indicator 16 down to the ground indicator 24, together with scale indicators 32, 32, ... representing the distance from the camera position indicator 30.
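The calibration step above amounts to a single division. The following Python sketch shows the idea; the patent describes the operation, not an API, so the function names and the 200-pixel figure are illustrative assumptions:

```python
def set_scale(scaler_px, known_length_m):
    """Conversion coefficient (meters per top-view pixel) obtained by
    overlapping the scaler with a feature of known real length."""
    return known_length_m / scaler_px

def px_to_m(length_px, coeff):
    """Convert a distance measured on the top view to actual meters."""
    return length_px * coeff

# Example from the text: the scaler laid over a 10 m feature.  If the
# scaler spans 200 pixels on screen, each pixel represents 0.05 m.
coeff = set_scale(200.0, 10.0)
```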

  When the camera top view indicator 10 is first placed on the top view 6, the horizontal visual field indicator 18 and the vertical visual field indicator 20 are displayed as if the camera were arranged at a predetermined height, depression angle, and direction, at its maximum angle of view.

  The horizontal visual field indicator 18 is displayed as a trapezoid, as described later, and the area inside the trapezoid represents the range that can actually be photographed. The vertical visual field indicator 20 shows the camera's shooting range projected onto the plane perpendicular to the ground through the optical axis indicator denoted by reference numeral 34 in the elevation view 14, and consists of a near indicator 20a close to the ground indicator 24 and a far indicator 20b far from it. The range enclosed by the near indicator 20a, the far indicator 20b, and the ground indicator 24 represents the range that can actually be photographed.

Here, consider the camera's imaging range in three-dimensional space. The top view is two-dimensional data, and two-dimensional data can be regarded as a plane in a three-dimensional space. Using this idea, a three-dimensional space spanned by three orthogonal axes Xw, Yw, and Zw is constructed, as shown in FIG. 2, so that the top view becomes the plane of zero height. In this space, the lower-left corner of the top view shown in FIG. 1 is the origin O, the rightward direction of the top view is +Xw, the upward direction is +Yw, and the direction from behind the top view toward the viewer (out of the page) is +Zw. Within the top view itself, the rightward direction is +u and the upward direction is +v. The scale of the three-dimensional space is the same as that of the top view.

  As shown in FIG. 2, when the camera is a CCD camera facing the Xw-Yw plane, its imaging range 46 in the three-dimensional space is a quadrangular pyramid with its base on the Xw-Yw plane. This is because the area covered by the camera lens is conical, but the CCD cuts a quadrangular cross-section out of the cone. In this simulator, the horizontal field of view displayed on the top view is the shape obtained by projecting the imaging range 46 of FIG. 2 onto the Xw-Yw plane; the horizontal visual field indicator 18 is obtained by drawing the trapezoidal base where the pyramid meets the ground surface, together with line segments connecting the trapezoid to the camera.
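The trapezoidal base can be sketched numerically. The Python sketch below uses a simplified model, treating the lower and upper frustum edges as rays at tilt ± vfov/2 and the side edges as pan offsets of ± hfov/2 (an approximation of the exact corner rays of the pyramid, which would need a full rotation-matrix treatment); all names are illustrative, not from the patent:

```python
import math

def footprint_trapezoid(cam_h, tilt_deg, hfov_deg, vfov_deg):
    """Trapezoidal projection of the quadrangular-pyramid shooting range
    onto the ground plane, for a camera at height cam_h looking along
    +Xw with depression angle tilt_deg.

    Requires tilt_deg - vfov_deg/2 > 0 so the whole base lies on the
    ground (the upper frustum edge still points downward).
    """
    tilt = math.radians(tilt_deg)
    hv = math.radians(vfov_deg) / 2.0
    hh = math.radians(hfov_deg) / 2.0
    d_near = cam_h / math.tan(tilt + hv)  # lower edge reaches the ground
    d_far = cam_h / math.tan(tilt - hv)   # upper edge reaches the ground
    w_near = d_near * math.tan(hh)        # half-width of the near edge
    w_far = d_far * math.tan(hh)          # half-width of the far edge
    # corners in (Xw, Yw), with the camera above the origin
    return [(d_near, -w_near), (d_near, w_near),
            (d_far, w_far), (d_far, -w_far)]
```

The far edge of the trapezoid is always wider than the near edge, which is why the projection is a trapezoid rather than a rectangle.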

  Further, as shown in FIG. 3, the vertical field of view is obtained by projecting the imaging range 46 onto the plane S that contains the camera optical axis 48 in the three-dimensional space of FIG. 2 and is perpendicular to the Xw-Yw plane. That is, the side surface 50 of the shooting range 46 close to the camera intersects the plane S to form a line segment 52 corresponding to the near indicator 20a, and the side surface 54 far from the camera intersects the plane S to form a line segment 56 corresponding to the far indicator 20b; the triangle formed by these line segments 52 and 56 and the ground corresponds to the vertical visual field indicator 20.

  The region between the side surface 50 and the camera is a blind spot that cannot be photographed. Projected onto the Xw-Yw plane and shown in the top view 6, it is the part drawn with a broken line between the short edge of the horizontal visual field indicator 18 on the camera top view indicator 10 side and the camera top view indicator 10. Projected onto the plane S and shown in the elevation view 14, it is the hatched portion between the near indicator 20a and the camera position indicator 30.

The camera top view indicator 10 can be moved by clicking it with the mouse or the like and dragging it in any direction, whereupon the positions of the camera top view indicator 10 and the horizontal visual field indicator 18 are automatically recalculated and redisplayed. The horizontal installation angle of the camera can be changed by clicking and rotating the horizontal visual field indicator 18, whereupon the directions of the camera top view indicator 10 and the horizontal visual field indicator 18 are likewise automatically recalculated and redisplayed.

The height of the camera can be changed by clicking the camera elevation indicator 16 with the mouse or the like and dragging it vertically; the display positions of the horizontal visual field indicator 18 and the vertical visual field indicator 20 are then automatically recalculated and redisplayed. The vertical installation angle of the camera can be changed by rotating the vertical visual field indicator 20 while clicking with the mouse or the like; the display range of the horizontal visual field indicator 18 and the direction of the vertical visual field indicator 20 are likewise automatically recalculated and redisplayed.

The above description assumed the camera's horizontal and vertical angles of view were at their maximum. However, by clicking the edge of the horizontal visual field indicator 18 or the vertical visual field indicator 20 with the mouse or the like and moving it inward, the horizontal and vertical angles of view can be continuously reduced down to the minimum angle of view. Conversely, from the state where the angles of view are at their minimum, they can be continuously increased back to the maximum by clicking the indicator and moving it outward. In either case, the display ranges of the horizontal visual field indicator 18 and the vertical visual field indicator 20 are automatically recalculated and redisplayed according to the changed angle of view.
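The drag behavior above is a continuous change clamped to the minimum and maximum angles of view listed for the camera model. A minimal Python sketch of that clamping (the function name and degree units are illustrative assumptions):

```python
def drag_angle_of_view(current_deg, delta_deg, min_deg, max_deg):
    """Continuously change the angle of view by dragging the edge of a
    visual field indicator, clamped between the minimum and maximum
    angles of view of the selected camera model."""
    return max(min_deg, min(max_deg, current_deg + delta_deg))
```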

  As shown in FIG. 1, shooting target icons, for example a person icon 36 and a car icon 38, are displayed below the camera selection window 8. When the person icon 36 or the car icon 38 is clicked and dragged and dropped onto the top view 6, a shooting target indicator, for example a person top view indicator 40 or a car top view indicator 42, is displayed. The position and direction of these indicators can be changed by dragging. When their positions lie within the horizontal visual field indicator 18, a person elevation indicator 44 or a car elevation indicator (not shown) is displayed in the elevation view 14. The user can also input the height of the person represented by the person top view indicator 40 and the length of the car represented by the car top view indicator 42 on the respective indicator. The width and depth of the person are preset in proportion to the input height; similarly, the width and height of the car are preset in proportion to its length. Thus, by setting the person's height and the car's length as described above, all of the three-dimensional data for the person and the car is obtained.

A wall input tab 45 is displayed at the top of the top view window 4. By clicking the wall input tab 45 with the mouse or the like and then clicking two positions indicating both ends of a desired region on the top view 6, for example a region where a partition body such as a wall actually stands, a partition body indicator, for example a wall display line 47, is displayed as a one-dot chain line on the top view 6. Any number of wall display lines 47 can be displayed. These clicks determine the positions of both ends of the wall display line 47. When a wall display line 47 is displayed, the height of the wall it represents is set to a default value. The height can be set to any value by right-clicking the wall display line 47, selecting the height setting item from the menu that appears, and entering the desired height. Thus, by setting the wall height as described above, all of the three-dimensional data for the partition body is prepared. The position of the wall display line 47 can also be changed. In addition, a light transmittance setting item displayed by right-clicking can be selected with the mouse, and a desired light transmittance can be set; this handles cases where the wall is made of, for example, transparent or translucent glass.

  When a wall display line 47 is displayed in this way and lies within the horizontal visual field indicator 18, the range inside the horizontal visual field indicator 18 that would not be captured by a camera installed at the position represented by the camera top view indicator 10 and the camera elevation indicator 16, that is, the wall blind spot range 47a, is excluded, and the horizontal visual field indicator 18 is displayed without it. As a result, when there is a wall, the user can immediately see where the blind spot caused by the wall lies.

  In the 3D video display window 43 above the camera selection window 8, a virtual 3D video is displayed, representing the virtual image obtained by shooting, with a camera installed at the position represented by the camera top view indicator 10 and the camera elevation indicator 16, the shooting range determined by the horizontal visual field indicator 18 and the vertical visual field indicator 20. This virtual 3D video shows the ground viewed from the camera's three-dimensional position and direction, determined from the position and direction of the camera top view indicator 10 and the position of the camera elevation indicator 16; an image obtained by coordinate conversion of the top view is displayed over the range defined by the horizontal visual field indicator 18 and the vertical visual field indicator 20. If the person, car, and wall corresponding to the person top view indicator 40, the car top view indicator 42, and the wall display line 47 are within the shooting range, they are also displayed.

  Since three-dimensional data has been prepared for the person, car, and wall corresponding to the person top view indicator 40, the car top view indicator 42, and the wall display line 47 as described above, when all or part of these indicators lies within the horizontal visual field indicator 18, the corresponding images are displayed three-dimensionally in the 3D video display window 43. Furthermore, when all or part of the person top view indicator 40 or the car top view indicator 42 is on the opposite side of the wall display line 47 from the camera top view indicator 10, that is, when all or part of the person or car is blocked by the wall represented by the wall display line 47, the 3D video shows them in that blocked state.

  When the light transmittance of the wall display line 47 is set to 0, the person and car are displayed completely hidden, as shown in FIG. 4(a). When the light transmittance is 1, as shown in FIG. 4(b), the virtual 3D images corresponding to the person top view indicator 40 and the car top view indicator 42 are displayed without being hidden at all. When the light transmittance is between 0 and 1, as shown in FIG. 4(c), they are displayed in a state corresponding to that transmittance. The video in the 3D video display window 43 is displayed at the size seen from the camera.
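The transmittance behavior can be sketched as per-pixel blending. The patent fixes only the endpoints (0: fully hidden, 1: fully visible); linear interpolation in between is an assumption of this Python sketch, as are all names:

```python
def see_through(target_rgb, wall_rgb, transmittance):
    """Pixel color where the wall overlaps in front of the shooting
    target.  transmittance 0 shows only the wall, 1 shows only the
    target; values in between blend linearly (assumed behavior)."""
    t = max(0.0, min(1.0, transmittance))
    return tuple(t * a + (1.0 - t) * b
                 for a, b in zip(target_rgb, wall_rgb))
```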

Therefore, when the camera is to be used for monitoring, for example, the user places the person top view indicator 40 and the car top view indicator 42 at the locations to be monitored, displays wall display lines 47 at the positions where walls actually stand, sets the wall heights and light transmittances, changes the camera's position and direction by operating the camera top view indicator 10 and the horizontal visual field indicator 18, and adjusts the camera's height and vertical installation angle by operating the camera elevation indicator 16 and the vertical visual field indicator 20; the virtual 3D video in the 3D video display window 43 then shows how the person or car will be photographed. In particular, if there is a wall, it is displayed at its actual equivalent height, and if all or part of the person or car is hidden by the wall, it is shown hidden. When the wall is transparent or translucent, how a person or car behind the wall appears through it is also shown. Moreover, by dragging the person top view indicator 40 or the car top view indicator 42, the user can see from the virtual 3D video how a moving person or car would be displayed. In this way, based on a top view of a building or the like, which can be obtained relatively easily, it is possible to simulate how the whole or part of a shooting target is blocked when partition bodies such as walls and partitions surround it.

When the camera's position or direction is changed, or its height or vertical installation angle adjusted, the horizontal visual field indicator 18 in the top view 6 changes accordingly, taking the wall blind spot range 47a into account. Therefore, to reduce the wall blind spot range 47a, the user can easily determine, while watching the horizontal visual field indicator 18, at what position and height the camera should be placed and in which direction it should be pointed, by changing the position and direction of the camera top view indicator 10 and the height and vertical installation angle of the camera elevation indicator 16.

  The above description covers the case where only one camera upper surface indicator 10 is displayed, but a plurality of camera upper surface indicators 10 can be displayed on the top view 6 simultaneously, together with the horizontal field indicators 18 of the respective cameras. When one of the plural camera upper surface indicators 10 is selected with a mouse or the like, only the vertical field indicator 20 corresponding to the horizontal field indicator 18 clicked in the top view is shown in the elevation view 14. Alternatively, a plurality of elevation views 14 may be prepared, and the camera elevation indicator 16 and the vertical field indicator 20 of each camera displayed in a separate elevation view.

  In the following, the generation algorithm of the virtual 3D video and the display algorithms of the horizontal field indicator 18 and the vertical field indicator 20 are described first for the state in which no wall display line 47 is arranged. Then, for the case where a wall display line 47 is included in the horizontal field indicator 18, the display algorithm of the blind spot range produced by the corresponding wall is described.

  First, the virtual 3D video generation algorithm will be described. Although the top view is two-dimensional data, it can be regarded as a plane in a three-dimensional space. Using this idea, as shown in FIG. 5, a three-dimensional space spanned by three orthogonal axes Xw, Yw, and Zw is constructed so that the top view is the zero-height plane of that space: the lower left corner of the top view shown in FIG. 1 is the origin O of the three-dimensional space, the rightward direction of the top view is +Xw, the upward direction is +Yw, and the direction from the back of the sheet of FIG. 1 toward the front is +Zw. The scale of the three-dimensional space is configured to be the same as that of the top view.

  Here, it is assumed that the installation information of the camera is given as follows by the user's operation on the top view and the elevation view as described above.

Camera information                                                   Value
Camera coordinates on the top view (uc, vc)                          Set on the top view
Camera height h (mm)                                                 Set on the elevation view
Camera horizontal installation angle θ (rad)                         Set on the top view
Camera vertical installation angle ρ (rad)                           Set on the elevation view
Conversion coefficient between top view and actual size k (pix/mm)   Set with the scaler
Camera horizontal field angle θh (rad)                               Set on the top view
Camera vertical field angle θv (rad)                                 Set on the elevation view

In the 3D video creation program, when OpenGL, for example, is used as the 3D graphics library, an image of the three-dimensional space as seen from the camera can be drawn once the camera's three-dimensional arrangement position, its gazing point O (the point where the tip of the optical axis 48 described above meets the object to be photographed), and its angle of view are set. Of the camera position (xc, yc, zc) in the three-dimensional space under the above conditions, xc and yc are obtained from the camera coordinates uc, vc on the top view 6 as

xc = uc, yc = vc.

For zc, the height h (mm) set on the elevation view 14 is used. However, since the scale of the three-dimensional space is configured to be the same as that of the top view as described above, zc must be expressed in units of pixels. Using the conversion coefficient k (pix/mm) obtained from the scaler 22 set on the top view, zc is

zc = h · k.

As a result, as shown in FIG. 5, a camera position that maintains the positional relationship with the top view can be calculated in the three-dimensional space. Here the scale of the three-dimensional space is in units of pixels, but by using the setting of the scaler 22 it can also be expressed in units of millimeters.
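For illustration only (not part of the patent text), the conversion from the top-view settings to a three-dimensional camera position, zc = h · k, can be sketched as follows; the function name and the sample values are hypothetical:

```python
def camera_position_3d(uc, vc, h_mm, k):
    """Camera position in the three-dimensional space of FIG. 5.

    uc, vc : camera coordinates on the top view 6 (pixels)
    h_mm   : installation height set on the elevation view 14 (mm)
    k      : conversion coefficient from the scaler 22 (pix/mm)
    """
    xc, yc = uc, vc      # the top view is the zero-height plane, same scale
    zc = h_mm * k        # zc = h * k, so the height is in pixel units too
    return xc, yc, zc

# e.g. a camera at (320, 240) on the top view, 3 m high, k = 0.1 pix/mm
print(camera_position_3d(320, 240, 3000, 0.1))  # (320, 240, 300.0)
```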

The gazing point O of the camera in the three-dimensional space is calculated from the camera position (xc, yc, zc), the horizontal installation angle θ, and the vertical installation angle ρ. Assuming that the gazing point O lies on the Xw-Yw plane, the distance r from the point (xc, yc, 0) to O is

r = zc / tan ρ,

and of the coordinates (xo, yo, 0) of the gazing point O, xo and yo are obtained as

xo = xc + r · cos θ
yo = yc + r · sin θ.
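As an illustrative sketch (the function name is hypothetical, not from the patent), the gazing point calculation above might be coded as:

```python
import math

def gazing_point(xc, yc, zc, theta, rho):
    """Gazing point O of the camera on the Xw-Yw plane (height 0).

    theta : horizontal installation angle (rad)
    rho   : vertical installation angle (rad), assumed 0 < rho < pi/2
    """
    r = zc / math.tan(rho)          # distance from (xc, yc, 0) to O
    xo = xc + r * math.cos(theta)   # xo = xc + r * cos(theta)
    yo = yc + r * math.sin(theta)   # yo = yc + r * sin(theta)
    return xo, yo, 0.0
```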

Furthermore, the horizontal field angle θh and the vertical field angle θv of the camera are also set. For an ordinary camera, the horizontal and vertical field angles are related so that fixing one determines the other; it is therefore sufficient to set only one of them.
By using the camera position, the gazing point, and the angle of view calculated in this three-dimensional space, a camera simulation image that accurately reflects the installation conditions specified by the user can be generated.

Now suppose the user sets a wall with the wall display line 47 on the top view 6. Since the wall can be regarded as a plane in the three-dimensional space, it can be displayed there by giving the three-dimensional coordinates of its four corners. If the coordinates of the two ends of the wall display line 47 on the top view 6 are (uw1, vw1) and (uw2, vw2), and the wall is assumed to be in contact with the ground, its three-dimensional coordinates on the ground plane, (xw1, yw1, 0) and (xw2, yw2, 0), are calculated as

(xw1, yw1, 0) = (uw1, vw1, 0)
(xw2, yw2, 0) = (uw2, vw2, 0).
Furthermore, the wall can be assumed to stand vertically from the ground. If the height of the wall in the three-dimensional space is zw, the remaining two corners of the wall are (xw1, yw1, zw) and (xw2, yw2, zw), as shown in FIG. 14. To calculate zw, the wall height hw (mm) set by the user is used. However, since the scale of the three-dimensional space is configured to be the same as that of the top view as described above, zw must be expressed in units of pixels. Using the conversion coefficient k (pix/mm) obtained from the scaler 22 set on the top view, zw is

zw = hw · k.

The three-dimensional coordinates of all four corners of the wall are thus obtained, and the 3D model of the wall can be displayed in the camera simulation image described above.
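A sketch of the wall-corner calculation (hypothetical function name; zw = hw · k as above):

```python
def wall_corners_3d(uw1, vw1, uw2, vw2, hw_mm, k):
    """Four corners of the rectangular wall in the three-dimensional space.

    (uw1, vw1), (uw2, vw2) : ends of the wall display line 47 on the top view
    hw_mm : wall height set by the user (mm); k : scaler coefficient (pix/mm)
    """
    zw = hw_mm * k  # zw = hw * k, wall height in pixel units
    return [(uw1, vw1, 0.0), (uw2, vw2, 0.0),   # ground corners Wb1, Wb2
            (uw1, vw1, zw), (uw2, vw2, zw)]     # top corners Wt1, Wt2
```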

  Next, the display of the vertical field indicator 20 will be described. As shown in FIG. 6, in the plane S described above, let A be the intersection of the line segment 52 with the Xw-Yw plane, and let P be the foot of the perpendicular 60 dropped from the camera position to the Xw-Yw plane. If the distance rg1 between A and P, the distance rg2 between P and the intersection B of the line segment 56 with the ground, and the lengths rs1 and rs2 of the line segments 52 and 56 are found, the near indicator 20a and the far indicator 20b corresponding to the line segments 52 and 56 can be displayed in the elevation view. Since the vertical field angle θv is bisected by the optical axis 48, the angle CAP is ρ + θv/2 and the angle CBP is ρ − θv/2. Therefore, rg1 and rg2 are given by equations (1) and (2), and rs1 and rs2 by equations (3) and (4).

rg1 = zc / tan(ρ + (θv/2))   (1)
rg2 = zc / tan(ρ − (θv/2))   (2)
rs1 = zc / sin(ρ + (θv/2))   (3)
rs2 = zc / sin(ρ − (θv/2))   (4)

Accordingly, in the elevation view 14, the near indicator 20a and the far indicator 20b can be displayed by drawing straight lines of lengths corresponding to rs1 and rs2 from the position on the camera position indicator 30 corresponding to the height zc above the ground down to the positions rg1 and rg2 along the ground indicator 24. Each time the camera installation height zc, the vertical installation angle ρ, or the angle of view is changed, rg1 and rg2 are recalculated from equations (1) and (2), and the indicators 20a and 20b are redrawn.

  However, depending on the vertical installation angle and the angle of view, the far indicator 20b may point above the horizontal as shown in FIG. 7, that is, θv/2 > ρ may hold, in which case rg2 cannot be calculated. In this case, the far indicator can still be displayed if the distance rz2 is found between the intersection D, with the line segment 56, of a vertical line erected at a predetermined fixed distance rge from the camera position indicator 30, and the intersection E of that vertical line with the height reference line 58. Since the angle DCE is θv/2 − ρ, rz2 is given by equation (5).

  rz2 = rge · tan(θv/2 − ρ)   (5)

  Accordingly, in the elevation view 14, the far indicator 20b can be displayed by drawing a straight line from the position corresponding to zc on the camera position indicator 30 to the point at height zc + rz2 on the vertical line at the distance corresponding to rge from the camera position indicator 30.
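The near/far distances of equations (1), (2), and (5) can be sketched as follows (hypothetical function; the above-horizontal case uses the height reference line at distance rge, with angle DCE taken as θv/2 − ρ per the text):

```python
import math

def vertical_field_extents(zc, rho, theta_v, rge=1000.0):
    """Distances used to draw the near indicator 20a and far indicator 20b.

    Returns (rg1, rg2, rz2): rg2 is None when the far edge of the field
    points at or above the horizontal (theta_v/2 >= rho); rz2 is then the
    height of the far indicator on the reference line at distance rge.
    """
    half = theta_v / 2.0
    rg1 = zc / math.tan(rho + half)           # equation (1)
    if half < rho:
        rg2 = zc / math.tan(rho - half)       # equation (2)
        return rg1, rg2, None
    rz2 = rge * math.tan(half - rho)          # equation (5)
    return rg1, None, rz2
```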

  Next, the display of the horizontal field indicator 18 will be described. In FIG. 3, the intersection A described above is the midpoint of the near, camera-side edge, with ends Ar and Al, of the trapezoid obtained by projecting the camera field of view 46 onto the Xw-Yw plane. Similarly, the intersection B is the midpoint of the far edge, with ends Br and Bl. To draw the horizontal field indicator 18, the coordinates of Ar, Al, Br, and Bl must be calculated. For this, the distance rw1 from A to one end of the near edge, the distance rw2 from B to one end of the far edge, and the rg1 and rg2 described above are required; rg1 and rg2 are calculated by equations (1) and (2).

  Next, rw1 and rw2 are calculated; for this, the angles ArCA and BrCB are required. FIG. 8 shows the relationship between the camera lens principal point O and the CCD, where f is the focal length, W is the horizontal size of the CCD, and θh is the horizontal field angle, that is, the angle subtended at the principal point O by the two ends of the horizontal line segment of length W passing through the center of the CCD. At the upper end of the CCD in FIG. 9(a), the horizontal size is still W, but the distance ft from the lens principal point O to the midpoint of the upper edge is longer than the focal length f, so the horizontal viewing angle θht there is smaller than the horizontal field angle θh. Similarly, in FIG. 9(b), the horizontal size at the lower end of the CCD is still W, but the distance fb from the lens principal point O to the midpoint of the lower edge is longer than f, so the horizontal viewing angle θhb at the lower end is smaller than θh. These viewing angles θht and θhb can be calculated once ft and fb are obtained.

  Therefore, a vertical section is considered as shown in FIG. 10. If the optical axis 48 passes through the center of the CCD, f is obtained from equation (6).

  f = ft · cos (θv / 2) (6)

  Next, the relationships of equations (7) and (8) are obtained from FIG. 8 and FIG. 9(a).

W / f = 2 tan (θh / 2) (7)
W / ft = 2 tan (θht / 2) (8)

  Here, by substituting equation (6) into equation (7), the relationship of (9) is obtained.

  W / ft = 2 tan (θh / 2) cos (θv / 2) (9)

  By substituting equation (8) into equation (9), the relationship of equation (10) is obtained.

  tan (θht / 2) = tan (θh / 2) cos (θv / 2) (10)

  Equation (11) is obtained from Equation (10).

θht = 2 tan⁻¹{tan(θh/2) cos(θv/2)}   (11)

  Here, since ft = fb, it follows that θht = θhb. In FIG. 3, the desired rw1 and rw2 can then be calculated from equation (10) as equations (12) and (13).

rw1 = rs1 · tan (θhb / 2) = rs1 · tan (θh / 2) cos (θv / 2) (12)
rw2 = rs2 · tan (θht / 2) = rs2 · tan (θh / 2) cos (θv / 2) (13)

  Since rg1, rg2, rw1, and rw2 have been calculated in this manner, the coordinates of the points Ar, Al, Br, and Bl can be calculated.
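The half-width calculation of equations (3), (4), (12), and (13) can be sketched as follows (hypothetical function name, illustrative only):

```python
import math

def trapezoid_half_widths(zc, rho, theta_h, theta_v):
    """Half-widths rw1, rw2 of the projected field-of-view trapezoid.

    Implements equations (3), (4), (12), and (13); assumes rho > theta_v/2
    so that the whole field of view reaches the ground.
    """
    half_v = theta_v / 2.0
    rs1 = zc / math.sin(rho + half_v)                   # equation (3)
    rs2 = zc / math.sin(rho - half_v)                   # equation (4)
    factor = math.tan(theta_h / 2.0) * math.cos(half_v) # tan(θh/2)·cos(θv/2)
    return rs1 * factor, rs2 * factor                   # equations (12), (13)
```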

  The above description applies when, as shown in FIG. 11, the horizontal installation angle of the camera upper surface indicator 10 on the top view 6 is 0 [rad]. Here, the horizontal installation angle is an angle on the Xw-Yw plane, with rotation from the Xw axis toward the Yw axis taken as positive. If Cm is the point on the top view 6 corresponding to the camera position, the coordinates of the points Amr, Aml, Bmr, and Bml bounding the horizontal field indicator 18 are obtained from equations (14) to (17).

Aml = Cm + k · (rg1, −rw1)   (14)
Amr = Cm + k · (rg1, rw1)   (15)
Bml = Cm + k · (rg2, −rw2)   (16)
Bmr = Cm + k · (rg2, rw2)   (17)

Here, the points Aml, Amr, Bmr, Bml, and Cm are two-dimensional coordinates, treated as vectors in equations (14) to (17); k is the conversion coefficient between the top view and the actual size described above, a scalar quantity.

  Equations (14) to (17) apply when the horizontal installation angle of the camera is 0 [rad]. When, as shown in FIG. 12, the horizontal installation angle of the camera is θ [rad] as described above, the rotation matrix R of equation (18) is used.

R = | cos θ  −sin θ |
    | sin θ   cos θ |   (18)

  Multiplying the second term of equations (14) to (17) by the two-dimensional rotation matrix of the horizontal installation angle θ given by equation (18) yields the coordinates of the points Aml, Amr, Bml, and Bmr as equations (19) to (22).

Aml = Cm + k · R (rg1, −rw1) (19)
Amr = Cm + k · R (rg1, rw1) (20)
Bml = Cm + k · R (rg2, −rw2) (21)
Bmr = Cm + k · R (rg2, rw2) (22)

  Using these coordinates, the horizontal field indicator 18 can be drawn on the top view 6. Whenever the horizontal installation angle θ of the camera upper surface indicator 10 is changed on the top view 6, new coordinates of Aml, Amr, Bml, and Bmr are calculated from equations (19) to (22) with the θ at that time, and the horizontal field indicator 18 is redrawn.
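Equations (19) to (22) can be sketched as follows (hypothetical function name; cm is the camera point Cm as a pair of top-view coordinates; for θ = 0 this reduces to equations (14) to (17)):

```python
import math

def horizontal_field_corners(cm, k, theta, rg1, rg2, rw1, rw2):
    """Corners Aml, Amr, Bml, Bmr of the horizontal field indicator 18,
    per equations (19)-(22)."""
    c, s = math.cos(theta), math.sin(theta)

    def place(x, y):
        # Cm + k * R(theta) * (x, y), with R from equation (18)
        return (cm[0] + k * (c * x - s * y),
                cm[1] + k * (s * x + c * y))

    return place(rg1, -rw1), place(rg1, rw1), place(rg2, -rw2), place(rg2, rw2)
```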

  When the vertical installation angle ρ of the camera is small, rg2 becomes large and the horizontal field indicator 18 may not fit entirely within the top view 6. It may also be desirable to limit the depth of the horizontal field indicator 18 even when the vertical installation angle is not small. In these cases, drawing can be performed with the depth of the field of view limited to a predetermined value rd, as shown in FIG. 13. In this case, when rd is used instead of rg2, the length rst of the line segment CBt is given by equation (23).

  rst = rd / (cos (ρ− (θv / 2))) (23)

  Here, setting rs2 = rst in equation (13) and substituting equation (23), rw2 is obtained as equation (24).

  rw2 = rd · tan (θh / 2) · cos (θv / 2) / (cos (ρ− (θv / 2))) (24)

  Using this rw2 and calculating in the same manner as described above, the horizontal field indicator 18 can be drawn on the top view 6 with its depth limited.
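For illustration (hypothetical function name), the depth-limited far half-width of equation (24) is:

```python
import math

def depth_limited_rw2(rd, rho, theta_h, theta_v):
    """Far half-width rw2 when the field depth is clipped to rd, equation (24):
    rw2 = rd * tan(θh/2) * cos(θv/2) / cos(ρ - θv/2)."""
    return (rd * math.tan(theta_h / 2.0) * math.cos(theta_v / 2.0)
            / math.cos(rho - theta_v / 2.0))
```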

  Next, as shown in FIG. 14, the determination of the blind spot range 47a will be described for the case where a single rectangular wall 70, defined by the length of the wall display line 47 and the wall height information, stands within the imaging range 46 in the above three-dimensional space. In FIG. 14, the optical axis of the camera projected onto the Xw-Yw plane is the half line BL, and BL is assumed to point in the Xw direction (the horizontal installation angle θ of FIG. 5 is 0 [rad]). The two vertices of the wall 70 on the Xw-Yw plane are denoted Wb1 and Wb2, and the two vertices at its top are denoted Wt1 and Wt2. From the arrangement of the wall display line 47 on the top view 6 and the wall height information, their coordinates are calculated as Wb1 = (xw1, yw1, 0), Wb2 = (xw2, yw2, 0), Wt1 = (xw1, yw1, zw), and Wt2 = (xw2, yw2, zw). Given the wall 70, the point Wg1 where the line from the camera position C through Wt1 intersects the Xw-Yw plane and the point Wg2 where the line from C through Wt2 intersects the Xw-Yw plane are determined, and the region bounded by Wt1, Wt2, Wb1, Wb2, Wg1, and Wg2 is the range shadowed by the wall. Since Wb1 and Wb2 are calculated as described above, the blind spot range can be determined once Wg1 and Wg2 are calculated. By removing the region bounded by the points corresponding to Wb1, Wb2, Wg1, and Wg2 from the horizontal field indicator 18 on the top view 6, a horizontal field of view that reflects the wall blind spot range 47a can be displayed.

  Therefore, first, as shown in FIG. 15, the distance r between the foot P of the perpendicular dropped from the camera position C to the ground and the wall vertex Wb1 is calculated. As is apparent from FIG. 15, r is given by equation (25).

r = {(xw1 − xc)² + (yw1 − yc)²}^(1/2)   (25)

  Next, as shown in FIG. 15, let P′ be the intersection, with the line segment 60 connecting the camera position C and the point P, of the perpendicular drawn from the vertex Wt1. Since the triangles CP′Wt1 and CPWg1 are similar, the relationship of equation (26) holds.

  r : rd1 = (zc − zw) : zc   (26)

  When formula (26) is transformed, rd1 is expressed by formula (27).

  rd1 = zc · r / (zc−zw) (27)

  Now, if θ1 is the angle formed by the line segment Wg1P shown in FIG. 14 and the half line BL, θ1 is given by equation (28).

θ1 = tan⁻¹{(yw1 − yc) / (xw1 − xc)}   (28)

  Therefore, when rd1 and θ1 are used, xg1 and yg1 in the coordinates of Wg1 are expressed by equations (29) and (30).

xg1 = xc + rd1 · cos (θ1) (29)
yg1 = yc + rd1 · sin (θ1) (30)

  In the above example, the wall height zw is smaller than the camera height zc, but zw may also exceed zc. In that case, rd1 is not calculated by equation (27); instead, a sufficiently large value is substituted for it, and xg1 and yg1 are calculated by equations (29) and (30).

  Similarly, if θ2 is the angle formed by the line segment Wg2P and the half line BL, the coordinates xg2 and yg2 of Wg2 are given, in the same manner as above, by equations (31) and (32).

xg2 = xc + rd2 · cos (θ2) (31)
yg2 = yc + rd2 · sin (θ2) (32)

  Here, the distance rd2 between P and Wg2 is given by equation (33) (when zw is smaller than zc; when it is larger, a sufficiently large value is used as described above), and θ2 is given by equation (34).

rd2 = {zc / (zc − zw)} · {(xw2 − xc)² + (yw2 − yc)²}^(1/2)   (33)
θ2 = tan⁻¹{(yw2 − yc) / (xw2 − xc)}   (34)

  In this way, the vertices Wg1 and Wg2 are obtained, and the region bounded by them together with the known Wb1, Wb2, Wt1, and Wt2 is the blind spot range. As described above, the three-dimensional space is constructed so that the lower left corner of the top view shown in FIG. 1 is its origin O, the rightward direction of the top view is +Xw, the upward direction is +Yw, the direction from the back of the sheet toward the front is +Zw, and the scale is in the same pixel units as the top view. The coordinates obtained above can therefore be used directly as the coordinates of the points corresponding to Wb1, Wb2, Wg1, and Wg2 on the top view 6, and the region they bound can be used as the wall blind spot range 47a.

  The wall blind spot range obtained above was calculated on the assumption that the horizontal installation angle of the camera is 0 [rad]. Now consider the case where the horizontal installation angle of the camera is θ [rad]. Even in this case, the coordinates of Wb1 and Wb2 are known because they are given by the user, while the coordinates of Wg1 and Wg2 are obtained from equations (35) and (36).

Wg1 = (xc, yc) + R (rd1 · cos (θ1), rd1 · sin (θ1)) (35)
Wg2 = (xc, yc) + R (rd2 · cos (θ2), rd2 · sin (θ2)) (36)

  In this way, regardless of the horizontal installation angle of the camera, the wall blind spot range 47a on the top view can be calculated, and the corresponding area can be removed from the horizontal field indicator 18 for display.
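The ground points Wg1 and Wg2 of the blind spot polygon (equations (25) and (27) to (34), for the case θ = 0 [rad]) can be sketched as follows; the function name and the use of atan2 (a quadrant-safe substitute for tan⁻¹) are illustrative choices, not from the patent:

```python
import math

def wall_shadow_points(cam, wall_base, zw):
    """Ground intersections Wg1, Wg2 of the rays from the camera over the
    wall top, for horizontal installation angle 0 [rad].

    cam       : (xc, yc, zc) camera position in the three-dimensional space
    wall_base : ((xw1, yw1), (xw2, yw2)) wall ends on the ground plane
    zw        : wall height in the same (pixel) units
    """
    xc, yc, zc = cam
    points = []
    for xw, yw in wall_base:
        r = math.hypot(xw - xc, yw - yc)                 # equation (25)
        # equation (27); a large stand-in value when the wall tops the camera
        rd = zc * r / (zc - zw) if zw < zc else 1e9
        a = math.atan2(yw - yc, xw - xc)                 # equations (28)/(34)
        points.append((xc + rd * math.cos(a),            # equations (29)-(32)
                       yc + rd * math.sin(a)))
    return points  # [Wg1, Wg2]
```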

  The above description assumed that the blind spot range produced by the wall 70 lies entirely within the imaging range, but even when the camera and the wall are in the various positional relationships shown in FIGS. 16(a) to 16(e), the blind spot range can be obtained by the same method and displayed on the top view 6. FIG. 16(a) shows a wall that is tall and pierces the top of the blind spot range; FIG. 16(b) a wall that is long on one side and pierces one side of the blind spot range; FIG. 16(c) a wall that is long on both sides and pierces both sides of the blind spot range; FIG. 16(d) a wall that is both tall and long on both sides, piercing the top and both sides of the blind spot range; and FIG. 16(e) a wall located obliquely with respect to the camera.

  The above description also assumed a single wall, but even when there are a plurality of walls as shown in FIG. 17, the blind spot range is obtained for each wall in the same manner as described above, and a field of view taking all the walls into account can be displayed on the top view 6. The same applies when the walls are connected continuously.

  In the above embodiment, the partition is represented by the wall display line, but it is not limited to a wall: a partition screen or the like, for example, can be simulated by entering the appropriate height information. Furthermore, even an object with thickness, such as a pillar, can have its blind spot region calculated and displayed by treating its four sides as walls. Also, although the virtual 3D video is displayed on the 3D video display window 43 in the above embodiment, the 3D video display window 43 may be omitted as the case requires.

  In the above embodiment, the horizontal field indicator 18 is generated and displayed by removing the overlapping wall blind spot range 47a from the imaging range. Alternatively, the wall blind spot range can be shown by a wall blind spot range indicator that highlights it so as to distinguish it from the rest of the horizontal field indicator, for example by surrounding the range with a display line, blinking it, changing its color tone, or hatching it with diagonal lines.

FIG. 1 shows the image displayed by a computer based on the camera installation simulator program of one embodiment of the present invention.
FIG. 2 shows the field of view of the camera to be simulated by the program of FIG. 1.
FIG. 3 shows the relationship between the camera field of view of FIG. 2 and the top view.
FIG. 4 shows the 3D video in the image of FIG. 1.
FIG. 5 illustrates the method of calculating the camera installation position and the gazing point in the three-dimensional space.
FIG. 6 illustrates the display of the vertical angle-of-view range when the vertical installation angle is large.
FIG. 7 illustrates the display of the vertical angle of view when the vertical installation angle is small.
FIG. 8 shows the relationship between the horizontal field angle, the lens principal point, and the CCD.
FIG. 9 illustrates the horizontal viewing angles at the upper and lower ends of the CCD.
FIG. 10 shows the relationship between the focal distances at the center, upper end, and lower end of the CCD.
FIG. 11 shows the display state of the horizontal field indicator when the horizontal installation angle on the top view is 0 [rad].
FIG. 12 shows the display state of the horizontal field indicator when the horizontal installation angle on the top view is θ [rad].
FIG. 13 illustrates the camera field of view when the depth of the field of view is limited.
FIG. 14 illustrates the camera field of view in a state where the field is shielded by a wall.
FIG. 15 illustrates a state where the field of view is shielded by a wall.
FIG. 16 illustrates states where the field of view is shielded by walls in different arrangements.
FIG. 17 illustrates the camera blind spots in a state where the field of view is shielded by a plurality of walls.

Explanation of symbols

4  Top view display window
6  Top view
10  Camera upper surface indicator
12  Elevation view display window
14  Elevation view
16  Camera elevation indicator
18  Horizontal field indicator
20  Vertical field indicator
47  Wall display line (partition indicator)
47a  Wall blind spot range (blind spot range)

Claims (8)

  1. A camera installation simulator program for simulating the arrangement of a camera in a predetermined area and the shooting range of the camera, wherein a computer executing the camera installation simulator program performs:
    a process of displaying, on a display device, a top view image representing part or all of the predetermined area, based on top view data of the predetermined area;
    a process of arranging and displaying a partition indicator representing a partition in a desired range of the top view image;
    a process of arranging and displaying a camera indicator representing the camera on the top view image;
    a process of arbitrarily setting a conversion coefficient between the top view image and the actual size of the object corresponding to the top view image, the arrangement position of the camera indicator on the top view image, the height, the horizontal and vertical installation angles, and the angle of view of the camera, and the arrangement position of the partition indicator and the height of the partition; and
    a process of displaying, on the top view image, a field indicator corresponding to the shooting range of the camera, based on these items of setting information and the top view data,
    wherein, when part or all of the partition is included in the shooting range, the process of displaying the field indicator displays the field indicator with the blind spot range produced by the partition within the shooting range made different from the portion other than the blind spot range.
  2. The camera installation simulator program according to claim 1, wherein the process of displaying the field indicator further displays the field indicator with a blind spot range caused by the height, the horizontal and vertical installation angles, and the angle of view of the camera made different from the portion other than that blind spot range.
  3. The camera installation simulator program according to claim 1, wherein the computer further performs:
    a process of displaying an elevation image in addition to the top view image, and a process of displaying a camera indicator superimposed on the elevation image according to the arbitrarily set height of the camera,
    and wherein the process of displaying the field indicator further displays, on the elevation image, a field indicator corresponding to the shooting range of the camera based on the setting information, with a blind spot range caused by the height, the horizontal and vertical installation angles, and the angle of view of the camera made different from the portion other than that blind spot range.
  4. The camera installation simulator program according to claim 1, wherein the computer further sets the conversion coefficient between the top view image and the actual size of the object corresponding to the top view image by superimposing a scale indicator displayed on the top view image on an arbitrary part of the object corresponding to the top view image and setting the length that the scale indicator is recognized to represent.
  5. The camera installation simulator program according to claim 3, wherein the computer further performs:
    a process of displaying a plurality of cameras in list form in the vicinity of the top view image and the elevation image; and,
    in response to an operation of dragging and dropping a camera in the list onto the top view image, a process of arranging and displaying a camera indicator representing that camera on the top view image and a process of displaying a camera indicator superimposed on the elevation image according to the arbitrarily set height of the camera.
  6. The camera installation simulator program according to any one of claims 1 to 5, wherein, when part or all of the partition is included in the shooting range, the computer executing the camera installation simulator program performs a process of generating three-dimensional graphics as a virtual photographed image of the camera including the partition, based on the setting information and the top view data, and displaying the three-dimensional graphics on the display device.
  7. The camera installation simulator program according to claim 6, wherein the computer executing the camera installation simulator program performs:
    a process of displaying a shooting target indicator representing a shooting target superimposed on the top view image;
    a process of setting the light transmittance of the partition; and,
    when part or all of the shooting target and the partition are included in the shooting range and the partition overlaps the front of the shooting target in the three-dimensional graphics display, a process of generating and displaying the three-dimensional graphics so that the shooting target can be seen through the partition according to the set light transmittance.
  8. The camera installation simulator program according to any one of claims 1 to 5, wherein the blind spot range produced by the partition is defined by the region bounded by the lines that pass from the position of the camera in the area over the upper end of the partition and meet the surface corresponding to the top view image within the area, and by the line segment where the lower end of the partition is in contact with the surface corresponding to the top view image.
JP2008086032A 2008-03-28 2008-03-28 Camera installation simulator program Active JP4975673B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008086032A JP4975673B2 (en) 2008-03-28 2008-03-28 Camera installation simulator program

Publications (2)

Publication Number Publication Date
JP2009239821A JP2009239821A (en) 2009-10-15
JP4975673B2 true JP4975673B2 (en) 2012-07-11

Family

ID=41253196

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2008086032A Active JP4975673B2 (en) 2008-03-28 2008-03-28 Camera installation simulator program

Country Status (1)

Country Link
JP (1) JP4975673B2 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6018894B2 (en) * 2012-12-04 2016-11-02 株式会社日立国際電気 Wireless transmission system
JP6022330B2 (en) * 2012-12-05 2016-11-09 セコム株式会社 Camera system
WO2014087621A1 (en) * 2012-12-06 2014-06-12 日本電気株式会社 Program, method, and system for displaying image recognition processing suitability
JP6265131B2 (en) * 2012-12-06 2018-01-24 日本電気株式会社 Visual field display system, visual field display method, and visual field display program
WO2014087622A1 (en) * 2012-12-06 2014-06-12 日本電気株式会社 Appearance presentation system, method, and program
JP6126501B2 (en) * 2013-09-03 2017-05-10 Toa株式会社 Camera installation simulator and its computer program
JP6174968B2 (en) * 2013-10-29 2017-08-02 セコム株式会社 Imaging simulation device
JP6132811B2 (en) * 2014-06-05 2017-05-24 三菱電機ビルテクノサービス株式会社 Program and information processing apparatus
WO2017134786A1 (en) * 2016-02-04 2017-08-10 三菱電機株式会社 Installation position determination device, installation position determination method and installation position determination program

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06243266A (en) * 1993-02-16 1994-09-02 Toyota Motor Corp Perspective drawing generating device
JP3834967B2 (en) * 1997-11-25 2006-10-18 日産自動車株式会社 Blind spot range display device
JP2008146113A (en) * 2006-12-06 2008-06-26 Oki Electric Ind Co Ltd Cad system for building

Also Published As

Publication number Publication date
JP2009239821A (en) 2009-10-15

Similar Documents

Publication Publication Date Title
US4685070A (en) System and method for displaying, and interactively excavating and examining a three dimensional volume
JP6436628B2 (en) System and method for navigating a three-dimensional environment using a multi-input interface
US7161606B2 (en) Systems and methods for directly generating a view using a layered approach
US6046745A (en) Three-dimensional model making device and its method
US6798412B2 (en) Occlusion reducing transformations for three-dimensional detail-in-context viewing
DE69817634T2 (en) Device and method for displaying windows
JP5791976B2 (en) image processing apparatus, image processing method, and program
US8042056B2 (en) Browsers for large geometric data visualization
US20030058241A1 (en) Method and system for producing dynamically determined drop shadows in a three-dimensional graphical user interface
US20050262447A1 (en) Fisheye lens graphical user interfaces
US7492362B2 (en) Virtual space rendering/display apparatus and virtual space rendering/display method
US7739623B2 (en) Interactive 3D data editing via 2D graphical drawing tools
JP3697276B2 (en) Image display method, image display apparatus, and image scaling method
US20140049617A1 (en) Image information output method
US7995078B2 (en) Compound lenses for multi-source data presentation
EP2245528B1 (en) Projection of graphical objects on interactive irregular displays
US20130300740A1 (en) System and Method for Displaying Data Having Spatial Coordinates
US6677939B2 (en) Stereoscopic image processing apparatus and method, stereoscopic vision parameter setting apparatus and method and computer program storage medium information processing method and apparatus
US6229542B1 (en) Method and apparatus for managing windows in three dimensions in a two dimensional windowing system
US6064399A (en) Method and system for panel alignment in panoramas
Furmanski et al. Augmented-reality visualizations guided by cognition: Perceptual heuristics for combining visible and obscured information
DE69534331T2 (en) Method and device for highlighting the detail of a tree structure
Bane et al. Interactive tools for virtual x-ray vision in mobile augmented reality
US6400364B1 (en) Image processing system
US20040196282A1 (en) Modeling and editing image panoramas

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20100212

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20110704

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110726

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20110926

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20120321

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20120411

R150 Certificate of patent or registration of utility model

Ref document number: 4975673

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20150420

Year of fee payment: 3

S533 Written request for registration of change of name

Free format text: JAPANESE INTERMEDIATE CODE: R313533

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250
