WO2013129188A1 - Image processing device, image processing method, and image processing program - Google Patents
- Publication number
- WO2013129188A1 (PCT/JP2013/054028)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- camera
- unit
- dimensional
- icon
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
Description
- the present invention relates to an image processing apparatus, an image processing method, and an image processing program.
- Patent Document 1 discloses a technique for displaying a camera image of a region where a certain event has occurred together with the above-described three-dimensional model.
- Patent Document 2 discloses a configuration in which a monitoring event is detected and recorded together with an image, the occurrence time and end time of the monitoring event and a feature amount representing the monitoring event are recorded, and a plurality of displayed images or monitoring images are synchronized with the event information according to a user's browsing operation.
- In these techniques, recorded camera images can be played back, but it is difficult for the user to intuitively grasp the information accompanying the recorded camera image, such as the viewpoint position and line-of-sight direction of the camera during recording, the location of a monitoring event that occurred during recording, and the camera near which the event occurred.
- The present invention has been made in view of the above, and an object of the present invention is to provide an image processing apparatus, an image processing method, and an image processing program that make it possible to intuitively grasp information incidental to a recorded camera image when the recorded image is reproduced.
- An image processing apparatus according to the present invention includes: a three-dimensional model storage unit that stores three-dimensional model data indicating a model of a three-dimensional area and the position of a camera arranged in the three-dimensional area as a camera position in the three-dimensional model; an acquisition unit that acquires a captured image captured by the camera and an imaging direction at the time of imaging; an image storage unit that stores the captured image acquired by the acquisition unit and the imaging direction in association with each other; an operation unit that receives a reproduction instruction for the captured image stored in the image storage unit; a synthesizing unit that, when the operation unit receives a reproduction instruction, determines a viewpoint for the three-dimensional model according to the camera position, generates a two-dimensional projection image by projecting the three-dimensional model onto a two-dimensional plane according to the determined viewpoint and the imaging direction stored in the image storage unit, and generates a composite image by combining the captured image stored in the image storage unit with a predetermined region of the generated two-dimensional projection image; and an output unit that outputs the composite image.
- An image processing apparatus according to the present invention includes: an acquisition unit that acquires a captured image captured by a camera in a monitoring area and an imaging time at which the captured image was acquired; an image storage unit that stores the captured image acquired by the acquisition unit and the imaging time in association with each other; an event acquisition unit that acquires event information of an event detected by a sensor in the monitoring area and an occurrence time of the event; an event storage unit that stores the event information acquired by the event acquisition unit in association with the occurrence time; an operation unit that receives an instruction to reproduce the captured image stored in the image storage unit; a combining unit that, when the operation unit receives a reproduction instruction, generates a two-dimensional image of the monitoring area including a camera icon associated with the camera and a sensor icon associated with the sensor, and generates a combined image by combining the captured image stored in the image storage unit with a predetermined region of the two-dimensional image; and an output unit that outputs the combined image combined by the combining unit.
- The combining unit changes the display mode of the sensor icon included in the two-dimensional image based on event information whose occurrence time belongs to the same time zone as the imaging time associated with the captured image combined with the two-dimensional image.
- An image processing method according to the present invention includes: a storage step of storing three-dimensional model data indicating a model of a three-dimensional region and the position of a camera arranged in the three-dimensional region as a camera position in the three-dimensional model; an acquisition step of acquiring a captured image captured by the camera and an imaging direction at the time of imaging; an image storage step of storing the captured image acquired in the acquisition step and the imaging direction in association with each other; an operation step of receiving a reproduction instruction for the captured image stored in the image storage step; a synthesizing step of, when the reproduction instruction is received in the operation step, determining a viewpoint for the three-dimensional model according to the camera position, generating a two-dimensional projection image by projecting the three-dimensional model onto a two-dimensional plane according to the determined viewpoint and the imaging direction stored in the image storage step, and generating a composite image by combining the captured image stored in the image storage step with a predetermined region of the generated two-dimensional projection image; and a step of outputting the composite image.
- An image processing program according to the present invention causes a computer to execute: a storage function of storing, in a storage unit, three-dimensional model data indicating a model of a three-dimensional region and the position of a camera arranged in the three-dimensional region as a camera position in the three-dimensional model; an acquisition function of acquiring a captured image captured by the camera and an imaging direction at the time of imaging; an image storage function of storing the captured image acquired by the acquisition function and the imaging direction in association with each other; a synthesis function of, when a reproduction instruction is received, determining a viewpoint for the three-dimensional model according to the camera position, generating a two-dimensional projection image by projecting the three-dimensional model onto a two-dimensional plane according to the determined viewpoint and the imaging direction stored by the image storage function, and generating a composite image by combining the captured image stored by the image storage function with a predetermined region of the generated two-dimensional projection image; and an output function of outputting the composite image.
- FIG. 1 is a diagram illustrating a configuration example of a monitoring system according to the first embodiment.
- FIG. 2 is a diagram illustrating a configuration example of the imaging apparatus according to the first embodiment.
- FIG. 3 is a diagram illustrating a configuration example of the image processing apparatus according to the first embodiment.
- FIG. 4 is a diagram illustrating an example of information stored by the three-dimensional model storage unit.
- FIG. 5 is a diagram for explaining the three-dimensional model data.
- FIG. 6 is a diagram illustrating an example of information stored in the viewpoint information storage unit.
- FIG. 7 is a flowchart illustrating an example of the flow of overall processing according to the first embodiment.
- FIG. 8 is a flowchart illustrating an example of the flow of the initial image generation process according to the first embodiment.
- FIG. 9 is a diagram illustrating an example of an initial image.
- FIG. 10 is a flowchart illustrating an example of the flow of image processing according to the first embodiment.
- FIG. 11 is a diagram illustrating an example of a two-dimensional projection image.
- FIG. 12 is a diagram illustrating an example of a composite image.
- FIG. 13 is a diagram illustrating an example of changing the position of the predetermined area.
- FIG. 14A is a diagram illustrating an example of a two-dimensional projection image before a predetermined object is made transparent.
- FIG. 14B is a diagram illustrating an example of a two-dimensional projection image after a predetermined object is made transparent.
- FIG. 15A is a diagram illustrating an example of a two-dimensional projection image before adjusting the size of the pop-up.
- FIG. 15B is a diagram illustrating an example of a two-dimensional projection image after adjusting the size of the pop-up.
- FIG. 16 is a diagram illustrating a configuration example of an image processing apparatus according to the fourth embodiment.
- FIG. 17A is a diagram illustrating an example of a composite image before a drag operation is performed.
- FIG. 17B is a diagram illustrating an example of a composite image when a drag operation is performed in the right direction.
- FIG. 17C is a diagram illustrating an example of a composite image when a drag operation is performed in the upward direction.
- FIG. 18 is a flowchart illustrating an example of the flow of overall processing according to the fourth embodiment.
- FIG. 19 is a flowchart illustrating an example of the flow of image processing according to the fourth embodiment.
- FIG. 20 is a flowchart illustrating an example of the flow of image processing according to a modification of the fourth embodiment.
- FIG. 21 is a flowchart illustrating an example of the flow of image processing according to the fifth embodiment.
- FIG. 22 is an image diagram showing an example of determining a base point according to the sixth embodiment.
- FIG. 23 is an image diagram showing an example of setting the camera operating speed according to the drag distance according to the sixth embodiment.
- FIG. 24 is a flowchart illustrating an example of the flow of image processing according to the sixth embodiment.
- FIG. 25 is an image diagram showing an example of determining a base point according to the seventh embodiment.
- FIG. 26 is an image diagram showing an example of setting the camera operation speed according to the drag speed according to the seventh embodiment.
- FIG. 27 is a flowchart illustrating an example of the flow of image processing according to the seventh embodiment.
- FIG. 28 is a diagram illustrating an example of information stored in the three-dimensional model storage unit.
- FIG. 29 is an explanatory diagram of the hierarchical structure of the monitoring area.
- FIG. 30 is a diagram illustrating an example of information stored in the viewpoint information storage unit.
- FIG. 31 is an explanatory diagram showing monitoring areas or camera icons that can be selected from the selected image.
- FIG. 32 is a flowchart illustrating an example of the flow of overall processing according to the eighth embodiment.
- FIG. 33 is a flowchart illustrating an example of the flow of selected image generation processing according to the eighth embodiment.
- FIG. 34 is a diagram for explaining processing of the image processing apparatus according to the ninth embodiment.
- FIG. 35 is a diagram illustrating an example of information stored in the three-dimensional model storage unit.
- FIG. 36A is a diagram illustrating a two-dimensional projection image drawn when the viewpoint position is changed.
- FIG. 36B is a diagram showing a two-dimensional projection image drawn when the viewpoint position is changed.
- FIG. 36C is a diagram showing a two-dimensional projection image drawn when the viewpoint position is changed.
- FIG. 37 is a diagram for explaining the viewpoint range with the y-axis as the rotation axis for all monitoring areas.
- FIG. 38A is a diagram illustrating a two-dimensional projection image drawn when the viewpoint position is changed.
- FIG. 38B is a diagram showing a two-dimensional projection image drawn when the viewpoint position is changed.
- FIG. 39 is a flowchart of the image processing apparatus.
- FIG. 40 is a diagram illustrating a configuration example of an image processing device according to the tenth embodiment.
- FIG. 41 is a diagram showing an example of a captured image management table held by the image storage unit shown in FIG.
- FIG. 42 is a diagram illustrating a configuration example of a sensor according to the first to eleventh embodiments.
- FIG. 43 is a diagram showing an example of an event management table held by the event storage unit shown in FIG.
- FIG. 44 is a diagram illustrating an example of a sensor management table managed by the three-dimensional model storage unit in the tenth embodiment.
- FIG. 45 is a flowchart showing the recording operation of the control unit according to the tenth embodiment.
- FIG. 46 is a flowchart showing the event recording operation of the control unit according to the tenth embodiment.
- FIG. 47 is a flowchart showing an example of a recording / playback operation by the control unit according to the tenth embodiment.
- FIG. 48 is a diagram illustrating an example of a composite image according to the tenth embodiment.
- FIG. 49 is a diagram showing an example of the operation screen shown in FIG.
- FIG. 50 is a diagram illustrating an example of a composite image according to the eleventh embodiment.
- FIG. 51 is a diagram illustrating that an image processing program is realized using a computer.
- FIG. 1 is a diagram illustrating a configuration example of a monitoring system according to the first embodiment.
- As shown in FIG. 1, in the monitoring system 1, an image processing device 10, an imaging device 20a1, a sensor 40a1, an imaging device 20b1, and a sensor 40b1 are connected to a network 30. Such a monitoring system 1 is used for monitoring a predetermined area.
- the image processing apparatus 10 may be installed in, for example, a management room in a store or a security room of a security company.
- the entire area to be monitored by the monitoring system 1 may be referred to as “all monitoring areas”.
- the imaging device 20a1 and the sensor 40a1 are arranged in an area 50a that is a part of the entire monitoring area.
- the imaging device 20b1 and the sensor 40b1 are arranged in an area 50b that is a part of the entire monitoring area.
- a plurality of imaging devices and sensors may be arranged in the area 50a.
- a plurality of imaging devices and sensors may be arranged in the area 50b.
- the imaging device 20a1 and the imaging device 20b1 may be referred to as “imaging device 20”.
- the sensor 40a1 and the sensor 40b1 may be referred to as “sensor 40”.
- the area 50a and the area 50b may be referred to as “area 50”.
- the imaging device 20 includes a camera that captures an image of a subject within the imageable range included in the area 50. Then, the imaging device 20 transmits a captured image obtained by compressing an image captured by the camera to the image processing device 10. At this time, the imaging device 20 also transmits zoom information and imaging direction information, which are information on the focal length of the zoom lens in the camera at the time of imaging, to the image processing device 10.
- imaging direction information is, for example, pan information that is the horizontal angle of the imaging direction and tilt information that is the vertical angle of the imaging direction.
- the image processing apparatus 10 stores 3D model data representing the 3D model of all monitoring areas.
- Such three-dimensional model data is information indicating the shape, size, and layout (position information) of an object (object) such as a building in all monitoring areas.
- the three-dimensional model data is generated in advance by, for example, rendering using information such as the shape and viewpoint of an object obtained by imaging with the imaging device 20. Alternatively, it may be generated from a floor plan of all monitoring areas.
- the image processing apparatus 10 stores a camera position that is a position in a three-dimensional model of a camera (corresponding to the imaging apparatus 20) arranged in all the monitoring areas.
- the image processing apparatus 10 acquires the captured image captured by the imaging apparatus 20, the focal length of the zoom lens at the time of imaging, the imaging direction, and the like from the imaging apparatus 20. Subsequently, the image processing apparatus 10 determines a viewpoint for generating a two-dimensional image from the three-dimensional model based on the camera position, and a two-dimensional projection image corresponding to the determined viewpoint, imaging direction, and focal length of the zoom lens. Is generated. Thereafter, the image processing apparatus 10 generates a composite image by combining the captured image with a predetermined region on the generated two-dimensional projection image, and outputs the generated composite image.
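- The following is a minimal sketch of this flow, assuming hypothetical renderer and compositor interfaces (they are not part of the disclosure): the camera position supplies the viewpoint, the acquired pan/tilt and zoom drive the projection, and the captured image is pasted into a predetermined region.

```python
# Minimal sketch of the compositing flow described above (hypothetical interfaces).
from dataclasses import dataclass

@dataclass
class CameraState:
    position: tuple          # camera position in the 3D model (x, y, z)
    pan_deg: float           # horizontal angle of the imaging direction
    tilt_deg: float          # vertical angle of the imaging direction
    focal_length_mm: float   # zoom lens focal length at capture time

def make_composite(model, cam: CameraState, captured_image, renderer, compositor):
    """Render the 3D model from the camera's pose and paste the captured image."""
    # 1. Use the stored camera position as the viewpoint for the 3D model.
    viewpoint = cam.position
    # 2. Project the 3D model onto a 2D plane according to the imaging
    #    direction (pan/tilt) and the zoom lens focal length.
    projection = renderer.render(model, viewpoint, cam.pan_deg, cam.tilt_deg,
                                 cam.focal_length_mm)
    # 3. Combine the captured image with a predetermined region of the projection.
    return compositor.paste(projection, captured_image, region="center")
```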
- The sensor 40 is, for example, a human presence sensor or an open/close sensor appropriately disposed in the area 50; it detects a person or the opening and closing of a door, issues an alarm, and transmits detection data to the image processing apparatus 10.
- In this way, the image processing apparatus 10 combines the captured image captured by the imaging apparatus 20 with a predetermined region of the two-dimensional projection image generated according to the imaging direction of the imaging apparatus 20 and the like, and outputs the result.
- Thereby, the viewer can intuitively grasp which area in the entire monitoring area is being captured by the camera. Detailed configurations of the image processing device 10 and the imaging device 20 will be described later.
- FIG. 2 is a diagram illustrating a configuration example of the imaging device 20 according to the first embodiment.
- the imaging device 20 includes an imaging unit 201, a compression unit 202, a communication processing unit 203, a control unit 204, a pan head driving unit 205, an angle sensor 206, a zoom driving unit 207, and a zoom sensor 208.
- the imaging device 20 is a network camera connected to the image processing device 10 via the network 30 and can change the imaging direction according to a control signal received via the network 30.
- the imaging unit 201 captures a subject within the imageable range and outputs the captured image to the compression unit 202.
- the captured image may be a still image or a moving image.
- The compression unit 202 compresses the captured image captured by the imaging unit 201 using a standard such as JPEG (Joint Photographic Experts Group) or MPEG-4 (Moving Picture Experts Group phase 4), and outputs the compressed digital image to the communication processing unit 203 as the captured image.
- the compression unit 202 also receives pan information, tilt information, and zoom information from the control unit 204 and adds them to the header or footer of the captured image. As a result, a captured image to which pan information, tilt information, and zoom information are added is sent to the communication processing unit 203.
- the communication processing unit 203 transmits the captured image to which the pan information, tilt information, and zoom information output by the compression unit 202 are added to the image processing apparatus 10 via the network 30. Further, the communication processing unit 203 outputs a control signal received from the external device to the control unit 204. Such a control signal is, for example, a control signal related to pan, tilt, and zoom. That is, the communication processing unit 203 receives a control signal from an external device in order to perform camera control, as in a general network camera. The communication processing unit 203 according to the present embodiment receives a control signal from the image processing apparatus 10.
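- As an illustration, the pan, tilt, and zoom information attached to each captured image can be thought of as a small per-frame metadata record; the length-prefixed JSON layout below is only an assumed format, not the actual header/footer format of the embodiment.

```python
# Sketch of a per-frame packet carrying pan, tilt, and zoom information together
# with the compressed image (assumed layout, for illustration only).
import json
import time

def build_frame_packet(jpeg_bytes: bytes, pan_deg: float, tilt_deg: float,
                       focal_length_mm: float) -> bytes:
    header = {
        "timestamp": time.time(),
        "pan": pan_deg,           # horizontal angle of the imaging direction
        "tilt": tilt_deg,         # vertical angle of the imaging direction
        "zoom": focal_length_mm,  # focal length of the zoom lens
    }
    header_bytes = json.dumps(header).encode("utf-8")
    # 4-byte length prefix, then the metadata header, then the compressed image.
    return len(header_bytes).to_bytes(4, "big") + header_bytes + jpeg_bytes
```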
- the control unit 204 performs control for causing the pan / tilt head driving unit 205 to perform panning and tilting and causing the zoom driving unit 207 to zoom the zoom lens. This control is based on a control signal input via the communication processing unit 203. As another example, the control unit 204 may perform the control according to a preset algorithm. Further, the control unit 204 receives the pan and tilt angles detected by the angle sensor 206 and sends them to the compression unit 202. Further, the control unit 204 receives the focal length of the zoom lens detected by the zoom sensor 208 and sends it to the compression unit 202.
- the pan head drive unit 205 changes the imaging direction of the imaging unit 201 in accordance with, for example, control from the control unit 204.
- the pan head drive unit 205 is provided with an angle sensor 206 that detects an angle in the pan direction or the tilt direction.
- the angle sensor 206 detects, for example, the pan and tilt angles and outputs them to the control unit 204.
- the zoom drive unit 207 is attached to the zoom lens of the imaging unit 201.
- the zoom driving unit 207 changes the focal length of the zoom lens according to the focal length of the zoom lens instructed by the control unit 204.
- the zoom lens is provided with a zoom sensor 208 that detects the focal length of the zoom lens.
- the zoom sensor 208 detects the focal length of the zoom lens and outputs it to the control unit 204. Based on the detection results of the angle sensor 206 and the zoom sensor 208, the control unit 204 controls these while appropriately checking the imaging direction of the imaging unit 201 and the focal length of the zoom lens.
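- A minimal sketch of such a feedback loop, with hypothetical driver and sensor interfaces standing in for the pan head driving unit, zoom driving unit, angle sensor, and zoom sensor, could look as follows:

```python
# Sketch of the control loop in the control unit 204: drive the pan head and
# zoom lens toward the commanded values while checking the angle and zoom
# sensors (all driver/sensor interfaces here are hypothetical).
def apply_control(pan_head, zoom_drive, angle_sensor, zoom_sensor,
                  target_pan, target_tilt, target_focal, tol=0.1):
    while True:
        pan, tilt = angle_sensor.read()        # current pan/tilt angles
        focal = zoom_sensor.read()             # current zoom focal length
        if (abs(pan - target_pan) < tol and abs(tilt - target_tilt) < tol
                and abs(focal - target_focal) < tol):
            break                              # commanded state reached
        pan_head.step_toward(target_pan, target_tilt)
        zoom_drive.step_toward(target_focal)
```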
- FIG. 3 is a diagram illustrating a configuration example of the image processing apparatus 10 according to the first embodiment.
- the image processing apparatus 10 includes a three-dimensional model storage unit 111, a viewpoint information storage unit 112, an operation unit 121, a communication processing unit 122, an acquisition unit 123, a control unit 124, a synthesis unit 125, and an output unit 126.
- the image processing apparatus 10 is an information processing apparatus such as a PC (Personal Computer) connected to a network camera such as the imaging apparatus 20 or the sensor 40 via the network 30.
- the 3D model storage unit 111 stores 3D model data and the like.
- FIG. 4 is a diagram illustrating an example of information stored in the three-dimensional model storage unit 111.
- The three-dimensional model storage unit 111 stores, in association with each other, three-dimensional model data representing a three-dimensional model of all monitoring areas, a device ID (identifier), an icon type ID, and an icon position.
- the device ID is device-specific identification information for identifying a device arranged in the three-dimensional model represented by the three-dimensional model data.
- the icon type ID is a type of icon that represents a device in the three-dimensional model.
- a different type of icon is assigned for each type of device, and the icon type ID is information for identifying the type of icon.
- the icon position is a position (coordinate) where the icon is arranged in the three-dimensional model.
- For example, the 3D model storage unit 111 stores, in association with each other, the 3D model data “all monitoring areas”, the device ID “#01” indicating camera 1, the icon type ID “A001” indicating a camera icon, and the icon position “(x1, y1, z1)”. Therefore, by referring to the three-dimensional model storage unit 111, the arrangement position of each device can be specified. In this example, the coordinates “(x1, y1, z1)”, “(x2, y2, z2)”, and “(x3, y3, z3)” in the three-dimensional model of “all monitoring areas” can be specified as the arrangement positions of “camera 1”, “camera 2”, and “sensor 1”, respectively.
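- Expressed as a simple in-memory table, the association of FIG. 4 could be held as in the sketch below; the device IDs “#02” and “#03” and the sensor icon type “A002” are assumed only for illustration.

```python
# Sketch of the device table of FIG. 4 held by the 3D model storage unit 111.
# Values follow the example in the text; the container itself is an assumption.
device_table = {
    "#01": {"icon_type": "A001", "icon_pos": ("x1", "y1", "z1")},  # camera 1
    "#02": {"icon_type": "A001", "icon_pos": ("x2", "y2", "z2")},  # camera 2 (ID assumed)
    "#03": {"icon_type": "A002", "icon_pos": ("x3", "y3", "z3")},  # sensor 1 (ID assumed)
}

def icon_position(device_id: str):
    """Look up the arrangement position of a device in the 3D model."""
    return device_table[device_id]["icon_pos"]
```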
- the 3D model storage unit 111 stores 3D part models of various icons in association with icon type IDs.
- the three-dimensional part model of the icon corresponding to the imaging device 20 has a portion corresponding to a lens.
- Therefore, the three-dimensional part model of the icon corresponding to the imaging device 20 can be arranged so that its lens direction matches the actual lens direction (imaging direction) of the imaging device 20.
- FIG. 5 is a diagram for explaining the three-dimensional model data.
- FIG. 5 shows an overhead view of the three-dimensional model observed from the obliquely upper viewpoint of the three-dimensional model.
- The three-dimensional model is a model in which objects such as buildings, the ground, and trees in the entire monitoring area are laid out according to their sizes, and the three-dimensional model data is data indicating the shape, size, position, and so on of these objects.
- By determining a viewpoint, a gazing point, and an angle of view with respect to the three-dimensional model data, a two-dimensional projection image, which is a two-dimensional image obtained by projecting the three-dimensional model onto a two-dimensional plane as in the overhead view shown in FIG. 5, can be generated.
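- A standard pinhole projection of this kind can be sketched as follows; this is not code from the embodiment, and the world up axis is assumed to be z.

```python
# Minimal perspective-projection sketch: given a viewpoint, a gazing point, and a
# vertical angle of view, project 3D model points onto a 2D image plane.
import numpy as np

def project_points(points, viewpoint, gaze_point, fov_deg, width, height):
    eye, at = np.asarray(viewpoint, float), np.asarray(gaze_point, float)
    forward = at - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, [0.0, 0.0, 1.0])     # world up assumed to be +z
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    f = 0.5 * height / np.tan(np.radians(fov_deg) / 2.0)   # focal length in pixels
    out = []
    for p in np.asarray(points, float):
        v = p - eye
        x, y, z = v @ right, v @ up, v @ forward    # camera coordinates
        if z <= 0:
            out.append(None)                        # behind the viewpoint
            continue
        out.append((width / 2 + f * x / z, height / 2 - f * y / z))
    return out
```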
- the viewpoint information storage unit 112 stores a viewpoint position used when a synthesis unit 125 described later generates a two-dimensional projection image from the three-dimensional model data.
- FIG. 6 is a diagram illustrating an example of information stored in the viewpoint information storage unit 112. Specifically, the viewpoint information storage unit 112 stores the viewpoint position used when generating the two-dimensional projection image of the entire monitoring area, and the viewpoint position used when generating the two-dimensional projection image of the area observed from each camera arrangement position, in association with a viewpoint position ID for identifying each viewpoint position.
- the viewpoint information storage unit 112 stores the viewpoint position ID “B001” indicating the entire monitoring area and the viewpoint position “(x10, y10, z10)” in association with each other. .
- the viewpoint information storage unit 112 stores a viewpoint position ID “B011” indicating the camera 1 and a viewpoint position “(x11, y11, z11)” in association with each other.
- the viewpoint position associated with the viewpoint position ID indicating the camera is the camera position.
- the viewpoint position ID of the viewpoint position indicating the camera position may be the same value as the device ID of the camera (hereinafter referred to as camera ID). Thereby, the viewpoint position can be specified directly from the camera ID.
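- A minimal sketch of this lookup, using the example identifiers above (the dictionary form itself is an assumption), is:

```python
# Sketch of the viewpoint table of FIG. 6. Because the viewpoint position ID of a
# camera viewpoint may reuse the camera ID, the viewpoint can be found directly
# from the selected camera.
viewpoint_table = {
    "B001": ("x10", "y10", "z10"),  # whole monitoring area
    "B011": ("x11", "y11", "z11"),  # camera 1 (equals the camera position)
}

def viewpoint_for(selection_id: str):
    """Return the viewpoint position for a selected area or camera."""
    return viewpoint_table[selection_id]
```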
- the operation unit 121 includes an input device such as a mouse or a touch panel, and accepts various instructions by user operations of the user of the monitoring system 1. For example, the operation unit 121 receives a camera selection instruction from the user. In addition, when a predetermined camera (imaging device 20) is selected by a user operation, the operation unit 121 receives an instruction to output a captured image of the selected imaging device 20. In addition, the operation unit 121 receives pan, tilt, and zoom setting instructions for a predetermined imaging device 20 by a user operation. Note that the pan, tilt, and zoom setting instructions are transmitted to the imaging apparatus 20 via the control unit 124 and the communication processing unit 122 as control signals related to the above-described pan, tilt, and zoom.
- the communication processing unit 122 controls communication with the imaging device 20 and the sensor 40 connected via the network 30. For example, the communication processing unit 122 receives a captured image and pan information, tilt information, and zoom information from the imaging device 20. In addition, the communication processing unit 122 transmits the pan, tilt, and zoom setting instructions received by the operation unit 121 to the imaging device 20 as control signals related to pan, tilt, and zoom.
- The acquisition unit 123 acquires the captured image, along with pan information, tilt information, and zoom information as the imaging direction, from the imaging device 20 via the communication processing unit 122.
- the acquisition unit 123 acquires detection data from the sensor 40.
- the detection data may include information related to the detection direction of the sensor 40 (pan information, tilt information, etc.).
- the control unit 124 controls the image processing apparatus 10 as a whole. For example, the control unit 124 outputs the captured image acquired by the acquisition unit 123, pan information at the time of imaging, tilt information, and zoom information to the synthesis unit 125 in accordance with an instruction received by the operation unit 121. The control unit 124 also acquires the three-dimensional model and the viewpoint position from the three-dimensional model storage unit 111 according to the instruction received by the operation unit 121, and outputs these to the synthesis unit 125.
- The control unit 124 also acquires, from the three-dimensional model storage unit 111, the camera icon corresponding to each imaging device 20, the sensor icon corresponding to each sensor 40, and the icon positions of these icons, and outputs them to the composition unit 125. Further, the control unit 124 receives from the acquisition unit 123 not only the captured image captured by the imaging device 20 corresponding to the camera icon selected by the user but also the captured image corresponding to every other camera icon, and also outputs each of these captured images to the combining unit 125.
- The synthesizing unit 125 generates a two-dimensional projection image of the three-dimensional model based on the viewpoint and the imaging direction, and combines the captured image, which is a two-dimensional image captured by the imaging device 20, with a predetermined region of the generated two-dimensional projection image.
- the predetermined area is an area having a predetermined size set at a predetermined position on the two-dimensional projection image.
- The synthesizing unit 125 generates a two-dimensional projection image according to the viewpoint position read out from the three-dimensional model storage unit 111 by the control unit 124 and the pan, tilt, and zoom of the imaging device 20 that captured the captured image to be combined. Subsequently, the synthesizing unit 125 generates a synthesized image by synthesizing the captured image output from the control unit 124 with a predetermined area such as near the center of the generated two-dimensional projection image.
- the combining unit 125 also arranges icons such as camera icons and sensor icons output by the control unit 124 in the two-dimensional projection image based on the icon positions. Further, the combining unit 125 combines the captured images captured by the respective image capturing devices 20 with regions (for example, pop-ups) set in the vicinity of the respective camera icons corresponding to the respective image capturing devices 20. Note that the image pop-up displayed for each camera icon may be a thumbnail of the captured image. The area set in the vicinity of each camera icon is, for example, an area smaller than the predetermined area described above.
- That is, the combining unit 125 combines not only the captured image for the predetermined area but also the captured image of every imaging device 20, in order to pop up each captured image in the area set near the corresponding camera icon.
- Comments such as the camera number, the camera name, and the location being imaged may be stored in the 3D model storage unit 111, and such a comment may be superimposed on the captured image displayed in the pop-up, or only the comment may be displayed as the pop-up.
- the output unit 126 outputs a composite image.
- That is, the output unit 126 outputs the composite image including the icons combined with the two-dimensional projection image by the combining unit 125, the captured images combined with the areas near the camera icons, and the captured image combined with the predetermined region of the two-dimensional projection image.
- the output unit 126 may be a display device that displays a composite image, or may output the composite image to a display device connected to the image processing device 10.
- FIG. 7 is a flowchart illustrating an example of the flow of overall processing according to the first embodiment.
- When the operation unit 121 of the image processing apparatus 10 receives an instruction to display an initial image (Yes at Step S101), the composition unit 125 generates the initial image (Step S102), and the output unit 126 displays the generated initial image on the display screen.
- When no instruction to display the initial image has been received (No at Step S101), the process waits for an instruction to display the initial image.
- FIG. 9 is a diagram showing an example of the initial image.
- the initial image is a two-dimensional projection image of the entire monitoring area.
- the initial image includes icons of various devices stored in the 3D model storage unit 111 in association with the 3D model data of all monitoring areas.
- the two-dimensional projection image shown in FIG. 9 includes a camera icon corresponding to each imaging device 20 as a device. Further, in the two-dimensional projection image shown in FIG. 9, the captured image captured by each imaging device 20 is displayed in a pop-up near the camera icon. In FIG. 9, each camera icon is arranged in the air, but the camera is actually attached to a pole several meters above the ground. When monitoring is in a building, the camera is actually mounted at a high place such as a ceiling.
- By arranging the camera icons in this way, the camera icons are prevented from being hidden by a shielding object such as a building, which improves the operability of selecting a camera icon.
- the icon of the sensor 40 is also displayed.
- When a camera icon is selected by a user operation, the synthesizing unit 125 generates a composite image by combining the captured image of the imaging device 20 corresponding to the selected camera icon with the two-dimensional projection image generated with the arrangement position of the selected camera icon as the viewpoint (step S105).
- the output unit 126 displays the composite image on the display screen (step S106).
- Otherwise, the process waits for selection of a camera icon.
- FIG. 8 is a flowchart illustrating an example of the flow of the initial image generation process according to the first embodiment.
- the initial image generation process according to the first embodiment refers to the process in step S102.
- the composition unit 125 of the image processing apparatus 10 acquires the three-dimensional model data of all the monitoring areas from the three-dimensional model storage unit 111 (step S201).
- Next, the synthesizing unit 125 obtains from the three-dimensional model storage unit 111 the arrangement positions of the devices associated with the three-dimensional model of all the monitoring areas, that is, the icons and the icon positions of those icons (step S202).
- the synthesis unit 125 acquires the imaging direction transmitted by each imaging device 20 via the communication processing unit 122, the acquisition unit 123, and the control unit 124 (step S203).
- the combining unit 125 identifies the direction of each camera icon from the acquired imaging direction (step S204). Then, the synthesizing unit 125 synthesizes each device icon at each icon position of the three-dimensional model (step S205). At this time, for the camera icon, the combining unit 125 arranges the camera icon in the three-dimensional model with the camera icon lens facing the direction specified in step S204. However, for a device having no orientation such as the sensor 40, the icon is synthesized at the corresponding icon position in the three-dimensional model without special consideration of the orientation.
- The synthesizing unit 125 acquires the viewpoint position associated with the viewpoint position ID of all the monitoring areas from the viewpoint information storage unit 112 (step S206). Subsequently, the synthesizing unit 125 generates a two-dimensional projection image by rendering with an arbitrary method, such as projecting the three-dimensional model onto a projection plane based on the acquired viewpoint position, a preset gazing point, and a preset angle of view, and sets the generated two-dimensional projection image as the initial image (step S207). Note that, as shown in FIG. 9, a captured image captured by the imaging device 20 corresponding to each camera icon is combined with the initial image in a pop-up area set in the vicinity of that camera icon in the two-dimensional projection image.
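- The flow of steps S201 to S207 can be sketched as follows, with hypothetical storage, camera, icon, and renderer objects standing in for the units described above:

```python
# Sketch of the initial-image generation flow (steps S201 to S207). The objects
# used here are hypothetical stand-ins, not an actual API of the embodiment.
def generate_initial_image(model_store, viewpoint_store, cameras, renderer):
    model = model_store.load_model("all monitoring areas")          # S201: 3D model data
    icons = model_store.load_icons("all monitoring areas")          # S202: icons and positions
    directions = {c.device_id: c.imaging_direction() for c in cameras}  # S203
    for icon in icons:                                               # S204/S205
        # Camera icons face the acquired imaging direction; icons of devices
        # without an orientation (e.g. sensors) keep a default pose.
        model.place(icon, icon.position, directions.get(icon.device_id))
    viewpoint = viewpoint_store.lookup("B001")                       # S206: whole-area viewpoint
    initial = renderer.render(model, viewpoint)                      # S207: render 2D projection
    for cam in cameras:
        # Pop up a thumbnail of each camera's captured image near its icon.
        initial.add_popup(device_id=cam.device_id, thumbnail=cam.latest_image())
    return initial
```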
- FIG. 10 is a flowchart illustrating an example of the flow of image processing according to the first embodiment. Note that the image processing according to Embodiment 1 refers to the processing in step S105.
- When a camera icon is selected, the acquisition unit 123 acquires the captured image captured by the imaging device 20 corresponding to the camera icon, together with the imaging direction (pan information and tilt information) and zoom information (step S301). Further, the synthesizing unit 125 acquires, via the control unit 124, the captured image, pan information, tilt information, and zoom information acquired by the acquisition unit 123, and obtains the three-dimensional model, icons, and icon positions (camera position) from the three-dimensional model storage unit 111. Then, the synthesis unit 125 determines the viewpoint, the gazing point, and the angle of view for the three-dimensional model based on the camera position, pan information, tilt information, and zoom information.
- Subsequently, the synthesizing unit 125 generates a two-dimensional projection image from the three-dimensional model data based on the determined viewpoint, gazing point, and angle of view (step S302). At this time, the synthesizing unit 125 arranges each icon in the two-dimensional projection image based on its icon position, and combines the captured image captured by the imaging device 20 corresponding to each camera icon into the pop-up set near that camera icon.
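- The inputs to step S302 can be illustrated as below: the viewpoint, gazing point, and angle of view are derived from the camera position, pan/tilt angles, and zoom information. The angle conventions and the sensor height used here are assumptions, not values specified in the embodiment.

```python
# Sketch of deriving the viewpoint, gazing point, and angle of view from the
# camera position, pan/tilt, and zoom focal length (assumed conventions).
import math

def camera_pose_to_view(camera_pos, pan_deg, tilt_deg, focal_length_mm,
                        sensor_height_mm=4.8):
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    direction = (math.cos(tilt) * math.cos(pan),   # unit vector of the imaging direction
                 math.cos(tilt) * math.sin(pan),
                 math.sin(tilt))
    gaze_point = tuple(c + d for c, d in zip(camera_pos, direction))
    # Vertical angle of view from the zoom lens focal length (pinhole model).
    fov_deg = math.degrees(2 * math.atan(sensor_height_mm / (2 * focal_length_mm)))
    return camera_pos, gaze_point, fov_deg         # viewpoint, gazing point, angle of view
```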
- FIG. 11 is a diagram illustrating an example of a two-dimensional projection image generated by the process of step S302.
- the two-dimensional projection image includes a camera icon arranged in the three-dimensional model and an image captured by the imaging device 20 corresponding to the camera icon.
- the synthesizing unit 125 generates a synthesized image by synthesizing the captured image with a predetermined region on the two-dimensional projection image generated in step S302 (step S303).
- FIG. 12 is a diagram illustrating an example of a composite image generated by the process of step S303.
- the captured image actually captured by the imaging device 20 is superimposed on the two-dimensional projection image.
- the processing in step S303 will be described in detail with reference to FIG.
- The synthesis unit 125 superimposes the captured image on the two-dimensional projection image so that the center position of the captured image coincides with the gazing point of the two-dimensional projection image.
- the center position of the captured image and the gazing point of the two-dimensional projection image both coincide with the center position of the synthesized image.
- the captured image and the two-dimensional projection image are displayed at the same magnification in the composite image.
- the combining unit 125 adjusts the angle of view when generating the two-dimensional projection image according to the zoom information so that the captured image and the two-dimensional projection image are displayed at the same magnification.
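- Under this arrangement, step S303 reduces to pasting the captured image centered on the projection, since the projection was rendered with the same angle of view and therefore the same magnification. A minimal sketch, assuming Pillow is available:

```python
# Sketch of step S303: paste the captured image so that its center coincides
# with the gazing point, i.e. the center of the two-dimensional projection image.
from PIL import Image

def composite_at_center(projection: Image.Image, captured: Image.Image) -> Image.Image:
    out = projection.copy()
    left = (projection.width - captured.width) // 2
    top = (projection.height - captured.height) // 2
    out.paste(captured, (left, top))   # predetermined region: centered box
    return out
```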
- In this example, the combining unit 125 matches the center position of the captured image with the gazing point of the two-dimensional projection image in the composite image, and further adjusts the angle of view of the two-dimensional projection image so that the captured image and the two-dimensional projection image are displayed at the same magnification. However, it is sufficient that at least the imaging direction of the captured image matches the direction (observation direction) connecting the viewpoint and the gazing point of the two-dimensional projection image. That is, as another example, the synthesis unit 125 may generate a composite image in which the center position of the captured image does not match the gazing point of the two-dimensional projection image. For example, in the composite image shown in FIG. 12, the captured image is arranged at the center of the composite image, but it may instead be arranged at a lower right position. Further, the magnification of the captured image and the two-dimensional projection image may differ. Thus, as long as the imaging direction and the observation direction match, the viewer can intuitively grasp the correspondence between the captured image and the two-dimensional projection image even if the center position of the captured image does not coincide with the gazing point of the two-dimensional projection image or the magnifications differ.
- A button image is also displayed at the lower left of the composite image shown in FIG. 12. This button image is for “returning to the previous image”; when it is pressed, the display returns to the selection screen such as the initial image.
- the acquisition unit 123 acquires the imaging direction and zoom information from the imaging device 20 corresponding to the selected camera icon each time a camera icon is selected.
- The acquisition unit 123 may acquire the changed imaging direction or zoom information from the imaging device 20 each time the camera direction or zoom of the imaging device 20 is changed, and store the acquired imaging direction or zoom information. In this case, since the acquisition unit 123 always holds the imaging direction and zoom information corresponding to the actual state of the imaging device 20, the imaging direction and zoom information stored in the acquisition unit 123 are used in step S302.
- As described above, the image processing apparatus 10 acquires the captured image from the imaging apparatus 20 and the imaging direction at the time of imaging, generates a two-dimensional projection image of the three-dimensional model based on the viewpoint and the imaging direction, and combines the captured image with a predetermined area of the generated two-dimensional projection image and outputs the result.
- In other words, the image processing apparatus 10 generates a two-dimensional projection image in accordance with the captured image actually captured by the imaging apparatus 20 and the imaging direction at the time of imaging, and places the captured image in a predetermined region of that two-dimensional projection image.
- As a result, the viewer of the composite image can intuitively grasp the area captured by the camera.
- FIG. 13 is a diagram illustrating an example of changing the position of the predetermined area.
- For example, the predetermined area on the two-dimensional projection image may be arranged at the lower right of the screen. By arranging the predetermined area in a place other than near the center in this way, the corresponding camera icon also becomes visible and the direction of the camera can be understood, so the effect that the area captured by the camera can be grasped intuitively is maintained.
- FIG. 14A is a diagram illustrating an example of a two-dimensional projection image before a predetermined object is made transparent.
- FIG. 14B is a diagram illustrating an example of a two-dimensional projection image after a predetermined object is made transparent.
- In FIG. 14A, the camera icon “camera 4” and its pop-up are hidden behind a building located near the center. In such a state, when the user tries to select the icon, selection may be difficult or even impossible.
- Therefore, the synthesis unit 125 makes unselected models such as buildings, trees, or the ground transparent, thereby facilitating selection of the camera icon “camera 4”. This transparency processing of predetermined objects may also be performed in the initial image. Furthermore, the objects to be made transparent are not limited to the above; for example, a pop-up or the predetermined area on the two-dimensional projection image may be made transparent. As a result, the visibility of icons and pop-ups that would otherwise be hidden by buildings, the predetermined area, and the like can be improved, and the operability of camera icon selection operations can be improved.
- FIG. 15A is a diagram showing an example of a two-dimensional projection image before adjusting the size of the pop-up.
- FIG. 15B is a diagram illustrating an example of a two-dimensional projection image after adjusting the size of the pop-up.
- the pop-up of the camera icon “camera 3” is displayed particularly small.
- The pop-up displays a captured image from the imaging device 20 corresponding to “camera 3”, but if it is displayed too small, visibility suffers.
- Therefore, the combining unit 125 adjusts the sizes of the pop-ups of the camera icons “Camera 3” and “Camera 4” to match the pop-up of the camera icon “Camera 1”, thereby ensuring their visibility.
- The size of the pop-up can be adjusted according to the size of the pop-up positioned closest to the front in the two-dimensional projection image, or according to a size determined appropriately for the display screen size.
- the size of the pop-up may be adjusted in the initial image. Although the size of the camera icon may be adjusted, it is not necessary to adjust the camera icon so that the positional relationship of the camera in the two-dimensional projection image can be understood. As a result, the visibility of the pop-up can be improved.
- It is also possible to instruct the imaging direction of the imaging device 20 according to a predetermined user operation on the composite image, and to generate a composite image based on the captured image captured by the imaging device 20 and the imaging direction received in response to the instruction. Therefore, in the fourth embodiment, the imaging direction of the imaging device 20 is instructed according to a predetermined user operation on the composite image, and a composite image is generated based on the captured image captured by the imaging device 20 and the imaging direction received in response to the instruction.
- FIG. 16 is a diagram illustrating a configuration example of an image processing apparatus according to the fourth embodiment.
- In FIG. 16, detailed description of components having the same functions as those of the image processing apparatus 10 according to the first embodiment may be omitted.
- In the fourth embodiment, the image processing apparatus 10 includes a three-dimensional model storage unit 111, a viewpoint information storage unit 112, an operation unit 121, a communication processing unit 122, an acquisition unit 123, a control unit 124, a synthesis unit 125, an output unit 126, and a switching instruction unit 127.
- The switching instruction unit 127 calculates the imaging direction or the focal length of the zoom lens according to a predetermined user operation on the composite image, and instructs the imaging apparatus 20 to switch to the calculated imaging direction or focal length of the zoom lens.
- the processing by the operation unit 121 and the control unit 124 related to the processing by the switching instruction unit 127 will be described.
- the operation unit 121 accepts a drag operation with a mouse or a touch panel in a state where a camera icon is selected by a user operation, that is, in a state where a composite image is displayed.
- the drag operation refers to a drag operation by a mouse as a pointing device and a wheel operation by a mouse wheel from an arbitrary position in a two-dimensional projection image or a predetermined region of the two-dimensional projection image.
- the drag operation refers to a drag operation or a pinch operation on the touch panel from an arbitrary position in a two-dimensional projection image or a predetermined area of the two-dimensional projection image.
- the control unit 124 outputs the direction and distance of the drag operation received by the operation unit 121 to the switching instruction unit 127.
- the moving direction of the imaging direction is the direction of the drag operation on the composite image.
- the focal length of the zoom lens is obtained by converting the wheel operation or pinch operation on the composite image into the focal length of the zoom lens.
- the movement angle in the imaging direction is obtained by converting the distance (pixel) of the drag operation on the composite image into the movement angle (degree) in the imaging direction.
- For example, one pixel of drag on the composite image is converted into a movement angle of 0.05 degrees in the imaging direction.
- Then, the switching instruction unit 127 transmits, via the communication processing unit 122, an instruction to switch according to the calculated focal length of the zoom lens and the calculated moving direction and movement angle of the imaging direction to the imaging device 20 corresponding to the selected camera icon.
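- A sketch of this conversion, using the 0.05 degrees-per-pixel figure from the example above and the sign convention noted below (the camera moves opposite to the drag), could look as follows:

```python
# Sketch of the conversion performed by the switching instruction unit 127:
# a drag of (dx, dy) pixels becomes a pan/tilt movement angle.
DEG_PER_PIXEL = 0.05   # example value from the text

def drag_to_pan_tilt(dx_px: float, dy_px: float):
    """dx_px > 0 means a drag to the right, dy_px > 0 a drag upward (assumed convention)."""
    pan_delta = -dx_px * DEG_PER_PIXEL    # drag right -> camera pans left
    tilt_delta = -dy_px * DEG_PER_PIXEL   # drag up -> camera tilts down
    return pan_delta, tilt_delta
```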
- Thereafter, as in the first embodiment, a two-dimensional projection image corresponding to the pan information, tilt information, and zoom information received from the imaging device 20 is generated, and a composite image is generated by combining the captured image with a predetermined region of the two-dimensional projection image.
- FIG. 17A is a diagram illustrating an example of a composite image before a drag operation is performed.
- FIG. 17B is a diagram illustrating an example of a composite image when a drag operation is performed in the right direction.
- FIG. 17C is a diagram illustrating an example of a composite image when a drag operation is performed in the upward direction.
- FIG. 17A shows the composite image before a drag operation is performed, which is generated for the imaging device 20 corresponding to the camera icon “camera 2” as described in the first embodiment. The screen transition when the user performs a drag operation in the right direction on the composite image shown in FIG. 17A is shown in FIG. 17B, and the screen transition when the user performs a drag operation in the upward direction is shown in FIG. 17C.
- When a drag operation in the right direction is performed, the switching instruction unit 127 calculates, based on the direction (right) and distance of the drag operation, the focal length of the zoom lens and the moving direction and movement angle of the imaging direction for the imaging device 20 corresponding to the camera icon “camera 2”, and instructs the imaging device 20 with the focal length of the zoom lens, the leftward camera direction, and the movement angle. The focal length of the zoom lens, the moving direction, and the movement angle are obtained by converting the direction and distance of the drag operation, the wheel operation, or the pinch operation, respectively. Note that the direction of the drag operation is opposite to the direction of camera movement.
- Then, the synthesizing unit 125 generates a two-dimensional projection image corresponding to the new pan information, tilt information, and zoom information received from the imaging device 20, and generates the composite image shown in FIG. 17B by combining the captured image at that time with a predetermined region of the generated two-dimensional projection image.
- Similarly, when a drag operation in the upward direction is performed, the switching instruction unit 127 calculates, based on the direction (upward) and distance of the drag operation, the moving direction and movement angle of the imaging direction for the imaging device 20 corresponding to the camera icon “camera 2”, and instructs the imaging device 20 with the downward camera direction, the movement angle, and the like. In this case, no instruction to change the focal length of the zoom lens is given.
- the moving direction and the moving angle are obtained by converting the direction and distance of the drag operation, the wheel operation or the pinch operation, respectively.
- Then, the synthesizing unit 125 generates a two-dimensional projection image corresponding to the new pan information, tilt information, and zoom information received from the imaging device 20, and generates the composite image shown in FIG. 17C by combining the captured image at that time with a predetermined region of the generated two-dimensional projection image.
- FIG. 18 is a flowchart illustrating an example of the flow of overall processing according to the fourth embodiment.
- description of the same processing as the overall processing according to Embodiment 1 shown in FIG. 7 may be omitted.
- the processing in steps S401 to S406 is the same as the processing in steps S101 to S106.
- When the operation unit 121 of the image processing apparatus 10 accepts an operation for switching the imaging direction of the camera on the composite image (Yes in step S407), the switching instruction unit 127 instructs the imaging device 20 to switch the imaging direction or the focal length of the zoom lens according to the drag operation. Then, the combining unit 125 generates a two-dimensional projection image based on the pan information, tilt information, and zoom information received from the imaging device 20, and generates a composite image by combining the captured image of the imaging device 20 corresponding to the selected camera icon with a predetermined region of the generated two-dimensional projection image (step S408).
- the output unit 126 displays the composite image on the display screen (step S409).
- When the operation unit 121 has not received an operation for switching the imaging direction of the camera on the composite image (No in step S407), the process ends.
- FIG. 19 is a flowchart illustrating an example of the flow of image processing according to the fourth embodiment. Note that the image processing according to the fourth embodiment mainly refers to the processing in step S408.
- When a drag operation or a pinch operation (wheel operation) is received in a state where a camera icon is selected by a user operation on the operation unit 121, the switching instruction unit 127 determines whether the received operation is a drag operation (step S501). When the operation is a drag operation (Yes at step S501), the switching instruction unit 127 calculates the movement direction and movement angle of the imaging direction for the imaging device 20 according to the drag distance on the composite image (step S502). On the other hand, when the operation is a pinch operation (No in step S501), the switching instruction unit 127 calculates the focal length of the zoom lens for the imaging device 20 according to the pinch distance on the composite image (step S503).
- the moving direction of the imaging direction is opposite to the direction of the drag operation.
- the movement angle in the imaging direction can be obtained by converting the distance (pixel) of the drag operation into the movement angle (degree) in the imaging direction as one aspect.
- the focal length of the zoom lens can be obtained by converting a pinch operation or a wheel operation into a focal length of the zoom lens as one aspect.
- Then, the switching instruction unit 127 transmits the focal length of the zoom lens, the moving direction and movement angle of the imaging direction, and the like as control signals to the imaging device 20 via the communication processing unit 122, thereby instructing the imaging device 20 to switch the focal length of the zoom lens and the imaging direction (step S504).
- The combining unit 125 acquires, via the control unit 124, the captured image, pan information, tilt information, and zoom information acquired by the acquisition unit 123, and acquires the 3D model, icons, and icon positions (camera positions) from the 3D model storage unit 111.
- Note that until these are received, the communication processing unit 122 is in a state of waiting for reception of the captured image, pan information, tilt information, and zoom information.
- The synthesis unit 125 determines a viewpoint, a gazing point, and an angle of view for the three-dimensional model based on the camera position, pan information, tilt information, and zoom information. Subsequently, the synthesis unit 125 generates a two-dimensional projection image from the three-dimensional model data based on the determined viewpoint, gazing point, and angle of view (step S506). At this time, the synthesizing unit 125 arranges each icon in the two-dimensional projection image based on the icon positions, and synthesizes the captured image captured by the imaging device 20 corresponding to each camera icon into a pop-up region set near that camera icon. Thereafter, the combining unit 125 generates a combined image by combining the captured image with a predetermined area on the generated two-dimensional projection image (step S507).
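- As an illustration of how a viewpoint, gazing point, and angle of view could be derived from a camera position and PTZ information for rendering the two-dimensional projection image, the following sketch may help. Placing the gazing point one unit along the imaging direction, the assumed sensor width, and the pinhole angle-of-view relation are assumptions for the sketch, not requirements of the specification.

```python
import math

def viewpoint_from_ptz(camera_pos, pan_deg, tilt_deg, focal_mm, sensor_width_mm=6.4):
    """Sketch: derive a viewpoint, gazing point, and horizontal angle of view
    for rendering the 3D model from the camera position and PTZ information.

    Assumptions (not from the specification): the viewpoint is placed at the
    camera position, the gazing point is one unit along the pan/tilt direction,
    and the angle of view follows the usual pinhole relation from the focal
    length and an assumed sensor width.
    """
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    # Unit direction vector of the imaging direction (z is the vertical axis).
    direction = (
        math.cos(tilt) * math.cos(pan),
        math.cos(tilt) * math.sin(pan),
        math.sin(tilt),
    )
    viewpoint = tuple(camera_pos)
    gazing_point = tuple(c + d for c, d in zip(camera_pos, direction))
    angle_of_view = 2.0 * math.degrees(math.atan(sensor_width_mm / (2.0 * focal_mm)))
    return viewpoint, gazing_point, angle_of_view

print(viewpoint_from_ptz((1.0, 2.0, 3.0), pan_deg=30.0, tilt_deg=-10.0, focal_mm=8.0))
```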
- As described above, the image processing apparatus 10 instructs the focal length of the zoom lens and the imaging direction of the imaging apparatus 20 by a drag operation by the user on the composite image, generates a two-dimensional projection image corresponding to the focal length of the zoom lens and the imaging direction used by the imaging apparatus 20 in accordance with the instruction, and synthesizes the captured image with a predetermined region of the generated two-dimensional projection image and outputs the result.
- FIG. 20 is a flowchart illustrating an example of the flow of image processing according to a modification of the fourth embodiment.
- When a drag operation or a pinch operation (wheel operation) is received while a camera icon is selected by a user operation on the operation unit 121, the switching instruction unit 127 determines whether the received operation is a drag operation (step S601). When the operation is a drag operation (Yes in step S601), the switching instruction unit 127 calculates the movement direction and movement angle of the imaging direction for the imaging device 20 according to the drag distance on the composite image (step S602).
- On the other hand, when the operation is a pinch operation (No in step S601), the switching instruction unit 127 calculates the focal length of the zoom lens for the imaging device 20 according to the pinch distance on the composite image (step S603).
- Note that the moving direction of the imaging direction is opposite to the direction of the drag operation.
- As one aspect, the movement angle of the imaging direction can be obtained by converting the distance (pixels) of the drag operation into a movement angle (degrees) of the imaging direction.
- As one aspect, the focal length of the zoom lens can be obtained by converting the distance of the pinch operation or the wheel operation into a focal length of the zoom lens.
- the switching instruction unit 127 instructs the synthesizing unit 125 to generate a two-dimensional projection image according to the calculated focal length of the zoom lens and the moving direction and moving angle of the imaging direction.
- the composition unit 125 generates a two-dimensional projection image according to the focal length of the zoom lens and the moving direction and moving angle in the imaging direction (step S604).
- the two-dimensional projection image generated by the synthesis unit 125 is displayed on the screen by the output unit 126.
- Then, the switching instruction unit 127 transmits the focal length of the zoom lens, the movement direction and movement angle of the imaging direction, and the like as control signals to the imaging device 20 via the communication processing unit 122, thereby instructing the imaging device 20 to switch the imaging direction and the focal length of the zoom lens (step S605). Further, when no drag operation or pinch operation has been performed on the operation unit 121 for a predetermined time (Yes in step S606), the switching instruction unit 127 requests the imaging device 20, via the communication processing unit 122, to transmit the captured image, pan information, tilt information, and zoom information (step S607).
- On the other hand, when a user operation (drag operation or pinch operation) on the operation unit 121 is performed within the predetermined time (No in step S606), the process of step S601 is executed again. The user may make fine adjustments through drag or pinch operations, and if no further operation is performed after the predetermined time has elapsed, the adjustment at that point is treated as confirmed and the subsequent processing is executed. In other words, while a drag or pinch operation is still in progress, it is not yet determined from which direction the user wants to monitor; therefore, when no user operation has been performed after the predetermined time has elapsed, it is assumed that the user wants to monitor from the current direction, and the subsequent processing is executed.
- the switching instruction unit 127 makes a transmission request for the captured image, pan information, tilt information, and zoom information to the imaging device 20 corresponding to the selected camera icon (step S607).
- the imaging device 20 is operating based on the information transmitted in step S605.
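- The following sketch illustrates one possible implementation of the flow described above for this modification: every drag or pinch updates the preview and sends a control signal (corresponding to step S605), while the request for the camera's captured image and PTZ information (corresponding to step S607) is issued only once no operation has arrived for a predetermined time (step S606). The one-second timeout and the callback names are assumptions.

```python
import time

class DragCommitTracker:
    """Sketch of the flow in FIG. 20: each drag/pinch updates the local 2D
    projection preview and sends a control signal to the camera (step S605);
    the request for the camera's captured image and PTZ information (step
    S607) is issued only after no operation has arrived for a predetermined
    time (step S606). The timeout value and the callback names are
    illustrative assumptions, not values from the specification."""

    def __init__(self, send_control, request_imaging_info, idle_seconds=1.0):
        self.send_control = send_control
        self.request_imaging_info = request_imaging_info
        self.idle_seconds = idle_seconds
        self.last_operation_time = None
        self.waiting_for_idle = False

    def on_user_operation(self, control_values):
        # Preview generation by the synthesizing unit would happen here, then
        # the control signal is transmitted to the imaging device.
        self.send_control(control_values)
        self.last_operation_time = time.monotonic()
        self.waiting_for_idle = True

    def poll(self):
        # Called periodically: once the user has been idle long enough, the
        # adjustment is treated as confirmed and the imaging information is
        # requested from the camera.
        if not self.waiting_for_idle:
            return
        if time.monotonic() - self.last_operation_time >= self.idle_seconds:
            self.request_imaging_info()
            self.waiting_for_idle = False
```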
- The combining unit 125 acquires, via the control unit 124, the captured image, pan information, tilt information, and zoom information acquired by the acquisition unit 123, and acquires the 3D model, icons, and icon positions (camera positions) from the 3D model storage unit 111.
- Note that until these are received, the communication processing unit 122 is in a state of waiting for reception of the captured image, pan information, tilt information, and zoom information.
- The composition unit 125 determines the viewpoint, gazing point, and angle of view for the three-dimensional model based on the camera position, pan information, tilt information, and zoom information. Subsequently, the synthesizing unit 125 generates a two-dimensional projection image from the three-dimensional model data based on the determined viewpoint, gazing point, and angle of view (step S609). At this time, the synthesizing unit 125 arranges each icon in the two-dimensional projection image based on the icon positions, and synthesizes the captured image captured by the imaging device 20 corresponding to each camera icon into a pop-up region set near that camera icon. Thereafter, the combining unit 125 generates a combined image by combining the captured image with a predetermined area on the generated two-dimensional projection image (step S610).
- In this way, because the image processing apparatus 10 generates and displays a two-dimensional projection image corresponding to the user's drag operation on the composite image before generating the two-dimensional projection image based on the information from the imaging apparatus 20, the camera switching position of the imaging apparatus 20 can be matched in advance, to some extent, to the position desired by the user, and the operability related to camera switching can be improved.
- FIG. 21 is a flowchart illustrating an example of the flow of image processing according to the fifth embodiment. Note that the image processing according to the fifth embodiment mainly refers to the processing in step S408.
- the synthesizing unit 125 rotates the synthesized image in accordance with the user's drag operation on the operation unit 121 with the selected camera icon as an axis (step S701).
- the rotation range of the composite image may be limited.
- the rotated composite image is generated and displayed as appropriate according to the drag direction, drag distance, and the like.
- In addition, the switching instruction unit 127 calculates the moving direction and moving angle of the imaging direction for the imaging device 20 according to the drag distance on the composite image.
- Note that the switching instruction unit 127 also calculates the focal length of the zoom lens for the imaging device 20 according to the pinch distance on the composite image when a pinch operation (wheel operation) is performed. Since the movement of the imaging direction and the focal length of the zoom lens may be calculated in the same manner as in the fourth embodiment, detailed description thereof is omitted here. However, since the actual camera (imaging device 20) has limits on the imaging direction, the focal length of the zoom lens, and the like, the calculated values are changed to suitable values according to those limits.
- Then, the switching instruction unit 127 transmits the focal length of the zoom lens, the movement direction and movement angle of the imaging direction, and the like as control signals to the imaging device 20 via the communication processing unit 122, thereby instructing the imaging device 20 to switch the imaging direction and the focal length of the zoom lens (step S703).
- Otherwise, the process of step S701 is executed again.
- Note that until these are received, the communication processing unit 122 is in a state of waiting for reception of the captured image, pan information, tilt information, and zoom information.
- The composition unit 125 determines the viewpoint, gazing point, and angle of view for the three-dimensional model based on the camera position, pan information, tilt information, and zoom information. Subsequently, the composition unit 125 generates a two-dimensional projection image from the three-dimensional model data based on the determined viewpoint, gazing point, and angle of view (step S705). At this time, the synthesizing unit 125 arranges each icon in the two-dimensional projection image based on the icon positions, and synthesizes the captured image captured by the imaging device 20 corresponding to each camera icon into a pop-up region set near that camera icon. Thereafter, the combining unit 125 generates a combined image by combining the captured image with a predetermined region on the generated two-dimensional projection image (step S706).
- In this way, since the image processing apparatus 10 rotates the composite image around the camera icon in accordance with the drag operation, the viewer of the composite image can easily select a camera icon that does not appear in the composite image before the rotation. Further, with regard to the rotation of the composite image, when no other camera icon is selected even after a predetermined time has elapsed, the image processing apparatus 10 also instructs switching of the imaging direction of the camera and the focal length of the zoom lens, so that the viewer of the composite image can intuitively grasp the area captured by the camera.
- FIG. 22 is an image diagram showing an example of determining a base point according to the sixth embodiment.
- FIG. 23 is an image diagram showing an example of setting the camera operating speed according to the drag distance according to the sixth embodiment.
- The switching instruction unit 127 determines the position on the composite image at which the mouse-down or touch-down occurs as a base point.
- Then, the switching instruction unit 127 sets the direction opposite to the drag direction as the moving direction of the imaging direction, and sets the moving speed when operating the camera in that moving direction to a speed corresponding to the drag distance.
- For example, ranges in which the panning and tilting operations of the camera are accelerated stepwise are set in advance according to the distance from the base point. The switching instruction unit 127 then sets the moving speed when operating the camera in the moving direction of the imaging direction, based on the speed set stepwise according to the drag distance from the base point determined by the mouse-down or touch-down.
- The ranges of the moving speed are set so that the camera does not move while the drag stays in the vicinity of the base point, and so that the speed becomes “low speed”, “medium speed”, and then “high speed” as the drag passes beyond that vicinity.
- Note that the moving speed of the camera corresponding to each of “low speed”, “medium speed”, and “high speed” may be set arbitrarily. The speed ranges shown in FIG. 23 can also be changed arbitrarily, and need not be displayed.
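- A minimal sketch of such a stepwise speed mapping is shown below; the pixel thresholds and the speed values in degrees per second are illustrative assumptions only.

```python
def speed_from_drag_distance(distance_px):
    """Sketch of the stepwise speed mapping in FIG. 23: no movement near the
    base point, then "low", "medium", and "high" speed as the drag distance
    from the base point grows. The pixel thresholds and the degrees-per-second
    values are illustrative assumptions."""
    if distance_px < 20:        # neighbourhood of the base point: do not move
        return 0.0
    if distance_px < 80:        # low speed
        return 2.0              # deg/s (assumed)
    if distance_px < 160:       # medium speed
        return 6.0
    return 12.0                 # high speed

for d in (10, 50, 120, 300):
    print(d, speed_from_drag_distance(d))
```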
- FIG. 24 is a flowchart illustrating an example of the flow of image processing according to the sixth embodiment. Note that image processing according to the sixth embodiment mainly refers to processing in step S408.
- The switching instruction unit 127 determines the position on the composite image at which the mouse-down or touch-down occurs as the base point (step S801).
- Then, the switching instruction unit 127 determines a moving speed according to the drag distance from the determined base point (step S802).
- Then, the switching instruction unit 127 determines whether the determined moving direction and moving speed of the imaging direction differ from the instruction currently being given to the imaging device 20 (step S803). When the switching instruction unit 127 determines that they differ from the current instruction (Yes in step S803), it transmits the determined moving direction and moving speed of the imaging direction via the communication processing unit 122, thereby instructing the imaging device 20 to switch the imaging direction (step S804).
- On the other hand, when the switching instruction unit 127 determines that the determined moving direction and moving speed of the imaging direction are the same as the current instruction (No in step S803), it executes the process of step S802 again.
- Then, the combining unit 125 acquires, via the control unit 124, the captured image, pan information, and tilt information acquired by the acquisition unit 123, and acquires the 3D model, icons, and icon positions (camera positions) from the 3D model storage unit 111.
- Note that until these are received, the communication processing unit 122 is in a state of waiting for reception of the captured image and the imaging direction.
- The composition unit 125 determines the viewpoint and the gazing point for the three-dimensional model based on the camera position, pan information, and tilt information. Subsequently, the synthesizing unit 125 generates a two-dimensional projection image from the three-dimensional model data based on the determined viewpoint and gazing point and a preset angle of view (step S806). At this time, the synthesizing unit 125 arranges each icon in the two-dimensional projection image based on the icon positions, and synthesizes the captured image captured by the imaging device 20 corresponding to each camera icon into a pop-up region set near that camera icon. Thereafter, the combining unit 125 generates a combined image by combining the captured image with a predetermined area on the generated two-dimensional projection image (step S807).
- As described above, the image processing apparatus 10 instructs the imaging direction of the imaging apparatus 20 and the speed of the camera operation by a drag operation by the user on the composite image, generates a two-dimensional projection image corresponding to the imaging direction used by the imaging apparatus 20 in accordance with the instruction, and synthesizes the captured image with a predetermined area of the generated two-dimensional projection image and outputs the result.
- As a result, the viewer of the composite image can switch the camera of the imaging device 20 with a simple user operation, and can intuitively grasp the area imaged by the camera.
- FIG. 25 is an image diagram showing an example of determining a base point according to the seventh embodiment.
- FIG. 26 is an image diagram showing an example of setting the camera operation speed according to the drag speed according to the seventh embodiment.
- The switching instruction unit 127 determines the position on the composite image at which the mouse-down or touch-down occurs as a base point.
- Then, the switching instruction unit 127 sets the direction opposite to the drag direction as the moving direction of the imaging direction, and sets the moving speed when operating the camera in that moving direction to a speed corresponding to the drag speed. For example, as shown in FIG. 26, when a drag such as the arrow is performed, the “base point”, “reference point”, and “current point” of the drag operation are set as illustrated.
- That is, the switching instruction unit 127 sets the moving direction of the imaging direction based on the direction from the base point to the current point, and sets the moving speed when operating the camera in that moving direction based on the drag speed calculated from the distance from the reference point to the current point.
- FIG. 27 is a flowchart illustrating an example of the flow of image processing according to the seventh embodiment. Note that image processing according to the seventh embodiment mainly refers to processing in step S408.
- The switching instruction unit 127 determines the position on the composite image at which the mouse-down or touch-down occurs as the base point (step S901).
- Then, the switching instruction unit 127 calculates a position on the composite image every predetermined time, and sets the latest position as the current point and the immediately preceding position as the reference point. The switching instruction unit 127 then determines the moving direction of the imaging direction according to the direction from the base point to the current point. Further, the drag speed is calculated by dividing the distance between the reference point and the current point by the predetermined time, and the moving speed is set according to the drag speed (step S902).
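- The following sketch illustrates one way the moving direction and moving speed could be computed from the base point, reference point, and current point as described in step S902; the gain, sampling interval, and speed cap are illustrative assumptions.

```python
import math

def pan_tilt_command(base_point, reference_point, current_point, interval_s=0.1,
                     speed_gain=0.05, max_speed=12.0):
    """Sketch of step S902: the moving direction of the imaging direction is
    taken from the base point -> current point vector (inverted, since the
    imaging direction moves opposite to the drag), and the moving speed is
    proportional to the drag speed computed from the reference point ->
    current point distance divided by the sampling interval. speed_gain and
    max_speed are illustrative assumptions."""
    dir_x = current_point[0] - base_point[0]
    dir_y = current_point[1] - base_point[1]
    length = math.hypot(dir_x, dir_y) or 1.0
    move_direction = (-dir_x / length, -dir_y / length)   # opposite to drag

    drag_px = math.hypot(current_point[0] - reference_point[0],
                         current_point[1] - reference_point[1])
    drag_speed = drag_px / interval_s                     # px/s
    move_speed = min(max_speed, drag_speed * speed_gain)  # deg/s (assumed)
    return move_direction, move_speed

print(pan_tilt_command((100, 100), (150, 120), (180, 130)))
```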
- Then, the switching instruction unit 127 determines whether the set moving direction and moving speed of the imaging direction differ from the instruction currently being given to the imaging device 20 (step S903). When the switching instruction unit 127 determines that they differ from the current instruction (Yes in step S903), it transmits the set moving direction and moving speed of the imaging direction via the communication processing unit 122, thereby instructing the imaging device 20 to switch the imaging direction (step S904).
- On the other hand, when the switching instruction unit 127 determines that the set moving direction and moving speed of the imaging direction are the same as the current instruction (No in step S903), it executes the process of step S902 again.
- Then, the combining unit 125 acquires, via the control unit 124, the captured image, pan information, and tilt information acquired by the acquisition unit 123, and acquires the 3D model, icons, and icon positions (camera positions) from the 3D model storage unit 111.
- Note that until these are received, the communication processing unit 122 is in a state of waiting for reception of the captured image, pan information, and tilt information.
- The composition unit 125 determines the viewpoint and the gazing point for the three-dimensional model based on the camera position, pan information, and tilt information. Subsequently, the synthesizing unit 125 generates a two-dimensional projection image from the three-dimensional model data based on the determined viewpoint and gazing point and a preset angle of view (step S906). At this time, the synthesizing unit 125 arranges each icon in the two-dimensional projection image based on the icon positions, and synthesizes the captured image captured by the imaging device 20 corresponding to each camera icon into a pop-up region set near that camera icon. Thereafter, the synthesizing unit 125 synthesizes the captured image with a predetermined region on the generated two-dimensional projection image to generate a synthesized image (step S907).
- As described above, the image processing apparatus 10 instructs the moving direction and moving speed of the imaging direction of the imaging apparatus 20 by a drag operation by the user on the composite image, generates a two-dimensional projection image corresponding to the pan information and tilt information sent from the imaging apparatus 20 in accordance with the instruction, and synthesizes the captured image with a predetermined region of the generated two-dimensional projection image and outputs the result.
- In the embodiments described above, camera icons are arranged on the two-dimensional projection image of all monitoring areas, and when a camera icon is selected, a composite image in which the captured image captured by the camera (imaging device) corresponding to the selected camera icon is combined is displayed.
- In the present embodiment, a plurality of monitoring areas are arranged in a hierarchy, and a monitoring area of another hierarchy can be selected from the two-dimensional projection image of any monitoring area. In the two-dimensional projection image of the selected monitoring area, only the camera icons of the cameras associated with that monitoring area are arranged, and when a camera icon is selected, a composite image obtained by combining the captured image captured by the camera corresponding to the selected camera icon is displayed.
- the configuration of the monitoring system and the configuration of the imaging device are the same as those in the first embodiment, and thus description thereof is omitted.
- The image processing apparatus 10 includes a three-dimensional model storage unit 111, a viewpoint information storage unit 112, an operation unit 121, a communication processing unit 122, an acquisition unit 123, a control unit 124, a synthesis unit 125, and an output unit 126.
- the image processing apparatus 10 is an information processing apparatus such as a PC connected to a network camera such as the imaging apparatus 20 or the sensor 40 via the network 30 (see FIG. 1).
- Since the communication processing unit 122 and the control unit 124 are the same as those in the first embodiment, description thereof is omitted.
- the 3D model storage unit 111 stores a 3D model and the like.
- FIG. 28 is a diagram illustrating an example of information stored in the three-dimensional model storage unit 111.
- Specifically, the three-dimensional model storage unit 111 stores the hierarchy to which each monitoring area belongs, the three-dimensional model data of each monitoring area, the device ID, the icon type ID, and the icon position in association with each other.
- the details of the device ID, icon type ID, and icon position are the same as those in the first embodiment.
- the 3D model data indicates a 3D model of each of the monitoring areas (3D regions) described above (see FIG. 5).
- In the present embodiment, the monitoring areas have a hierarchical structure in which a plurality of monitoring areas are configured in layers, and a plurality of pieces of three-dimensional model data corresponding to the respective monitoring areas are stored.
- FIG. 29 is an explanatory diagram of the hierarchical structure of the monitoring area.
- As the monitoring areas, there are provided all monitoring areas, which indicates the entire monitored space; a mall area, which indicates the mall building within all monitoring areas; and, for each floor of the mall, a central area, a west second floor area, a west first floor area, an east second floor area, and an east first floor area.
- These monitoring areas are composed of three layers: all monitoring areas belongs to the first layer; the mall area belongs to the second layer, which is immediately below the first layer; and the central area, the west second floor area, the west first floor area, the east second floor area, and the east first floor area belong to the third layer, which is immediately below the second layer.
- For example, the 3D model storage unit 111 stores, in association with each other, the hierarchy “first hierarchy”, the 3D model data “all monitoring areas”, the device ID “#01” indicating the camera 1, the icon type ID “A001” indicating a camera as the icon type, and the icon position “(x1, y1, z1)”. Therefore, by referring to the three-dimensional model storage unit 111, the arrangement position of each device can be specified as in the first embodiment. In the present embodiment, by referring to the three-dimensional model storage unit 111, it is also possible to determine which hierarchy each monitoring area belongs to.
- the three-dimensional model storage unit 111 stores a three-dimensional part model of various icons in association with the icon type ID.
- the details are the same as those in the first embodiment, and thus the description thereof is omitted.
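- As an illustration only, the hierarchical association stored in the three-dimensional model storage unit 111 could be represented as records of the kind sketched below; the field names and example values are assumptions, not the actual storage format of the specification.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """One row of the kind of table shown in FIG. 28; field names are
    illustrative assumptions."""
    hierarchy: int            # 1 = first hierarchy, 2 = second, ...
    area_name: str            # identifies the monitoring area / 3D model data
    device_id: str            # e.g. "#01" for camera 1
    icon_type_id: str         # e.g. "A001" = camera icon
    icon_position: tuple      # (x, y, z) placement in the 3D model

RECORDS = [
    ModelRecord(1, "all monitoring areas", "#01", "A001", (1.0, 1.0, 1.0)),
    ModelRecord(2, "mall area",            "#02", "A001", (4.0, 2.0, 1.5)),
    ModelRecord(3, "central area",         "#06", "A001", (7.5, 3.0, 2.0)),
]

def icons_for_area(records, area_name):
    """Return the devices (icons) associated with one monitoring area, which is
    what the synthesizing unit needs when it builds the selection image."""
    return [r for r in records if r.area_name == area_name]

def hierarchy_of(records, area_name):
    """Look up which hierarchy a monitoring area belongs to."""
    for r in records:
        if r.area_name == area_name:
            return r.hierarchy
    return None

print(icons_for_area(RECORDS, "mall area"))
print(hierarchy_of(RECORDS, "central area"))
```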
- The viewpoint information storage unit 112 stores the viewpoint positions used when the synthesis unit 125 generates a two-dimensional projection image from the three-dimensional model data, and when it generates a selection image to be displayed for the user to select a monitoring area, a camera icon, or the like.
- FIG. 30 is a diagram illustrating an example of information stored in the viewpoint information storage unit 112. Specifically, the viewpoint information storage unit 112 stores the viewpoint positions used when generating the two-dimensional projection images of all monitoring areas, the mall area, the central area, the west second floor area, the west first floor area, the east second floor area, and the east first floor area, and the viewpoint positions used when generating the two-dimensional projection image of the monitoring area observed from each camera arrangement position, in association with viewpoint position IDs for identifying the viewpoint positions.
- A selection image is a two-dimensional projection image obtained by projecting the three-dimensional model of any monitoring area onto a two-dimensional surface, from which a monitoring area belonging to a layer different from the projected monitoring area, for example the layer immediately below or immediately above, can be selected.
- A selection image is also a two-dimensional projection image of a monitoring area in which only the camera icons corresponding to the cameras associated with that monitoring area are arranged, and from which a camera (imaging device) can be selected by selecting a camera icon.
- For example, the viewpoint information storage unit 112 stores the viewpoint position ID “B001” indicating all monitoring areas belonging to the first hierarchy and the viewpoint position “(x10, y10, z10)” in association with each other. Further, the viewpoint information storage unit 112 stores the viewpoint position ID “B003” indicating the central area belonging to the third hierarchy and the viewpoint position “(x14, y14, z14)” in association with each other. The viewpoint information storage unit 112 also stores the viewpoint position ID “B011” indicating the camera 1 and the viewpoint position “(x11, y11, z11)” in association with each other.
- the operation unit 121 includes an input device such as a mouse or a touch panel, and accepts various instructions by user operations of the user of the monitoring system 1. For example, the operation unit 121 receives a camera selection instruction (selection of a camera icon) from the user from the selected image. In addition, when a predetermined camera (imaging device 20) is selected by a user operation, the operation unit 121 receives an instruction to output a captured image of the selected imaging device 20. In addition, the operation unit 121 receives pan, tilt, and zoom setting instructions for a predetermined imaging device 20 by a user operation.
- pan, tilt, and zoom setting instructions are transmitted to the imaging apparatus 20 via the control unit 124 and the communication processing unit 122 as control signals related to the above-described pan, tilt, and zoom.
- the operation unit 121 receives an end instruction indicating that the process is to be ended from the user.
- the operation unit 121 accepts selection of a monitoring area belonging to a layer (other layer) different from the monitoring area displayed in the selected image, from the selected image on which the monitoring area is displayed. Specifically, the operation unit 121 accepts selection of a monitoring area belonging to a hierarchy immediately below or immediately above a hierarchy to which the monitoring area displayed in the selection image belongs, from a selection image on which the monitoring area is displayed. Furthermore, the operation unit 121 accepts selection of a monitoring area belonging to a lower layer or an upper layer than a hierarchy to which the monitoring area displayed in the selected image belongs, from the selected image in which the monitoring area is displayed.
- The operation unit 121 also accepts selection of a camera icon from the selected image. For example, when the selection image that is the two-dimensional projection image of all monitoring areas is displayed, the operation unit 121 accepts, from that selection image, selection of the mall area belonging to the layer immediately below, of monitoring areas such as the central area and the west second floor area belonging to lower layers, and of the camera icons of the cameras associated with all monitoring areas.
- the operation unit 121 corresponds to an area selection reception unit and a camera selection reception unit.
- FIG. 31 is an explanatory diagram showing monitoring areas or camera icons that can be selected from the selected image.
- As shown in FIG. 31, from the selection image that is the two-dimensional projection image of all monitoring areas, the mall area and the camera icons of the cameras 1 to 5 can be selected.
- From the selection image that is the two-dimensional projection image of the mall area, the central area, the west second floor area, the west first floor area, the east second floor area, and the east first floor area can be selected.
- Further, the camera icons of the cameras 6 and 7 can be selected from the selection image that is the two-dimensional projection image of the central area. The same applies to the other monitoring areas, namely the west second floor area, the west first floor area, the east second floor area, and the east first floor area.
- When a camera icon is selected, the acquisition unit 123 acquires the captured image captured by the camera (imaging device 20) corresponding to the selected camera icon, together with the pan information, tilt information, and zoom information as the imaging direction, from the imaging device 20 via the communication processing unit 122.
- the acquisition unit 123 acquires detection data from the sensor 40.
- the detection data may include information related to the detection direction of the sensor 40 (pan information, tilt information, etc.).
- When a camera icon is selected, the combining unit 125 generates a two-dimensional projection image in accordance with the viewpoint of the three-dimensional model of the monitoring area projected in the selected image and the imaging direction of the camera (imaging device 20) corresponding to the selected camera icon.
- The synthesizing unit 125 then synthesizes the captured image, which is the two-dimensional image captured by the camera (imaging device 20) corresponding to the selected camera icon, into the two-dimensional image region of the generated two-dimensional projection image.
- the combining unit 125 also arranges icons such as camera icons and sensor icons output by the control unit 124 in the two-dimensional projection image based on the icon positions.
- the details of the composition of the captured image and the arrangement of the icons are the same as in the first embodiment.
- When the operation unit 121 accepts selection of a monitoring area from the selected image, the combining unit 125 places a camera icon or a sensor icon at each icon position in the three-dimensional model of the selected monitoring area, that is, at the camera position of each camera or the sensor position of each sensor arranged in the selected monitoring area.
- Then, the synthesizing unit 125 acquires the viewpoint (viewpoint position) for the three-dimensional model of the selected monitoring area stored in the viewpoint information storage unit 112, and generates a selection image that is a two-dimensional projection image obtained by projecting the three-dimensional model of the selected monitoring area, in which the camera icons and sensor icons are arranged, onto a two-dimensional surface according to that viewpoint. At this time, if no camera or sensor is associated with the selected monitoring area, no camera icon or sensor icon is arranged.
- the output unit 126 outputs the synthesized image generated by the synthesizing unit 125.
- That is, the output unit 126 outputs a composite image including the various icons combined with the two-dimensional projection image by the combining unit 125, the captured images combined with regions near the camera icons, and the captured image combined with the superimposed two-dimensional image region.
- the output unit 126 outputs the selected image generated by the combining unit 125.
- the output unit 126 may be a display device that displays a composite image or a selected image, or may output the composite image or the selected image to a display device connected to the image processing device 10.
- FIG. 32 is a flowchart illustrating an example of the flow of overall processing according to the eighth embodiment.
- Since the processing in steps S2001 to S2009 is the same as the processing of FIG. 18 in the fourth embodiment, description thereof is omitted (see steps S401 to S409).
- However, in steps S2001 to S2003, the initial image in the fourth embodiment is replaced with the selected image, and if the determination in step S2007 is negative, the process proceeds to step S2013.
- In step S2004, when the operation unit 121 does not accept selection of a camera icon from the selected image (No in step S2004), the control unit 124 determines whether the operation unit 121 has accepted selection of a monitoring area from the selected image (step S2010).
- When selection of a monitoring area is accepted from the selected image (Yes in step S2010), the composition unit 125 generates a selection image that is a two-dimensional projection image of the selected monitoring area (step S2011). Then, the output unit 126 displays the selected image on the display screen (step S2012). On the other hand, when selection of a monitoring area is not accepted from the selected image (No in step S2010), the process proceeds to step S2013.
- Then, the control unit 124 determines whether an end instruction indicating that the process is to be ended has been received by the operation unit 121 (step S2013). If an end instruction has been accepted (Yes in step S2013), the process ends. On the other hand, if an end instruction has not been accepted (No in step S2013), the process returns to step S2004. Note that, for example, an icon or the like for the user to input an end instruction is displayed on the selected image.
- Note that the initial image described in the fourth embodiment is one of the selection images; when it is displayed in step S2011, a selection image that is a two-dimensional projection image of the corresponding monitoring area (all monitoring areas) is displayed.
- In addition, a selectable icon or the like for returning to the selection image of the previous process is displayed on the selection image, and when selection of that icon is accepted, the selection image of the previous process is displayed again (the display returns).
- Further, selectable icons or the like indicating the respective monitoring areas are displayed at a corner of the display screen on which the selection image is displayed, and when selection of one of the displayed monitoring area icons is accepted, a selection image that is a two-dimensional projection image of the monitoring area corresponding to the selected icon is generated and displayed. Thereby, it is possible to select not only the monitoring area immediately below the monitoring area displayed in the selection image, but also the monitoring areas immediately above, in upper layers, and in lower layers.
- FIG. 33 is a flowchart illustrating an example of the flow of selected image generation processing according to the eighth embodiment.
- the selected image generation process according to the eighth embodiment refers to the process in step S2011.
- the composition unit 125 of the image processing apparatus 10 acquires the three-dimensional model data of the selected monitoring area from the three-dimensional model storage unit 111 (Step S2021).
- the synthesizing unit 125 refers to the three-dimensional model storage unit 111 and determines whether there is a device (camera or sensor) associated with the three-dimensional model of the selected monitoring area (step S2022). If there is no associated device (No at step S2022), the process proceeds to step S2028 without combining icons.
- On the other hand, if there is an associated device (Yes in step S2022), the composition unit 125 refers to the three-dimensional model storage unit 111 and acquires the arrangement position of each device associated with the three-dimensional model of the selected monitoring area, that is, each icon and the icon position of each icon (step S2023).
- the composition unit 125 determines whether or not the associated device has a camera (imaging device 20) (step S2024). If there is no camera in the associated device (No at step S2024), the process proceeds to step S2027.
- On the other hand, when the associated devices include a camera (Yes in step S2024), the synthesizing unit 125 acquires the imaging direction of the camera transmitted by each imaging device 20 via the communication processing unit 122, the acquisition unit 123, and the control unit 124 (step S2025).
- Then, the combining unit 125 specifies the direction of each camera icon from the acquired imaging direction (step S2026).
- the synthesizing unit 125 synthesizes the icon of each device (only the camera or the camera and the sensor) at each icon position of the three-dimensional model (step S2027).
- At this time, the combining unit 125 arranges each camera icon in the three-dimensional model with the lens of the camera icon directed in the direction specified in step S2026.
- For the other icons, such as sensor icons, the icon is synthesized at the corresponding icon position in the three-dimensional model without special consideration of its orientation.
- Then, the synthesizing unit 125 acquires the viewpoint position associated with the viewpoint position ID of the selected monitoring area from the viewpoint information storage unit 112 (step S2028). Subsequently, the synthesizing unit 125 generates a two-dimensional projection image by rendering with an arbitrary method, such as projecting the three-dimensional model onto a projection plane (two-dimensional plane) based on the acquired viewpoint position and a preset gazing point and angle of view, and sets the generated two-dimensional projection image as the selection image (step S2029). As shown in FIG. 9, in the selection image, the captured image captured by the imaging device 20 corresponding to each camera icon is combined into a pop-up area set near that camera icon in the two-dimensional projection image.
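- The selection image generation flow of steps S2021 to S2029 can be summarized by the following sketch; the objects `store`, `viewpoints`, and `renderer` and their method names are placeholders standing in for the storage units and an arbitrary rendering routine, not an actual API.

```python
def generate_selection_image(store, viewpoints, renderer, area_name):
    """Sketch of the selection-image generation flow (steps S2021 to S2029).
    `store`, `viewpoints`, and `renderer` are placeholder objects standing in
    for the three-dimensional model storage unit, the viewpoint information
    storage unit, and an arbitrary rendering routine; their method names are
    illustrative assumptions."""
    model = store.model_data(area_name)                        # S2021
    devices = store.devices_for(area_name)                     # S2022 / S2023
    for device in devices:
        if device.is_camera:                                   # S2024
            direction = device.current_imaging_direction()     # S2025 / S2026
            model.place_icon(device.icon, device.icon_position,
                             facing=direction)                 # S2027
        else:
            model.place_icon(device.icon, device.icon_position)
    viewpoint = viewpoints.for_area(area_name)                 # S2028
    # Project the 3D model onto a 2D plane from the stored viewpoint; pop-up
    # regions near each camera icon receive that camera's captured image.
    return renderer.project(model, viewpoint)                  # S2029
```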
- As described above, in the present embodiment, the monitoring areas are configured hierarchically, and selection of a monitoring area in another hierarchy can be accepted from a selection image that is a two-dimensional projection image of any monitoring area.
- Since only the camera icons of the cameras associated with the displayed monitoring area are arranged, many camera icons are not displayed at the same time.
- When a camera icon is selected from the selection image that is the two-dimensional projection image of the selected monitoring area, a combined image obtained by combining the captured image captured by the camera corresponding to the selected camera icon is displayed. Therefore, it is easy to intuitively grasp, on a two-dimensional image, the position of each camera installed in the monitoring area, which is a three-dimensional space, and the operability can be improved.
- When the image processing apparatus 10 according to the first to eighth embodiments generates a two-dimensional projection image as a selected image (initial image), it identifies the viewpoint position associated with the monitoring area to be processed in the viewpoint information storage unit 112, and generates the two-dimensional projection image from the three-dimensional model data of the monitoring area to be processed based on that viewpoint position and a preset gazing point and angle of view.
- The image processing apparatus 10 according to the present embodiment can further change, as appropriate in accordance with an instruction from the user, the viewpoint position used when generating a two-dimensional projection image from the three-dimensional model data of the monitoring area to be processed.
- FIG. 34 is a diagram for explaining processing of the image processing apparatus 10 according to the ninth embodiment.
- FIG. 34 is a two-dimensional projection image of the entire monitoring area corresponding to the initial image.
- The vertical direction of the three-dimensional model of all monitoring areas is defined as the z-axis, a predetermined direction in the horizontal plane perpendicular to the vertical direction is defined as the x-axis, and the direction perpendicular to the x-axis in that plane is defined as the y-axis.
- In the present embodiment, it is assumed that a viewpoint range, which is a range of positions that can be set as the viewpoint and which limits the positions settable as the viewpoint, is set in advance in the three-dimensional model storage unit 111.
- The viewpoint range set in the image processing apparatus 10 according to the present embodiment is the range of an arc corresponding to a predetermined rotation angle range, centered on a rotation axis arranged on the three-dimensional model and having a predetermined value as its radius.
- As shown in FIG. 34, the image processing apparatus 10 can generate and output a two-dimensional projection image with the z-axis as the rotation axis while changing the viewpoint position along the arc within a predetermined rotation angle range, and can also generate and output a two-dimensional projection image with the y-axis as the rotation axis while changing the viewpoint position along the arc within a predetermined rotation angle range.
- the operation unit 121 receives an instruction to change the viewpoint position in response to a user operation. That is, the operation unit 121 functions as a change instruction receiving unit.
- The synthesizing unit 125 changes the viewpoint position based on the three-dimensional model stored in the three-dimensional model storage unit 111, the viewpoint position stored in the viewpoint information storage unit 112, and the viewpoint position indicated by the change instruction received by the operation unit 121, and generates a two-dimensional projection image in which the three-dimensional model is drawn based on the changed viewpoint. That is, the synthesizing unit 125 functions as a viewpoint position changing unit and a generating unit.
- FIG. 35 is a diagram illustrating an example of information stored in the three-dimensional model storage unit 111 of the image processing apparatus 10.
- the three-dimensional model storage unit 111 of the present embodiment further stores a viewpoint range in association with the three-dimensional model data. That is, the three-dimensional model storage unit 111 corresponds to a three-dimensional model storage unit and a viewpoint range storage unit.
- the viewpoint range includes a rotation axis position, a radius, and a rotation angle range with respect to the z axis and the y axis, respectively.
- the rotation axis position is the position of the rotation axis in the three-dimensional model.
- the radius is a value corresponding to the distance between the viewpoint and the gazing point.
- the rotation angle range is a rotation angle around the rotation axis corresponding to the viewpoint range.
- The rotation angle range is determined with reference to the viewpoint position associated with each monitoring area in the viewpoint information storage unit 112 described with reference to FIG. 30 in the eighth embodiment, and is set in the positive direction and the negative direction of rotation from that viewpoint position.
- In this way, the viewpoint range can be specified based on the rotation axis position, the radius, the rotation angle range, and the viewpoint position. That is, the three-dimensional model storage unit 111 stores the rotation axis position, the radius, and the rotation angle range as the viewpoint range.
- Instead of storing such coefficients for determining the viewpoint range, the three-dimensional model storage unit 111 may, as another example, store a plurality of coordinates of settable viewpoint positions as the viewpoint range.
- As yet another example, the 3D model storage unit 111 may further store, as part of the viewpoint range, a range over which the radius around the rotation axis can be changed.
- In short, the three-dimensional model storage unit 111 only needs to store, as the viewpoint range, information that can specify the coordinates that can be set as the viewpoint position, and the specific form of the viewpoint range information is not limited to that of the embodiment.
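- The following sketch illustrates how a requested rotation could be clamped to a stored viewpoint range defined by a rotation axis position, radius, and rotation angle range (here for a z-axis rotation only); the parameter names and example values are illustrative assumptions.

```python
import math

def clamp(value, lower, upper):
    return max(lower, min(upper, value))

def viewpoint_on_arc(axis_position, radius, angle_deg, angle_min_deg, angle_max_deg):
    """Sketch of constraining the viewpoint to a stored viewpoint range: the
    viewpoint lies on an arc of the given radius around a rotation axis (here
    the z-axis through axis_position), and the requested rotation angle is
    clamped to the stored rotation angle range. Parameter names are
    illustrative assumptions."""
    angle = math.radians(clamp(angle_deg, angle_min_deg, angle_max_deg))
    x = axis_position[0] + radius * math.cos(angle)
    y = axis_position[1] + radius * math.sin(angle)
    z = axis_position[2]
    return (x, y, z)

# Example: a z-axis viewpoint range allowing rotation between -120 and +120 degrees;
# a request for 200 degrees is clamped to the upper limit.
print(viewpoint_on_arc((0.0, 0.0, 10.0), radius=50.0, angle_deg=200.0,
                       angle_min_deg=-120.0, angle_max_deg=120.0))
```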
- The rotation axis position stored in the viewpoint information storage unit 112 is a value at which a two-dimensional projection image from which it is difficult for the user to grasp the three-dimensional model, such as a two-dimensional projection image including a blank area where no three-dimensional model exists, is never generated and displayed. From this point of view, the rotation axis position is determined in advance by a designer or the like and registered in the viewpoint information storage unit 112.
- Similarly, the rotation angle range stored in the viewpoint information storage unit 112 is a range that excludes rotation angles at which a two-dimensional projection image observed from a position the user is not supposed to view, such as the underside of the ground of the three-dimensional model of all monitoring areas, would be generated and displayed. It is assumed that the rotation angle range is also determined in advance by a designer or the like and registered in the viewpoint information storage unit 112.
- As described above, since a viewpoint range is set in advance so that a two-dimensional projection image projected in an observation direction that the designer of the 3D model or the like does not assume the user will view is not generated, only the two-dimensional projection images assumed by the designer or the like are provided to the user.
- FIGS. 36A to 36C are diagrams showing two-dimensional projection images drawn when the viewpoint position is changed.
- The two-dimensional projection image of all monitoring areas shown in FIG. 34 is a two-dimensional projection image drawn at a viewpoint position within the viewpoint range about the z-axis stored in the three-dimensional model storage unit 111 in association with all monitoring areas.
- FIG. 36A is a two-dimensional projection image in which the three-dimensional model is drawn at the viewpoint position obtained by rotating the viewpoint position of the two-dimensional projection image shown in FIG. 34 by 90° clockwise around the z-axis, as seen from above the three-dimensional model in the z-axis direction.
- FIG. 36B is a two-dimensional projection image in which the three-dimensional model is drawn at the viewpoint position obtained by rotating the viewpoint position of the two-dimensional projection image shown in FIG. 36A by a further 90° clockwise.
- FIG. 36C is a two-dimensional projection image in which the three-dimensional model is similarly drawn at the viewpoint position obtained by rotating the viewpoint position of the two-dimensional projection image shown in FIG. 36B by a further 90°.
- FIG. 37 is a diagram for explaining the viewpoint range with the y-axis as the rotation axis for all monitoring areas.
- As shown in FIG. 37, the rotation angle range corresponding to the viewpoint range about the y-axis is set to an upper limit of 40° in the positive z-axis direction of the three-dimensional model and a lower limit of 20° in the negative z-axis direction.
- The viewpoint position may also be rotated in the negative direction around the y-axis, but if it is rotated greatly in the negative direction, the model is viewed as if looking up from the basement, which conversely makes it difficult to grasp the positional relationship. Therefore, the rotation angle range in the negative direction is set to a value smaller than the rotation angle range in the positive direction.
- FIGS. 38A and 38B are diagrams showing two-dimensional projection images drawn when the viewpoint position is changed within the viewpoint range with the y-axis as the rotation axis.
- When the viewpoint position is changed within this viewpoint range, a two-dimensional projection image as shown in FIG. 38B, for example, is obtained.
- In the present embodiment, the viewpoint range for any monitoring area is a range having the z-axis and the y-axis as rotation axes; however, the direction, position, and number of rotation axes are not limited to those of the embodiment.
- For example, a rotation axis extending in a direction other than the three x, y, and z axes may be set, and the number of rotation axes may be one, or may be three or more.
- FIG. 39 is a flowchart of the image processing apparatus 10 according to the ninth embodiment. Note that the processing from step S3001 to step S3013 shown in FIG. 39 is the same as the processing from step S2001 to step S2013 according to the eighth embodiment.
- The user changes the viewpoint position by operating the operation unit 121.
- For example, the direction in which the viewpoint position is changed is designated by moving a cursor displayed on the display screen with a mouse or the like, and the moving distance of the viewpoint position is designated according to the moving speed of the cursor.
- When the operation unit 121 accepts an instruction to change the viewpoint position in response to a user operation (No in step S3004, No in step S3010, Yes in step S3020), the synthesis unit 125 first identifies a viewpoint position within the viewpoint range according to the change instruction. Specifically, for example, when the user moves the cursor on the display screen, the synthesis unit 125 specifies the amount of change of the viewpoint position within the viewpoint range from the amount of movement of the cursor, and specifies the changed viewpoint position based on the specified amount of change.
- The synthesizing unit 125 then generates, based on the changed viewpoint position, a two-dimensional projection image from the three-dimensional model data of the three-dimensional model corresponding to the selection image displayed when the change instruction was received, that is, a selection image after the viewpoint change (step S3021).
- At this time, when devices are associated with the monitoring area, the synthesis unit 125 extracts the icons of the associated devices, arranges the extracted icons in the three-dimensional model, and generates the selection image from the three-dimensional model data of the three-dimensional model in which the icons are arranged.
- Then, the output unit 126 outputs the selection image after the viewpoint change, the output selection image is displayed on the display screen (step S3022), and the process proceeds to step S3013. When a camera icon is selected in the selection image displayed in step S3022, a composite image in which the captured image captured by the imaging device 20 corresponding to the selected camera icon is synthesized is generated and displayed by the processing in steps S3004 to S3006.
- As described above, when the viewpoint position is changed in accordance with a change instruction from the user, the viewpoint position can only be changed within the preset rotation axis and rotation angle range. Therefore, two-dimensional projection images unnecessary for the user are not displayed, and the user can display a two-dimensional projection image in which a desired area can be observed with a simple operation.
- Since the image processing apparatus 10 stores a plurality of monitoring areas and the viewpoint ranges used for displaying the selection screens in association with each other, the viewpoint position of each selection screen can be changed.
- The image processing apparatus 10 may further store, in association with the three-dimensional model, a viewpoint position and a viewpoint range for generating a confirmation image that the user uses to select a camera and to confirm the direction of the lens of the camera, and may display the confirmation image so that its viewpoint position can be changed.
- Thereby, a two-dimensional projection image obtained by observing the three-dimensional model from a viewpoint that allows the direction of the lens of the camera to be confirmed easily is generated and displayed as the confirmation image, and the three-dimensional model expressed in the two-dimensional projection image can further be displayed as a two-dimensional projection image (confirmation image) rotated according to an instruction from the user.
- In the embodiments described above, a captured image (hereinafter also referred to as a monitoring image) acquired by the imaging device 20 is displayed live on the image processing device 10 using a live streaming technique or the like (hereinafter referred to as live streaming playback).
- The imaging information or event information acquired by the image processing apparatus 10 from the imaging apparatus 20 or the sensor 40 via the network 30 may also be recorded in a storage device provided in the image processing apparatus 10 or a storage device arranged on the network 30, and used for later reproduction of the monitoring image (hereinafter referred to as recording playback in order to distinguish it from live streaming playback).
- a mode in which the image processing apparatus 10 includes a storage device that stores imaging information and event information will be described as a tenth embodiment.
- FIG. 40 is a diagram illustrating a configuration example of an image processing device 4010 according to the tenth embodiment.
- the image processing device 4010 includes an image storage unit 4127 and an event storage unit 4128 in addition to the same configuration as the image processing device 10 shown in FIG.
- Each of the image storage unit 4127 and the event storage unit 4128 may use various storage devices such as a hard disk built in a personal computer that implements the image processing apparatus 4010 and a hard disk externally attached to the personal computer.
- The imaging information transmitted from the imaging device 20 includes, in addition to the captured image, the device ID of the imaging device 20 that acquired the captured image and the imaging time at which the captured image was acquired.
- The device ID and the imaging time are sent from the control unit 204 of the imaging apparatus 20 to the compression unit 202, and are added to the header or footer of the captured image together with the pan information, tilt information, and zoom information.
- pan information, tilt information, and zoom information are collectively referred to as PTZ information.
- the acquisition unit 123 that has acquired the imaging information from the network 30 via the communication processing unit 122 identifies the captured image, device ID, imaging time, and PTZ information included in the imaging information, and inputs them to the control unit 124.
- The control unit 124 uses the captured image, device ID, imaging time, and PTZ information acquired by the acquisition unit 123 to perform live streaming playback of the monitoring image against the background of the two-dimensional projection image according to each of the embodiments described above, and also stores the captured image, device ID, imaging time, and PTZ information in the image storage unit 4127.
- Note that the captured image, device ID, imaging time, and PTZ information may be stored in the image storage unit 4127 from the control unit 124 even when live streaming playback of the monitoring image is not performed.
- When storing the captured image, device ID, imaging time, and PTZ information in the image storage unit 4127, the control unit 124 identifies the device ID of the imaging device 20 currently selected by the user. When the device ID of the selected imaging device 20 matches the device ID to be stored, the control unit 124 adds a predetermined selection flag to the data set to be stored (captured image, device ID, imaging time, and PTZ information) and stores them in the image storage unit 4127.
- the predetermined selection flag is flag information indicating that the associated captured image is a captured image acquired by the imaging device 20 while the imaging device 20 is selected.
- On the other hand, when the corresponding captured image was acquired by an imaging device 20 that was not selected by the user, the control unit 124 may add a selection flag indicating that fact to the data set to be stored and store it in the image storage unit 4127.
- the selection flag at this time may be null data.
- a selection flag indicating that the image is acquired while the imaging device 20 is in a selected state is referred to as a selection flag ('1').
- a selection flag indicating that the captured image is acquired during non-selection is referred to as a non-selection flag ('0').
- a captured image, a device ID, an imaging time, PTZ information, and a selection flag are distinguished and stored for each imaging device 20.
- a device ID is used for this distinction.
- a table management method is used for data management in the image storage unit 4127.
- FIG. 41 is a diagram illustrating an example of a captured image management table held by the image storage unit 4127.
- As shown in FIG. 41, each captured image is registered in association with the device ID of the imaging device 20 (camera) that captured it, the imaging time, PTZ information, and a selection flag.
- Note that, for the sake of simplicity, FIG. 41 shows an example of a captured image management table for the case where the monitoring image (captured image) is a series of continuous still images; however, the table is not limited to this form.
- When the captured image is compressed in a moving image compression format made up of a plurality of frames, such as MPEG-4, a series of captured images acquired in one shooting session may be managed as one moving image file. In that case, PTZ information, selection flags, and the like that change over time may be attached to the corresponding time zones of the captured image using a metadata description method such as MPEG-7.
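- As a rough illustration of the data layout described above, the following Python sketch models one row of the captured image management table and a per-camera store; it is only a sketch, and every name in it (CapturedImageRecord, ImageStore, and so on) is invented for illustration rather than taken from the embodiment.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List, Tuple

@dataclass
class CapturedImageRecord:
    """One row of the captured image management table (cf. FIG. 41)."""
    image: bytes                      # compressed captured image (e.g. one JPEG frame)
    device_id: str                    # device ID of the camera that acquired the image
    imaging_time: datetime            # imaging time
    ptz: Tuple[float, float, float]   # (pan, tilt, zoom) at the time of imaging
    selected: bool                    # True -> selection flag '1', False -> non-selection flag '0'

class ImageStore:
    """Minimal stand-in for the image storage unit 4127, keyed by device ID."""
    def __init__(self) -> None:
        self._tables: Dict[str, List[CapturedImageRecord]] = {}

    def put(self, record: CapturedImageRecord) -> None:
        # records are kept separately for each imaging device, distinguished by device ID
        self._tables.setdefault(record.device_id, []).append(record)

    def records_for(self, device_id: str) -> List[CapturedImageRecord]:
        return self._tables.get(device_id, [])
```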
- the event information transmitted from the sensor 40 includes, in addition to event detection data, the device ID of the sensor 40 that detected this event and the event occurrence time.
- FIG. 42 shows a configuration example of the sensor 40.
- the sensor 40 includes, for example, a sensor unit 4201, a pan head drive unit 4205, an angle sensor 4206, a control unit 4204, a compression unit 4202, and a communication processing unit 4203, similarly to the imaging device 20 shown in FIG.
- Event detection data obtained by the sensor unit 4201, the device ID specified by the control unit 4204, and the event occurrence time are input to the compression unit 4202.
- Pan information and tilt information (hereinafter referred to as PT information) obtained by the angle sensor 4206 may also be input to the compression unit 4202 via the control unit 4204.
- the detection data, device ID, and occurrence time input to the compression unit 4202 are compressed together with PT information as necessary, and are sent from the communication processing unit 4203 to the image processing apparatus 4010 via the network 30 as event information.
- the acquisition unit 123 that acquired the event information from the network 30 via the communication processing unit 122 identifies the detection data, device ID, and occurrence time (and PT information) included in the event information, and inputs them to the control unit 124.
- Further, when detection of the same event ends, the acquisition unit 123 inputs the occurrence time included in the event information received at that point to the control unit 124.
- The control unit 124 takes the occurrence time specified from the event information at the end of event detection as the event end time, and stores this end time in the event storage unit 4128 in association with the detection data, device ID, and occurrence time (and PT information) of the corresponding event.
- That is, the control unit 124 uses the event detection data, device ID, occurrence time, and end time (and PT information) acquired by the acquisition unit 123 for the live streaming reproduction of the monitoring image described above, and also stores the detection data, device ID, occurrence time, and end time (and PT information) in the event storage unit 4128. Even when live streaming reproduction of the monitoring image is not executed, the detection data, device ID, occurrence time, and end time (and PT information) may be stored from the control unit 124 into the event storage unit 4128.
- When storing the detection data, device ID, occurrence time, and end time (and PT information) in the event storage unit 4128, the control unit 124 generates an event ID unique to each event and stores this event ID in the event storage unit 4128 together with the data set to be stored (detection data, device ID, occurrence time, and end time (and PT information)).
- the event storage unit 4128 stores an event ID, device ID, detection data, event occurrence time and end time (and PT information) separately for each sensor 40.
- a device ID is used for this distinction.
- a table management method is used for data management in the event storage unit 4128.
- FIG. 43 is a diagram showing an example of an event management table held by the event storage unit 4128.
- As shown in FIG. 43, event detection data is registered in association with the device ID of the sensor 40 that detected the event, the event ID, the occurrence time, and the end time (and PT information).
- Each record registered in the event management table may be associated with a confirmation flag indicating whether the user has confirmed the event. This confirmation flag may be stored in the event storage unit 4128 in response to a predetermined trigger, such as a user operation or completion of reproduction of the corresponding event, during recording reproduction of the captured image and event information stored in the image storage unit 4127 and the event storage unit 4128.
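- A minimal sketch of one row of the event management table, with the optional PT information and confirmation flag, might look as follows; the class name EventRecord and its field names are illustrative assumptions, not part of the embodiment.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Tuple

@dataclass
class EventRecord:
    """One row of the event management table (cf. FIG. 43)."""
    event_id: str                               # unique event ID generated by the control unit
    device_id: str                              # device ID of the sensor 40 that detected the event
    detection_data: bytes                       # event detection data
    occurrence_time: datetime                   # event occurrence time
    end_time: datetime                          # event end time
    pt: Optional[Tuple[float, float]] = None    # (pan, tilt) when the sensor reports PT information
    confirmed: bool = False                     # confirmation flag set once the user reviews the event
```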
- In addition, a sensor 40 may be associated with an imaging device 20.
- For example, if the sensor 40 is a human sensor, it may be associated with the imaging device 20 that images the sensing range of that sensor 40. If it is a door opening/closing sensor, the sensor 40 and the imaging device 20 that images the door monitored by that sensor 40 may be associated with each other. Alternatively, motion detection may be performed on the captured images of an imaging device 20, in which case the imaging device 20 itself serves as the sensor 40.
- FIG. 44 is a diagram illustrating an example of a sensor management table managed in the three-dimensional model storage unit 111.
- As shown in FIG. 44, the sensor management table manages the device ID of each sensor 40 in association with the device ID of the imaging device 20 associated with it (hereinafter referred to as a cover camera ID).
- The sensor management table may also manage, in association with each device ID, the importance of the corresponding sensor 40 and a sensor type indicating the kind of sensor (human sensor, thermal sensor, etc.).
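- The sensor management table can be pictured as a simple mapping from a sensor device ID to its cover camera ID plus optional attributes, as in the sketch below; the concrete IDs and values are hypothetical examples.

```python
# Hypothetical in-memory sensor management table (cf. FIG. 44):
# sensor device ID -> cover camera ID plus optional importance and sensor type.
SENSOR_TABLE = {
    "sensor-01": {"cover_camera_id": "cam-03", "importance": 2, "sensor_type": "human"},
    "sensor-02": {"cover_camera_id": "cam-05", "importance": 1, "sensor_type": "thermal"},
}

def cover_camera_for(sensor_id: str) -> str:
    """Return the device ID of the imaging device 20 that covers the given sensor 40."""
    return SENSOR_TABLE[sensor_id]["cover_camera_id"]
```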
- Next, the operation of the control unit 124 when storing imaging information and event information in the image processing apparatus 4010 will be described in detail with reference to the drawings.
- a case where a recording operation is executed in a time slot reserved for preset recording is illustrated.
- the present invention is not limited to this, and various modifications can be made such as continuous recording during operation.
- FIG. 45 is a flowchart showing a recording operation of the control unit 124 according to the tenth embodiment.
- the control unit 124 waits for the main recording operation until the recording start timing comes (No in step S4001).
- Note that the recording start timing here refers to the timing of recording images thinned out in units of frames, rather than recording every image sent from the camera.
- When the recording start timing arrives (Yes in step S4001), the control unit 124 acquires the device ID of the imaging device 20 that is currently selected by the user (step S4002).
- The device ID of the imaging device 20 currently selected by the user may be held, for example, in the cache memory of the CPU (Central Processing Unit) that implements the control unit 124. Further, when there is no imaging device 20 currently selected by the user, the control unit 124 may acquire, for example, null data as the device ID.
- the control unit 124 stands by until receiving imaging information from the imaging device 20 on the network 30 via the communication processing unit 122 (No in step S4003).
- Upon receiving imaging information (Yes in step S4003), the control unit 124 uses the acquisition unit 123 to specify the captured image, device ID, imaging time, and PTZ information from the received imaging information, and acquires them from the acquisition unit 123 (step S4004).
- the control unit 124 determines whether or not the acquired device ID matches the device ID specified in step S4002 (step S4005).
- As a result of the determination in step S4005, if the two device IDs match (Yes in step S4005), the control unit 124 associates the captured image, device ID, imaging time, and PTZ information identified in step S4004 with the selection flag and stores them in the image storage unit 4127 (step S4006). On the other hand, if the two device IDs do not match (No in step S4005), the control unit 124 associates the captured image, device ID, imaging time, and PTZ information specified in step S4004 with the non-selection flag and stores them in the image storage unit 4127 (step S4007).
- Next, the control unit 124 determines whether or not the preset recording end time has come (step S4008). When the end time has been reached (Yes in step S4008), the control unit 124 ends the recording operation. Otherwise (No in step S4008), the control unit 124 returns to step S4002 and executes the subsequent operations.
- In this way, the captured image is stored in the image storage unit 4127 in association with the device ID, the imaging time, the PTZ information, and the selection flag.
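- The recording flow of FIG. 45 can be sketched roughly as the loop below; receive_imaging_info, get_selected_device_id, and store are placeholders for the behaviour of the acquisition unit 123, the current selection state, and the image storage unit 4127, and are assumptions made only for illustration.

```python
import time
from datetime import datetime

def record_images(receive_imaging_info, get_selected_device_id, store,
                  start_time: datetime, end_time: datetime) -> None:
    """Sketch of the recording loop of FIG. 45 (steps S4001-S4008)."""
    # S4001: wait until the recording start timing
    while datetime.now() < start_time:
        time.sleep(0.1)

    while datetime.now() < end_time:               # S4008: stop at the preset end time
        selected_id = get_selected_device_id()      # S4002: may be None when nothing is selected
        # S4003/S4004: block until the next imaging information arrives
        image, device_id, imaging_time, ptz = receive_imaging_info()
        selected = selected_id is not None and device_id == selected_id   # S4005
        # S4006/S4007: store with the selection or non-selection flag
        store(image=image, device_id=device_id, imaging_time=imaging_time,
              ptz=ptz, selected=selected)
```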
- FIG. 46 is a flowchart showing the event recording operation of the control unit 124 according to the tenth embodiment.
- the control unit 124 waits for this event recording operation until the preset recording start time comes after activation (No in step S4011).
- When the recording start time arrives (Yes in step S4011), the control unit 124 waits until event information is received from the sensor 40 on the network 30 via the communication processing unit 122 (No in step S4012).
- When event information is received from any of the sensors 40 (Yes in step S4012), the control unit 124 uses the acquisition unit 123 to specify the detection data, device ID, and occurrence time (and PT information) from the received event information, and acquires them from the acquisition unit 123 (step S4013).
- the control unit 124 waits until the input of the same event information is finished (No at Step S4014), and when finished (Yes at Step S4014), specifies the end time (Step S4015).
- the control unit 124 determines whether PT information is included in the event information (step S4016).
- If PT information is included (Yes in step S4016), the control unit 124 associates the detection data, device ID, occurrence time, and PT information specified in step S4013 with the end time specified in step S4015, and stores them in the event storage unit 4128 (step S4017).
- the control unit 124 may generate an event ID and store it in the event storage unit 4128 together with the detection data, device ID, occurrence time, end time, and PT information.
- On the other hand, if PT information is not included (No in step S4016), the control unit 124 associates the detection data, device ID, and occurrence time specified in step S4013 with the end time specified in step S4015, and stores them in the event storage unit 4128 (step S4018). At this time, the control unit 124 may generate an event ID and store it in the event storage unit 4128 together with the detection data, device ID, occurrence time, and end time.
- Next, the control unit 124 determines whether or not the recording end timing has come (step S4019). If it has (Yes in step S4019), the control unit 124 ends the event recording operation; otherwise (No in step S4019), it returns to step S4012 and executes the subsequent operations.
- In this way, the detection data of the event detected by the sensor 40 is stored in the event storage unit 4128 in association with the event ID, device ID, occurrence time, and end time (and PT information).
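- One pass of the event recording flow of FIG. 46 can be sketched as follows; receive_event_info, wait_for_event_end, and store stand in for the acquisition unit 123 and the event storage unit 4128, and the event ID format is an arbitrary assumption made for this sketch.

```python
import itertools

_event_counter = itertools.count(1)   # source of locally unique event IDs

def record_event(receive_event_info, wait_for_event_end, store) -> None:
    """Sketch of one pass of the event recording flow of FIG. 46 (steps S4012-S4018)."""
    # S4012/S4013: detection data, device ID, occurrence time, and PT information (or None)
    detection_data, device_id, occurrence_time, pt = receive_event_info()
    end_time = wait_for_event_end(device_id)         # S4014/S4015: wait for the event to end
    record = {
        "event_id": f"event-{next(_event_counter):06d}",  # unique event ID
        "device_id": device_id,
        "detection_data": detection_data,
        "occurrence_time": occurrence_time,
        "end_time": end_time,
    }
    if pt is not None:                               # S4016: keep PT information when present
        record["pt"] = pt
    store(record)                                    # S4017 / S4018
```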
- FIG. 47 is a flowchart illustrating an example of a recording / playback operation performed by the control unit 124 according to the tenth embodiment. As shown in FIG. 47, after starting, the control unit 124 waits until a recording data playback instruction is input from the user (No in step S4031). Note that an instruction to reproduce recorded data may be input by the user using the operation unit 121 of the image processing apparatus 4010.
- When receiving an instruction to reproduce the recorded data (Yes in step S4031), the control unit 124 reads the recorded data to be reproduced (step S4032).
- The recorded data to be read includes the captured image, device ID, imaging time, PTZ information, and selection flag stored in the image storage unit 4127, as well as the detection data, event ID, device ID, occurrence time, end time (and PT information), and confirmation flag stored in the event storage unit 4128.
- Next, the control unit 124 determines whether there is a selected imaging device 20 by referring to the selection flag associated with the first captured image in the read recorded data (step S4033).
- the imaging device 20 being selected here is an imaging device in a state selected by the user at the start of recording.
- If there is a selected imaging device 20 (Yes in step S4033), the control unit 124 determines a viewpoint position for generating a two-dimensional projection image from the three-dimensional model data, based on the position of the selected imaging device 20 on the three-dimensional model (the camera icon position) and the PTZ information associated with the captured image to be reproduced (step S4036). In addition, the control unit 124 generates an operation screen for the user to input operations related to recording reproduction (step S4037).
- This operation screen includes a predetermined area in which the recorded captured image is reproduced (hereinafter referred to as a reproduction area). This reproduction area may correspond to, for example, the predetermined area on the two-dimensional projection image in the above-described embodiments.
- On the other hand, if there is no selected imaging device 20 (No in step S4033), the control unit 124 obtains the initial viewpoint position for the corresponding area from the viewpoint information storage unit 112 described above with reference to FIG. (step S4034). In addition, the control unit 124 generates an operation screen for the user to input operations related to recording reproduction (step S4035). This operation screen may or may not include a reproduction area for the captured image.
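- Choosing the starting viewpoint for playback (step S4036 versus step S4034) can be sketched as below; first_record is assumed to expose selected, device_id, and ptz attributes, camera_positions is assumed to map a device ID to its camera icon position on the three-dimensional model, and default_viewpoint stands in for the initial viewpoint held in the viewpoint information storage unit 112.

```python
def initial_playback_viewpoint(first_record, camera_positions, default_viewpoint):
    """Sketch of steps S4033-S4036: pick the starting viewpoint for recording reproduction."""
    if first_record.selected:                          # S4033: a camera was selected at recording time
        position = camera_positions[first_record.device_id]
        pan, tilt, zoom = first_record.ptz
        # S4036: derive the viewpoint from the camera icon position and the PTZ information
        return {"position": position, "pan": pan, "tilt": tilt, "zoom": zoom}
    return default_viewpoint                           # S4034: fall back to the initial viewpoint
```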
- the control unit 124 generates a two-dimensional projection image from the three-dimensional model data based on the viewpoint position determined in step S4036 or the viewpoint position acquired in step S4034 (step S4038).
- the PTZ information of the imaging device 20 and the PT information of the sensor 40 read out in step S4032 are reflected on all the camera icons and sensor icons included in the three-dimensional model data at this time.
- As a result, the three-dimensional model data at the time of reproduction reproduces as-is the movement (changes and the like) that the three-dimensional model data underwent during the recording period, just as in live streaming reproduction according to the above-described embodiments.
- On each camera icon, a pop-up display of a captured image (or a thumbnail thereof) acquired by the imaging device 20 associated with that camera icon may be reproduced, as in the above-described embodiments.
- Next, the control unit 124 synthesizes the captured image read in step S4032 with the operation screen generated in step S4037 (step S4039). Note that the captured image combined with the operation screen transitions sequentially along the time axis through the series of time-series captured images. As a result, the captured image is reproduced in the reproduction area of the operation screen.
- The control unit 124 then combines the operation screen into which the captured image was combined in step S4039 with the two-dimensional projection image generated in step S4038 (step S4040). Subsequently, the control unit 124 outputs the two-dimensional projection image combined with the operation screen to the output unit 126 such as a monitor (step S4041) and displays it.
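- The two-stage composition of steps S4039-S4041 can be illustrated with plain array pasting, assuming (purely for this sketch) that NumPy is available and that all images are H x W x 3 uint8 arrays whose pasted regions fit inside their targets.

```python
import numpy as np

def composite(projection: np.ndarray, screen: np.ndarray, frame: np.ndarray,
              frame_pos: tuple, screen_pos: tuple) -> np.ndarray:
    """Sketch of steps S4039-S4041: frame -> operation screen -> 2D projection image.

    frame_pos / screen_pos are (row, col) offsets of the top-left corner of the
    reproduction area and of the operation screen, respectively.
    """
    screen = screen.copy()
    r, c = frame_pos
    screen[r:r + frame.shape[0], c:c + frame.shape[1]] = frame    # S4039: paste into reproduction area
    out = projection.copy()
    r, c = screen_pos
    out[r:r + screen.shape[0], c:c + screen.shape[1]] = screen    # S4040: paste screen onto projection
    return out                                                    # S4041: handed to the output unit 126
```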
- Next, the control unit 124 determines whether or not to end the reproduction of the recorded data (step S4042). This determination may be made based on, for example, whether a playback stop button has been pressed on the operation screen or whether playback of the recorded data has been completed to the end. If it is determined that playback has ended (Yes in step S4042), the control unit 124 ends the recording/playback operation. On the other hand, if it is determined not to end the reproduction operation (No in step S4042), the control unit 124 determines whether, for example, an operation input for changing the viewpoint position has been received from the user (step S4043).
- the operation input method for changing the viewpoint position may be the same as the operation input method for the pan direction, the tilt direction, and the focal length of the zoom lens of the imaging device 20 in the above-described embodiment.
- When an operation input for changing the viewpoint position is received (Yes in step S4043), the control unit 124 determines a new viewpoint position from the operation data input to the operation unit 121 (step S4044).
- The user operation for the viewpoint position in step S4044 may be the same as the user operation on the imaging device 20 in the above-described embodiments.
- the operation data input from the operation unit 121 is not transmitted to the imaging apparatus 20 as a control signal, but is used to calculate the viewpoint position change amount with respect to the three-dimensional model. Thereafter, the control unit 124 returns to step S4038 and executes the subsequent operations.
- On the other hand, if no operation input for changing the viewpoint position is received (No in step S4043), the control unit 124 determines whether the PTZ information of the selected imaging device 20 identified in step S4033 has changed within the recorded data read in step S4032 (step S4045). When there is a change in the PTZ information (Yes in step S4045), the control unit 124 returns to step S4036 and performs the subsequent operations based on the changed PTZ information.
- When there is no change in the PTZ information (No in step S4045), the control unit 124 determines whether or not the user has performed a camera icon selection operation on the two-dimensional projection image (step S4046).
- If a camera icon has been selected (Yes in step S4046), the control unit 124 returns to step S4036 and executes the subsequent operations based on the position of the selected camera icon on the three-dimensional model and the PTZ information corresponding to its device ID.
- If no camera icon has been selected (No in step S4046), the control unit 124 next determines whether or not the user has performed a sensor icon selection operation on the two-dimensional projection image.
- If a sensor icon has been selected, the control unit 124 newly selects the cover camera ID (device ID) associated with the device ID of the selected sensor icon in the sensor management table shown in FIG. 44 (step S4048). Thereafter, the control unit 124 returns to step S4036 and performs the subsequent operations based on the position on the three-dimensional model of the imaging device 20 corresponding to the newly selected device ID and the PTZ information corresponding to this device ID.
- By performing the recording/playback operation described above, the live streaming playback of the monitoring image at the time of recording can be reproduced as it was.
- Moreover, it is possible to switch the selected imaging device 20 or sensor 40 as necessary, or to move the viewpoint position with respect to the three-dimensional model regardless of the selected camera icon position. Therefore, the user can check past monitoring images more easily and in more detail.
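- The branching of steps S4043-S4048 can be pictured as a small dispatch function like the one below; the event tuple, the state dictionary, and the sensor_table layout are all assumptions made for this sketch only.

```python
def handle_playback_input(event: tuple, state: dict, sensor_table: dict) -> dict:
    """Sketch of the input handling branches of steps S4043-S4048."""
    kind, value = event
    if kind == "viewpoint":                      # S4043/S4044: move the viewpoint only;
        state["viewpoint"] = value               # nothing is sent to the imaging device 20
    elif kind == "camera_icon":                  # S4046: a camera icon was selected
        state["selected_device_id"] = value      # viewpoint is then recomputed from its position/PTZ
    elif kind == "sensor_icon":                  # sensor icon -> switch to its cover camera (S4048)
        state["selected_device_id"] = sensor_table[value]["cover_camera_id"]
    return state
```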
- FIG. 48 is a diagram showing an example of the composite image generated in step S4040 of FIG.
- As shown in FIG. 48, the operation screen 4300 is synthesized with a two-dimensional projection image that includes camera icons 4001, 4002, and 4003 as a background. Note that the placement of the operation screen 4300 is not limited to the lower right of the two-dimensional projection image and may instead be the central portion or elsewhere.
- the operation screen 4300 may display various information in addition to the captured image to be displayed as described above.
- FIG. 49 shows an example of the operation screen 4300.
- As shown in FIG. 49, the operation screen 4300 includes a captured image reproduction area 4310 for reproducing recorded captured images and an event display area 4320 for displaying information on various events. The operation screen 4300 may further display the identification information (camera ID or camera name) 4311 of the imaging device 20 that acquired the captured image displayed in the captured image reproduction area 4310 (that is, the selected imaging device 20), an operation button group 4330 for the user to perform operations such as preset recording, and a recording interval button 4331 for setting the recording time interval.
- In the event display area 4320, the imaging time (or the date and time) of the captured image being displayed in the captured image reproduction area 4310, a time-sequential list of event IDs, the occurrence time of the event corresponding to each event ID, the cover camera ID (device ID) of the imaging device 20 covering the sensor 40 corresponding to each event ID, the importance level associated with each event or with the sensor 40 that detected it, and a confirmation flag indicating whether or not the user has confirmed each event may be displayed.
- a time bar indicating where in the recorded data is currently being played back may be displayed.
- An index indicating the time of occurrence of each event may be added to the time bar.
- The output unit 126 displays, for example, a list of recorded data files that have been preset recorded, and when the user selects one of them, the control unit 124 executes the operation for recording reproduction of the selected file. This recording/playback operation is the operation described above with reference to FIG. 47.
- The synthesized image displayed on the output unit 126 at this time is, for example, a synthesized image like those shown in FIG. 9 and FIG. 12.
- the recording interval button 4331 is, for example, a pull-down menu button.
- When the recording interval button 4331 is operated, a list of recording intervals that can be set (for example, full frame (30 frames/second), 15 frames/second, 10 frames/second, 5 frames/second, 1 frame/second, etc.) is displayed.
- When the user selects one of the listed recording intervals (for example, 10 frames/second), the selected recording interval is set in the control unit 124.
- The control unit 124 repeatedly executes the recording operation shown in FIG. 45 and the event recording operation shown in FIG. 46 at the recording interval set in this way.
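- The relation between the recording interval choices and the period at which the recording operations are repeated can be shown with a few lines of Python; the dictionary contents simply restate the example intervals listed above.

```python
RECORDING_INTERVALS = {        # choices offered by the recording interval button 4331
    "full frame": 30,          # frames per second
    "15 fps": 15,
    "10 fps": 10,
    "5 fps": 5,
    "1 fps": 1,
}

def frame_period(choice: str) -> float:
    """Seconds between stored frames for the selected recording interval."""
    return 1.0 / RECORDING_INTERVALS[choice]

# e.g. selecting "10 fps" makes the recording loops of FIG. 45 / FIG. 46 run every 0.1 s
assert abs(frame_period("10 fps") - 0.1) < 1e-9
```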
- each of the imaging device 20 and the sensor 40 may include a storage device, and the image processing device 4010 may be configured to acquire imaging information and event information stored in the storage device online or offline.
- Alternatively, the imaging device 20 and the sensor 40 may directly transmit imaging information and event information to a storage device on the network 30, and the image processing device 4010 may be configured to obtain the imaging information and event information stored in that storage device via the network 30.
- the icon of the sensor 40 or the imaging device 20 that detected the event may be highlighted on the two-dimensional projection image.
- For example, when each imaging device 20 performs motion detection on its captured images and thus also functions as a sensor 40, the camera icons 4003a and 4005a corresponding to the imaging devices 20 that detected an event may be highlighted among the camera icons corresponding to these imaging devices 20, as shown in FIG. 50.
- FIG. 50 is a diagram illustrating an example of a composite image according to the eleventh embodiment.
- Various display methods such as blinking display, display with a color different from the background (for example, red), and enlarged display can be used for highlighting various icons.
- the highlighting of various icons can be realized, for example, by temporarily replacing an icon image associated with the device ID of the device to be highlighted (sensor 40 or imaging device 20) with another icon image for highlighting. In that case, since an icon image for highlighting is incorporated in the 3D model data instead of the normal icon image, the icon image in the 2D projection image generated from the 3D model data is also highlighted.
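- Highlighting by icon replacement can be sketched as a simple swap of the icon image registered for a device ID in the model data; the dictionaries used here are stand-ins invented for this illustration.

```python
def set_highlight(model_icons: dict, device_id: str,
                  normal_icons: dict, highlight_icons: dict, on: bool) -> None:
    """Swap the icon image associated with device_id between its normal and highlight versions.

    model_icons maps a device ID to the icon image currently embedded in the 3D model data,
    so every 2D projection image generated afterwards shows the highlighted icon.
    """
    source = highlight_icons if on else normal_icons
    model_icons[device_id] = source[device_id]
```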
- Here, as in the above-described embodiments, an example has been described in which icons in the two-dimensional projection image generated from the three-dimensional model data are highlighted; however, the present invention is not restricted to this.
- a two-dimensional model obtained by mapping a layout similar to this on a two-dimensional plane parallel to the ground (or the floor surface) may be used instead of the three-dimensional model data.
- In this case, the two-dimensional model is combined as it is with the captured image and the operation screen and displayed on the output unit 126.
- highlighting icons are used for various icons mapped in the two-dimensional model. Accordingly, as in the case of the above-described three-dimensional model, various icons mapped in the two-dimensional model are highlighted as necessary. Since other configurations, operations, and effects may be the same as those of the above-described embodiment, detailed description thereof is omitted here.
- In terms of hardware, the configuration of the image processing apparatus 10 is realized by, for example, a CPU (Central Processing Unit), a memory, and other LSIs (Large Scale Integration).
- the configuration of the image processing apparatus 10 can be realized by, for example, a program loaded into a memory as software.
- The above embodiments have been described in terms of functional blocks realized by the cooperation of such hardware and software. That is, these functional blocks can be realized in various forms by hardware alone, by software alone, or by a combination thereof.
- FIG. 51 is a diagram showing a configuration in which the image processing program is realized using a computer.
- As shown in FIG. 51, a computer 1000 serving as the image processing apparatus 10 has the hardware configuration of an ordinary computer: a control device such as a CPU (Central Processing Unit) 1001, a ROM (Read Only Memory) 1002, a RAM (Random Access Memory) 1003, storage devices such as an HDD (Hard Disk Drive) 1004 and a disk drive 1005, a display device such as a display 1006, and input devices such as a keyboard 1007 and a mouse 1008, all connected via a bus 1009. The storage device or an external storage device stores the various kinds of information of the above-described 3D model storage unit 111 and viewpoint information storage unit 112.
- In one form, the image processing program executed by the image processing apparatus 10 is provided as a file in an installable or executable format recorded on a computer-readable recording medium such as a CD-ROM, flexible disk (FD), CD-R, or DVD (Digital Versatile Disk).
- the image processing program executed by the image processing apparatus 10 may be provided by being stored on a computer connected to a network such as the Internet and downloaded via the network.
- the image processing program executed by the image processing apparatus 10 may be provided or distributed via a network such as the Internet. Further, the image processing program may be provided by being incorporated in a ROM or the like.
- The image processing program executed by the image processing apparatus 10 has a module configuration including the above-described functional units (the acquisition unit 123, the synthesis unit 125, and the output unit 126). As actual hardware, a CPU (processor) reads out the image processing program from the storage medium and executes it, whereby each of the above functional units is loaded onto the main storage device; the acquisition unit 123, the synthesis unit 125, and the output unit 126 are thus generated on the main storage device, and processing is performed using the various kinds of information stored in the storage device or the external storage device as appropriate.
- the constituent elements of the illustrated image processing apparatus 10 are functionally conceptual, and need not be physically configured as illustrated.
- The specific form of distribution or integration of each device is not limited to that illustrated, and all or a part thereof can be functionally or physically distributed or integrated in arbitrary units according to various loads or usage conditions.
- For example, the synthesizing unit 125, which determines a viewpoint of the three-dimensional model based on the camera position, generates a two-dimensional projection image that is a two-dimensional image obtained by projecting the three-dimensional model onto a two-dimensional surface based on the determined viewpoint and the imaging direction, and generates a synthesized image by synthesizing a captured image with a predetermined region of the generated two-dimensional projection image, corresponds to a "synthesis unit".
- In the above-described embodiments, the example in which the image processing apparatus 10 is applied to the monitoring system 1 has been described; however, the image processing apparatus can also be applied to various other uses, such as a distribution system that distributes live video in real time.
- The image processing apparatus, the image processing method, and the image processing program according to the present invention are useful for synthesizing an image captured by a camera at the corresponding position of a three-dimensional model, and are suitable for intuitively grasping the area imaged by the camera.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Studio Devices (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
An image processing device (4010) is characterized by comprising: a three-dimensional model storage unit (111) that stores three-dimensional model data indicating a model of a three-dimensional region and the position of a camera arranged in the three-dimensional region as a camera position in the three-dimensional model; an acquisition unit (123) that acquires a captured image captured by the camera and an imaging direction at the time of image capture; an image storage unit (4127) that stores the imaging direction and the captured image acquired by the acquisition unit in association with each other; an operation unit (121) that receives a reproduction instruction for the captured image stored in the image storage unit; a synthesis unit (125) that, when the operation unit receives the reproduction instruction, determines the viewpoint with respect to the three-dimensional model according to the camera position, generates a two-dimensional projection image in which the three-dimensional model is projected onto a two-dimensional surface according to the determined viewpoint and the imaging direction stored in the image storage unit, and generates a synthesized image by synthesizing the captured image stored in the image storage unit onto a predetermined region of the generated two-dimensional projection image; and an output unit (126) that outputs the synthesized image.
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012044778 | 2012-02-29 | ||
JP2012044777 | 2012-02-29 | ||
JP2012-044777 | 2012-02-29 | ||
JP2012-044778 | 2012-02-29 | ||
JP2012-219203 | 2012-10-01 | ||
JP2012219203A JP5966834B2 (ja) | 2012-02-29 | 2012-10-01 | 画像処理装置、画像処理方法及び画像処理プログラム |
JP2012-219202 | 2012-10-01 | ||
JP2012219202A JP5983259B2 (ja) | 2012-02-29 | 2012-10-01 | 画像処理装置、画像処理方法及び画像処理プログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013129188A1 true WO2013129188A1 (fr) | 2013-09-06 |
Family
ID=49082388
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/054028 WO2013129188A1 (fr) | 2012-02-29 | 2013-02-19 | Dispositif de traitement d'image, procédé de traitement d'image et programme de traitement d'image |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2013129188A1 (fr) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2958080A1 (fr) * | 2014-06-17 | 2015-12-23 | Furuno Electric Co., Ltd. | Système de commande et de caméra maritime |
EP3006923A1 (fr) * | 2014-10-07 | 2016-04-13 | Apodius UG | Procédé de détermination de l'orientation d'une structure fibreuse |
CN107465887A (zh) * | 2017-09-14 | 2017-12-12 | 潍坊学院 | 视频通话系统及视频通话方法 |
CN109643468A (zh) * | 2016-08-19 | 2019-04-16 | 索尼公司 | 图像处理装置和图像处理方法 |
WO2022220065A1 (fr) * | 2021-04-14 | 2022-10-20 | キヤノン株式会社 | Dispositif d'imagerie, procédé de commande et programme |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008005450A (ja) * | 2006-06-20 | 2008-01-10 | Kubo Tex Corp | 3次元仮想空間を利用したビデオカメラのリアルタイム状態把握、制御の方法 |
JP2008502228A (ja) * | 2004-06-01 | 2008-01-24 | エル‐3 コミュニケーションズ コーポレイション | ビデオフラッシュライトを実行する方法およびシステム |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008502228A (ja) * | 2004-06-01 | 2008-01-24 | エル‐3 コミュニケーションズ コーポレイション | ビデオフラッシュライトを実行する方法およびシステム |
JP2008005450A (ja) * | 2006-06-20 | 2008-01-10 | Kubo Tex Corp | 3次元仮想空間を利用したビデオカメラのリアルタイム状態把握、制御の方法 |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2958080A1 (fr) * | 2014-06-17 | 2015-12-23 | Furuno Electric Co., Ltd. | Système de commande et de caméra maritime |
US9544491B2 (en) | 2014-06-17 | 2017-01-10 | Furuno Electric Co., Ltd. | Maritime camera and control system |
EP3006923A1 (fr) * | 2014-10-07 | 2016-04-13 | Apodius UG | Procédé de détermination de l'orientation d'une structure fibreuse |
CN109643468A (zh) * | 2016-08-19 | 2019-04-16 | 索尼公司 | 图像处理装置和图像处理方法 |
CN109643468B (zh) * | 2016-08-19 | 2023-10-20 | 索尼公司 | 图像处理装置和图像处理方法 |
CN107465887A (zh) * | 2017-09-14 | 2017-12-12 | 潍坊学院 | 视频通话系统及视频通话方法 |
WO2022220065A1 (fr) * | 2021-04-14 | 2022-10-20 | キヤノン株式会社 | Dispositif d'imagerie, procédé de commande et programme |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5966834B2 (ja) | 画像処理装置、画像処理方法及び画像処理プログラム | |
JP5910447B2 (ja) | 画像処理装置、画像処理方法及び画像処理プログラム | |
US20140368621A1 (en) | Image processing apparatus, image processing method, and computer program product | |
JP2013210989A (ja) | 画像処理装置、画像処理方法及び画像処理プログラム | |
JP4458158B2 (ja) | 表示装置、表示方法、及びプログラム | |
JP7017175B2 (ja) | 情報処理装置、情報処理方法、プログラム | |
JP6529267B2 (ja) | 情報処理装置及びその制御方法、プログラム、並びに記憶媒体 | |
JP6226538B2 (ja) | 表示制御装置、表示制御方法、およびプログラム | |
WO2013129188A1 (fr) | Dispositif de traitement d'image, procédé de traitement d'image et programme de traitement d'image | |
JP6409107B1 (ja) | 情報処理装置、情報処理方法、及びプログラム | |
WO2010008518A1 (fr) | Configuration de capture et d'affichage d'image | |
US9141190B2 (en) | Information processing apparatus and information processing system | |
CN102668556A (zh) | 医疗支援装置,医疗支援方法以及医疗支援系统 | |
US20210044793A1 (en) | Generation method for generating free viewpoint image, display method for displaying free viewpoint image, free viewpoint image generation device, and display device | |
JPWO2004006572A1 (ja) | 映像生成処理装置、映像生成処理方法および映像記憶装置 | |
JP2021177351A (ja) | 画像表示装置、制御方法、およびプログラム | |
US20190155465A1 (en) | Augmented media | |
JP5920152B2 (ja) | 画像処理装置、画像処理方法及び画像処理プログラム | |
JP7020024B2 (ja) | 情報処理装置およびプログラム | |
JP2013211821A (ja) | 画像処理装置、画像処理方法及び画像処理プログラム | |
JP5983259B2 (ja) | 画像処理装置、画像処理方法及び画像処理プログラム | |
JP5910446B2 (ja) | 画像処理装置、画像処理方法及び画像処理プログラム | |
US20230033201A1 (en) | Image processing apparatus, image processing method, and storage medium | |
WO2013129187A1 (fr) | Dispositif de traitement d'image, procédé de traitement d'image et programme de traitement d'image | |
WO2013129190A1 (fr) | Dispositif de traitement d'image, procédé de traitement d'image et programme de traitement d'image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13754359 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 13754359 Country of ref document: EP Kind code of ref document: A1 |