WO2010067486A1 - Image processing apparatus and image processing method - Google Patents
Image processing apparatus and image processing method
- Publication number
- WO2010067486A1 (PCT/JP2009/003718)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- plane
- unit
- map
- image processing
- Prior art date
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B29/00—Maps; Plans; Charts; Diagrams, e.g. route diagram
- G09B29/003—Maps
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
Definitions
- the present invention relates to an image processing apparatus and an image processing method for generating a plane map by performing perspective transformation on an image obtained by imaging a state of a space.
- FIG. 1 is a diagram showing a configuration of a vehicle display device 11 disclosed in Patent Document 1.
- the vehicle display device 11 includes a camera group 12, a distance sensor group 13, an image processing device 14, a display 15, an imaging condition detection device 16, and an obstacle position detection device 17.
- the camera group 12 is a device for imaging the surroundings of the host vehicle.
- the camera group 12 is configured to include one or more cameras attached to the peripheral edge of the body of the host vehicle.
- the distance sensor group 13 is a device for detecting obstacles existing in the vicinity.
- the distance sensor group 13 includes one or more sensors attached to the host vehicle.
- the image processing device 14 functions as a surrounding image composition unit 41, a vehicle detection unit 42, a simulated vehicle drawing unit 43, and an image composition unit 44 by various programs stored in advance in a ROM (Read Only Memory).
- the surrounding image synthesizing unit 41 perspective-transforms the plurality of surrounding image data obtained by the camera group 12 into upper-viewpoint images viewed from above the host vehicle, and synthesizes these images into a single vehicle surrounding image.
- the vehicle detection unit 42 detects other vehicles existing around the host vehicle using the obstacle detection data of the distance sensor group 13 and the vehicle surrounding image.
- the simulated vehicle drawing unit 43 draws a simulated image (simulated vehicle image) of the vehicle detected by the vehicle detection unit 42 as viewed from above, according to the detected wheels and body.
- the simulated vehicle image used in the simulated vehicle drawing unit 43 is registered in a database such as a ROM.
- the image synthesizing unit 44 synthesizes the vehicle surrounding image and the simulated vehicle image to generate a surrounding image including other vehicles existing around the host vehicle.
- the display 15 displays an image around the host vehicle based on the signal of the surrounding image.
- An object of the present invention is to provide an image processing apparatus and an image processing method that generate a planar map that is easy for a user to visually recognize without increasing the processing amount.
- An image processing apparatus of the present invention comprises: an image acquisition means for acquiring an image in which an object is captured and from which the three-dimensional coordinates of the object can be calculated;
- a coordinate acquisition means for extracting, from the image, the upper surface of an object (first object) existing on a reference plane in the three-dimensional space (the surface the user wishes to use as the reference of the plane map), and acquiring the three-dimensional coordinates of feature points of the extracted upper surface; and
- a first plane map generation means for generating a first plane map by adjusting the size and position of the upper surface so that the coordinate value corresponding to the direction perpendicular to the reference plane becomes the same value as that of the reference plane.
- An image processing method of the present invention includes: an image acquisition step of acquiring an image in which an object is captured and from which the three-dimensional coordinates of the object can be calculated; a coordinate acquisition step of extracting the upper surface of a first object existing on a reference plane in the three-dimensional space and acquiring the three-dimensional coordinates of its feature points; and
- a first plane map generation step of generating a first plane map by adjusting the size and position of the upper surface so that the coordinate value corresponding to the direction perpendicular to the reference plane becomes the same value as that of the reference plane.
- FIG. 1 is a diagram showing the configuration of the vehicle display device disclosed in Patent Document 1
- a block diagram showing the internal configuration of the plane map generation device
- a flowchart showing the processing sequence of the plane map generation device
- a diagram showing the processing performed by the plane map generation device
- a block diagram showing the internal configuration of another plane map generation device
- FIG. 2 is a diagram showing the monitoring system 100 according to Embodiment 1 of the present invention.
- the monitoring system 100 in FIG. 2 includes a camera group 101, a network 102, a plane map generation device 103, and a display 104.
- the camera group 101 is arranged for imaging a space in a monitoring target area such as a factory or an office.
- the camera group 101 is composed of one or more cameras attached to a fixed place in the monitoring target area.
- a monitoring target area such as a factory or an office.
- a CCD camera, a CMOS camera, a stereo camera, or the like can be adopted as each camera.
- Each camera is installed on the ceiling or wall so that the number of blind spots that are not imaged in the entire office is reduced.
- the network 102 connects the camera group 101 and the planar map generation device 103. An image captured by the camera group 101 is transmitted to the planar map generation device 103 via the network 102.
- the plane map generation device 103 generates a plane map from the acquired image and outputs it to the display 104.
- the plane map generation apparatus 103 can employ a terminal such as a PC (personal computer). Details of the plane map generation device 103 will be described later.
- the display 104 displays the plane map generated by the plane map generation device 103.
- FIG. 4 is a block diagram showing an internal configuration of the planar map generation apparatus 103 shown in FIG.
- the plane map generation device 103 includes an image acquisition unit 301, a coordinate acquisition unit 302, a first plane map generation unit 303, an image region extraction unit 304, a second plane map generation unit 305, a superimposition processing unit 306, and a pasting unit 307.
- the image acquisition unit 301 acquires an image of the camera group 101 transmitted from the network 102. Then, the image acquisition unit 301 outputs the acquired image to the coordinate acquisition unit 302, the first plane map generation unit 303, and the image region extraction unit 304.
- the image acquired here is an image from which three-dimensional coordinates can be calculated. In the present embodiment, a case where this image is a stereo image will be described.
- the coordinate acquisition unit 302 performs two processes.
- the coordinate acquisition unit 302 first performs perspective transformation and smoothing processing on the image output from the image acquisition unit 301. Subsequently, the coordinate acquisition unit 302 extracts the upper surface area of the target object (first object), and calculates the three-dimensional coordinates of each feature point (Features) in the extracted upper surface area. Then, the coordinate acquisition unit 302 outputs the calculation result to the first plane map generation unit 303. More specifically, the coordinate acquisition unit 302 calculates the three-dimensional coordinates of each feature point on the upper surface by triangulation using the stereo image from the image acquisition unit 301.
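The triangulation mentioned above can be sketched as follows. This is a minimal illustration under an idealized rectified stereo model: focal length f (in pixels) and baseline B give depth Z = f * B / disparity for a feature matched at horizontal positions xL and xR. The patent does not specify the camera model, so the formula, function name, and parameters are assumptions.

```python
# Hedged sketch of stereo triangulation for one feature point.
# Assumes a rectified stereo pair and a pinhole camera model (not from the patent).

def triangulate(xl, yl, xr, f, baseline):
    """Return (X, Y, Z) for a feature at (xl, yl) in the left image and xr in the right."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("feature must have positive disparity")
    Z = f * baseline / disparity   # depth from disparity
    X = xl * Z / f                 # back-project image x to space
    Y = yl * Z / f                 # back-project image y to space
    return X, Y, Z
```

For example, with f = 100 px and a 0.5 m baseline, a feature at xL = 10, xR = 5 (disparity 5) lies at depth 10 m.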
- the three-dimensional coordinate axes will be described using FIG. 3A as a specific example.
- the axes that intersect at right angles on the reference plane are the x-axis and the y-axis, and the upward normal direction of the reference plane is the z-axis.
- the reference plane is a plane defined by the user as a reference in the plane map generation process, for example, a floor plane.
- the coordinate acquisition unit 302 first acquires each feature point of the image region of the second object from the image region extraction unit 304 described later. Then, the coordinate acquisition unit 302 calculates the three-dimensional coordinates of each acquired feature point, and outputs the calculation result to the second plane map generation unit 305. The coordinate acquisition unit 302 obtains the three-dimensional coordinates of each feature point in the area using the same method as the calculation of each feature point in the upper surface area.
- the first plane map generation unit 303 acquires the three-dimensional coordinates of each feature point in the upper surface area from the coordinate acquisition unit 302. Then, the first plane map generation unit 303 determines the size of the image area of the first object so that the z coordinate value of the acquired three-dimensional coordinates is the same as the z coordinate value of the reference plane. Convert. Thereafter, the first plane map generation unit 303 generates a first plane map by moving the converted region onto the reference plane, and outputs the generated first plane map to the superimposition processing unit 306.
- the image region extraction unit 304 is based on the coordinates of the region of each first object acquired from the coordinate acquisition unit 302 in the image of an object (hereinafter referred to as a second object) present on the first object. Extract regions.
- the image region extraction unit 304 outputs the extracted regions to the second plane map generation unit 305 and the coordinate acquisition unit 302, respectively.
- the second plane map generation unit 305 acquires the three-dimensional coordinates of each feature point of the image region of the second object output from the coordinate acquisition unit 302, together with the image region output from the image region extraction unit 304.
- the second plane map generation unit 305 converts the size of the image area acquired from the image region extraction unit 304 so that the z coordinate value of the three-dimensional coordinates acquired from the coordinate acquisition unit 302 becomes the same value as the z coordinate value of the reference plane.
- the second plane map generation unit 305 generates a second plane map by moving the converted region onto the reference plane, and outputs the generated second plane map to the superimposition processing unit 306.
- the superimposition processing unit 306 acquires the first planar map output from the first planar map generation unit 303 and the second planar map output from the second planar map generation unit 305. Then, the superimposition processing unit 306 generates a third plane map by superimposing the acquired first plane map and the second plane map, and sends the generated third plane map to the bonding unit 307. Output.
- the pasting unit 307 acquires a plurality of third planar maps generated based on images captured by a plurality of cameras arranged at different locations in the office. Then, the bonding unit 307 combines the plurality of acquired third plane maps to generate a fourth plane map representing the entire room, and outputs the generated fourth plane map to the display 104.
- the pasting unit 307 preferentially adopts a planar map image generated from a stereo image taken by a camera closest to the overlapping area as an overlapping area when a plurality of planar maps are pasted together.
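The overlap rule above can be sketched as follows: where several per-camera plane maps cover the same cell, the pixel from the camera closest to that cell wins. The grid layout, camera positions, and `None`-outside-coverage convention are illustrative assumptions, not details from the patent.

```python
# Hedged sketch of pasting plane maps with a nearest-camera preference.
import math

def stitch(maps, cameras, width, height, empty=None):
    """maps[i][y][x] is camera i's plane-map pixel, or None where it has no coverage.
    cameras[i] is camera i's (x, y) position in plane-map coordinates."""
    out = [[empty] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            best, best_d = empty, math.inf
            for m, (cx, cy) in zip(maps, cameras):
                if m[y][x] is not None:
                    d = math.hypot(x - cx, y - cy)
                    if d < best_d:        # nearest camera wins the overlap
                        best, best_d = m[y][x], d
            out[y][x] = best
    return out
```

With two fully overlapping 2x1 maps from cameras at (0, 0) and (1, 0), the left pixel comes from the first camera and the right pixel from the second.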
- the planar map generation apparatus 103 generates the planar map shown in FIG. 3B for the office space shown in FIG. 3A.
- the planar map is an image that shows a pseudo state when the office space is viewed from above.
- the planar map generation apparatus 103 uses a floor surface as a reference plane, and a first object existing on the reference plane and a second object placed on the first object, respectively. Handle it differently.
- the first object is an object having a height, such as the desk, side shelf, or cabinet in FIG. 3A.
- the second object is, for example, a personal computer, a book, or a notebook in FIG. 3A.
- the process for the first object is performed by the first plane map generation unit 303.
- the process for the second object is performed by the second plane map generation unit 305.
- in step S401, the coordinate acquisition unit 302 performs perspective transformation on the image acquired by the image acquisition unit 301.
- the perspective transformation is a transformation process for projecting a three-dimensional object placed in a three-dimensional space onto a two-dimensional plane when viewed from an arbitrary viewpoint.
- the conversion process is performed using a two-dimensional plane as a reference plane (for example, a floor surface) arbitrarily set by the user.
- the height of the object is not taken into consideration, and therefore, when the perspective conversion is directly performed on the captured image shown in FIG. 6A, the state shown in FIG. 6B is obtained.
- the shape of a tall object such as a display is distorted.
- the transformation to the reference plane is performed using the perspective transformation matrix shown in Expression (1).
- the perspective transformation matrix is a 3 × 3 matrix P.
- (x, y) are the values of the spatial coordinates set in the monitoring area,
- (x′, y′) are the values of the coordinates on the two-dimensional plane, and
- w is a variable representing the perspective of the image, which differs depending on the viewpoint.
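The application of Expression (1) can be sketched as follows: the 3 × 3 matrix P maps (x, y) to (x′, y′) through homogeneous coordinates, with w carrying the viewpoint-dependent perspective scale. The patent does not give the entries of P, so the example matrix below is purely illustrative.

```python
# Hedged sketch of applying a 3x3 perspective (homography) matrix P.
# [w*x', w*y', w]^T = P @ [x, y, 1]^T, then divide by w to recover (x', y').

def perspective_transform(P, x, y):
    wx = P[0][0] * x + P[0][1] * y + P[0][2]
    wy = P[1][0] * x + P[1][1] * y + P[1][2]
    w  = P[2][0] * x + P[2][1] * y + P[2][2]
    return wx / w, wy / w

# Illustrative matrix: the bottom row adds a mild perspective falloff along y.
P_example = [[1.0, 0.0,   0.0],
             [0.0, 1.0,   0.0],
             [0.0, 0.001, 1.0]]
```

With the identity matrix the point is unchanged; with `P_example`, a point far along y is compressed by the growing w.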
- the coordinate acquisition unit 302 performs a smoothing process.
- the smoothing process is, for example, a process for removing a small image component such as noise from the entire image and creating a smooth image.
- the smoothing process in the present embodiment is a process of removing an image of an object that occupies a small area in the image.
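The "smoothing" above, interpreted as removing objects that occupy only a small area in the image, can be sketched as follows: label 4-connected foreground components of a binary image and erase any component smaller than a threshold. The threshold, connectivity, and binary-image form are assumptions for illustration; the patent does not specify the filter.

```python
# Hedged sketch: remove noise-sized connected components from a binary image.
from collections import deque

def remove_small_components(img, min_area):
    """img: list of lists of 0/1. Erases 4-connected components with area < min_area."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] and not seen[sy][sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while q:                       # flood-fill one component
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) < min_area:       # noise-sized blob: erase it
                    for y, x in comp:
                        img[y][x] = 0
    return img
```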
- in step S402, the coordinate acquisition unit 302 extracts the region of the first object.
- the area extraction process in the present embodiment is a process for obtaining the area, on the image, of a first object, such as a desk, that exists on the reference plane in the three-dimensional space.
- this region is referred to as “image region of the first object”.
- the coordinate acquisition unit 302 performs this region extraction using color information, for example. Specifically, the coordinate acquisition unit 302 extracts the image area of the first object based on, for example, a portion where the change in color shading is severe. Note that the coordinate acquisition unit 302 may extract an image region of the first object based on a portion where the color change is severe using an HSV system or an RGB system.
- the HSV system is a color space composed of three components of hue, saturation, and brightness / value.
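The color-based boundary cue above can be sketched as follows: convert RGB pixels to HSV with the standard library and flag positions where the hue changes sharply between horizontal neighbours. The threshold value and the hue-only comparison are illustrative assumptions; the patent only says a "severe" color change is used.

```python
# Hedged sketch of detecting severe color changes along a pixel row via HSV hue.
import colorsys

def severe_hue_changes(rgb_row, threshold=0.2):
    """rgb_row: list of (r, g, b) in 0..255. Returns indices i where the hue
    jump between pixel i and i+1 exceeds the (assumed) threshold."""
    # colorsys expects channels in [0, 1]; hue is returned in [0, 1)
    h = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0] for r, g, b in rgb_row]
    return [i for i in range(len(h) - 1) if abs(h[i + 1] - h[i]) > threshold]
```

A red-to-green transition (hue 0 to hue 1/3) is flagged, while uniform runs are not.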
- the subsequent processing is repeated for the number of image areas of first objects obtained in step S402. In FIG. 10, it is repeated for the number of desks, side shelves, and cabinets that are first objects.
- in step S403, the coordinate acquisition unit 302 determines whether the processing described below as steps S404 to S410 has been performed for all the first objects obtained in step S402. If not, the process proceeds to step S404; if so, the process ends.
- in step S404, the coordinate acquisition unit 302 extracts the upper surface portion of the object from the image area of the first object.
- the result of performing the top surface extraction process in this embodiment is as shown in FIG.
- the coordinate acquisition unit 302 obtains feature points using a stereo image for the extracted image area of the first object, and calculates the three-dimensional coordinates of each feature point.
- the feature point is a point in the stereo image where there is a specific change such as a sharp change in shading with surrounding pixels in each image.
- the coordinate acquisition unit 302 holds feature points using xy coordinate values in the stereo image.
- FIG. 7 shows a feature point extraction image. Square marks in the figure show examples of feature points.
- the coordinate acquisition unit 302 extracts, from among the feature points existing in the image area of the same first object, a plurality of feature points that have the same height and for which the color information of the area surrounded by those points is the same.
- the coordinate acquisition unit 302 extracts an area surrounded by the extracted feature points as an upper surface area of the first object.
- the coordinate acquisition unit 302 divides the image area of the first object into an upper surface area and an area other than the upper surface area.
- the region indicated by diagonal lines surrounded by feature points t1 (tx1, ty1), t2 (tx2, ty2), t3 (tx3, ty3), and t4 (tx4, ty4) in FIG. 6D is an example of an upper surface region.
- since the upper surface area is defined as an area surrounded by feature points in the image, its shape changes depending on the number of feature points. For example, with four feature points the upper surface area is a rectangle, and with five it is a pentagon.
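The grouping of feature points into a top face can be sketched as follows: among a first object's feature points, each carrying three-dimensional coordinates, keep the subset whose heights agree within a tolerance and treat the polygon they bound as the upper surface. The tolerance and the use of the median height as the candidate top-face height are assumptions for illustration.

```python
# Hedged sketch of selecting the feature points that form an object's top face.

def upper_surface_points(points, tol=0.02):
    """points: list of (x, y, z) feature coordinates for one first object.
    Returns the points whose height matches the dominant (median) height."""
    zs = sorted(p[2] for p in points)
    z_top = zs[len(zs) // 2]          # assumed candidate top-face height
    return [p for p in points if abs(p[2] - z_top) <= tol]
```

For a desk whose four corners sit at z = 0.7 m plus one stray floor-level point, the four corners are returned.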
- the extracted upper surface region (t1 to t4) is the result of perspective-transforming the image of the upper surface of the first object with respect to the reference plane. For this reason, the transformed upper surface region differs from the original size by an amount corresponding to the height of the desk above the reference plane.
- in step S405, the first plane map generation unit 303 moves the upper surface region so that the z coordinate value of the three-dimensional coordinates obtained in step S404 becomes the same as the z coordinate value of the reference plane; that is, it moves the upper surface area onto the reference plane. At that time, the first plane map generation unit 303 reduces or enlarges the region so that it becomes equal to the actual size (see FIG. 6E).
- the image of the first object on which the process of step S405 has been performed is hereinafter referred to as the "first object converted region".
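The rescaling in step S405 can be sketched under a simplifying assumption: a top face at height h, projected onto the floor from a camera at height H directly above it, appears enlarged by H / (H − h), so the plane map shrinks the region by the inverse factor and drops it onto the reference plane (z = 0). The overhead-camera geometry, function name, and `center` parameter are assumptions, not the patent's formulation.

```python
# Hedged sketch of step S405: rescale a floor-projected top face to actual size
# and move it onto the reference plane. Assumes a camera directly overhead.

def to_reference_plane(corners, cam_height, obj_height, center):
    """corners: list of (x, y, z) floor-projected top-face corners.
    Rescales them about `center` by (H - h) / H and sets z = 0."""
    s = (cam_height - obj_height) / cam_height   # undo projection magnification
    cx, cy = center
    return [(cx + s * (x - cx), cy + s * (y - cy), 0.0) for x, y, _ in corners]
```

For a camera at H = 3 m and a desk top at h = 1 m, the projected footprint is reduced by a factor of 2/3.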
- in step S406, the image region extraction unit 304 extracts, from the image extracted in step S404, the image area of a second object, such as a personal computer, a book, or a notebook, existing in the upper surface region of the first object (see FIG. 6F).
- this region is referred to as “image region of the second object”.
- the extraction of the image area of the second object is performed, for example, by area extraction using density information in the image corresponding to the upper surface area of the first object (see FIG. 6G). This process is repeated for the number of first-object regions in the image.
- the image area extraction unit 304 outputs the extracted coordinates of the image area of each second object to the coordinate acquisition unit 302.
- in step S407, the coordinate acquisition unit 302 calculates the three-dimensional coordinates of the image area of the second object extracted in step S406.
- the image area of the second object, which contains the representative points o1 (ox1, oy1), o2 (ox2, oy2), etc. in the image shown in FIG. 6G, lies on the upper surface of the desk on which the second object exists, and its height therefore differs from that of the reference plane.
- because of this height difference, the size of the image area of the second object differs from its actual size, in the same way as described for the first plane map generation unit 303.
- in step S408, the second plane map generation unit 305 makes the z coordinate value of the three-dimensional coordinates acquired in step S407 the same value as the z coordinate value of the reference plane. Specifically, the second plane map generation unit 305 adjusts the size of the area in the image and moves the image area of the second object as it is (that is, keeping the same shape). For example, the representative point o1 (ox1, oy1) of the region moves to O1 (Ox1, Oy1), and o2 (ox2, oy2) moves to O2 (Ox2, Oy2) (see FIG. 6H).
- the image of the second object that has been subjected to the processing in step S408 is referred to as a “second object converted region”.
- in this way, the plane map generation device 103 performs perspective transformation on the input image, acquires the upper surface area of the object, and converts it to a suitable position and size in the three-dimensional coordinates.
- the plane map generation device 103 likewise converts the position and size of the area of the second object in the input image.
- by performing different processes on the first object and the second object in this way, the plane map generation device 103 can generate a plane map in which a second object having height, such as a display, is displayed without distortion.
- the second object may be converted into a state as viewed from above by performing the same process as the process for the first object.
- in step S409, the first plane map generation unit 303 performs post-processing on areas that are blind spots from the camera, such as areas other than the top surface area of the first object (hereinafter collectively referred to as "blind spot areas"), filling them in using color information.
- the blind spot area corresponds to a side area of the first object, an occlusion area generated by the first object, an upper surface area before the processing in step S405, and the like.
- FIG. 8A shows the result of perspective transformation of the captured image.
- FIG. 8B shows the result of performing the processing from step S402 to step S408 on the perspective transformation result of FIG. 8A; that is, FIG. 8B is a diagram in which the extracted upper surface area of the first object has been converted to a suitable position and size in the three-dimensional coordinates.
- FIG. 8C overlays the blind spot areas of FIG. 8A on FIG. 8B; the parts corresponding to the blind spot areas are displayed in black.
- the first planar map generation unit 303 may fill the blind spot area with a color that does not exist in the office so that it can be distinguished from other areas when displayed on the planar map (see FIG. 8C).
- the first plane map generation unit 303 may fill the blind spot area by using surrounding colors (for example, information on the density of the floor that is the reference plane).
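The surrounding-color fallback above can be sketched as follows: fill each blind-spot cell with the most common color among its already-filled neighbours, standing in for "surrounding colors" such as the floor shade. The single-pass scan and 4-neighbourhood are assumptions for illustration; a real fill would iterate until no blind-spot cell remains.

```python
# Hedged sketch of filling blind-spot cells from surrounding color information.
from collections import Counter

def fill_blind_spots(grid):
    """grid[y][x] is a color label, or None for a blind-spot cell."""
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            if grid[y][x] is None:
                nbrs = [grid[ny][nx]
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] is not None]
                if nbrs:   # adopt the most common neighbouring color
                    out[y][x] = Counter(nbrs).most_common(1)[0][0]
    return out
```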
- in step S410, the superimposition processing unit 306 superimposes the converted area of the first object obtained in step S405 and the converted area of the second object obtained in step S408 on the plane map (see FIG. 6I).
- the superimposition processing unit 306 also performs a process of superimposing the blind spot area (filled blind spot area) obtained in step S409 on the planar map.
- the procedure of superimposition processing is as follows. First, the superimposition processing unit 306 arranges a region of the first object that is the first planar map, and combines the region of the second object that is the second planar map.
- the superimposition processing unit 306 then fills, with a predetermined color, any part of the blind spot region obtained in step S409 that is not covered by the converted regions of the first and second objects.
- in step S410, the superimposition processing unit 306 may complement the blind spot region extraction with information obtained from another camera, filling the side surface portion, the occlusion portion, and the like of the first object with the image information of the surrounding area.
- the plane map generation device 103 proceeds to step S403 after the process of step S410.
- FIG. 9 is a diagram showing a state in which a plurality of cameras are arranged in the office. As shown in this figure, in the office, a plurality of cameras are installed at suitable positions so that the number of blind spots that are not imaged is reduced.
- FIG. 10A is a camera image (image diagram) acquired from the camera.
- FIG. 10B is a planar map obtained by performing perspective transformation on the image of FIG. 10A, and
- FIG. 10C is a plane map obtained by the plane map generation device 103 processing the image of FIG. 10A.
- in FIG. 10B, the objects do not retain their original shapes and are greatly distorted, making the plane map very difficult to see.
- in FIG. 10C, a plane map that is easy to visually recognize is obtained.
- the monitoring system 100 separates the image area of the first object existing on the reference plane from the image area of the second object existing above the first object. Then, the monitoring system 100 adjusts the size and position of the upper surface area of the first object and of the image area of the second object, and synthesizes and displays them. Thereby, in Embodiment 1, a plane map with little distortion of objects can be generated.
- the planar map generation apparatus may have a configuration other than the one described above.
- FIG. 11 is a block diagram showing an internal configuration of another planar map generation apparatus 500 according to the first embodiment.
- a person detection unit 501 acquires an image from the image acquisition unit 301, and detects a person from the acquired image using the characteristics of the person. The detection result is output to the flow line display unit 502. Further, the flow line display unit 502 analyzes the trajectory of the movement of the person based on the detection result output from the person detection unit 501. Then, the flow line display unit 502 embeds the analyzed trajectory in the plane map as a human flow line, and displays it on the display 104.
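The flow-line step above can be sketched as follows: per-frame person positions, assumed already projected into plane-map coordinates (person detection itself is out of scope here), are ordered into a trajectory that could be drawn onto the plane map. The function name and data layout are illustrative assumptions.

```python
# Hedged sketch of turning per-frame person detections into a flow line.

def build_flow_line(detections):
    """detections: list of (frame_index, x, y) plane-map positions.
    Returns the (x, y) trajectory in frame order."""
    return [(x, y) for _, x, y in sorted(detections)]
```

Sorting by frame index yields the path the flow line display unit 502 would embed in the plane map.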
- the other planar map generation apparatus 500 can display a flow line indicating the movement of a person on the planar map, and can easily grasp the positional relationship of objects in the entire monitoring target area.
- the person moving in the room and the relationship between the movement of the person and the object can be easily confirmed on the display 104.
- the action of the person can be determined from the flow line. That is, by using another plane map generation device 500, it is possible to analyze the relationship between an object on a plane map and the movement of a person.
- with the other plane map generation apparatus 500, the movement of a person can be viewed in association with objects, and what an object is can further be identified. For example, if it is analyzed that an attendance management board exists at location A as an object related to people, information such as "There is an attendance management board at A" can be displayed at the edge of the plane map.
- by executing the plane map generation process at regular time intervals, the plane map generation device can display a state close to the latest even when a table, equipment, or the like moves within the monitoring target area. If the state in the monitoring target area does not change, the process may be executed only at the time of installation (and not executed again until a change occurs).
- the camera group has been described as a stereo camera, but two or more cameras may be used.
- stereo matching using a stereo image is used for calculating coordinates, but a distance measuring sensor or the like may be used.
- the fill color of the blind spot area on the generated planar map may be set with a value that unifies color information, such as a value corresponding to black.
- the monitoring system 100 obtains feature points in the image and calculates their three-dimensional coordinates. Next, when the feature points have the same height and the colors in the region surrounded by them are the same, the monitoring system 100 acquires the enclosed region as the upper surface region.
- the height condition need not be the same as the height of the feature points, and may be substantially the same.
- the color condition may be that the colors in the region are similar. Further, the condition for acquiring the upper surface region may be that either one of the height condition and the color condition is satisfied.
- the pasting unit 307 may fill a portion where no overlapping area exists with image information of a surrounding area or image information of a reference plane.
- FIG. 13 is a block diagram showing an internal configuration of planar map generating apparatus 600 according to Embodiment 2 of the present invention.
- FIG. 13 differs from FIG. 4 in that a difference detection unit 601 and a difference determination unit 602 are added.
- the difference detection unit 601 holds the plane map output from the pasting unit 307 and detects the difference between the plane map output last time and the plane map output this time.
- when the difference detection unit 601 detects a difference, it outputs difference information indicating the content of the difference to the difference determination unit 602.
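The difference detection can be sketched as follows: compare the previously output plane map with the current one cell by cell and report the changed positions as difference information. Real input would be pixel data; the object labels and the returned tuple layout are illustrative assumptions.

```python
# Hedged sketch of detecting differences between two consecutive plane maps.

def detect_differences(prev_map, curr_map):
    """Both maps are equal-sized 2-D grids. Returns [(y, x, before, after)]
    for every cell whose content changed between the two maps."""
    diffs = []
    for y, (prow, crow) in enumerate(zip(prev_map, curr_map)):
        for x, (p, c) in enumerate(zip(prow, crow)):
            if p != c:
                diffs.append((y, x, p, c))
    return diffs
```

A cell that held a registered object in the previous map but is empty now would surface here, and the difference determination unit could then raise an alarm.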
- the difference determination unit 602 registers objects existing in the office in advance and determines whether an object corresponding to the difference information output from the difference detection unit 601 is registered. If the object is registered, the difference determination unit 602 regards it as lost and, for example, issues an alarm by displaying a message to that effect on the display 104.
- for the alarm, the plane map generation device 600 may add a notification to the plane map with color or text, or may prepare a warning light separate from the display 104 and blink it. Further, the plane map generation device 600 may directly contact the office management department or the like together with, or instead of, these alarms.
- the plane map generation apparatus 600 detects the difference between the previously output plane map and the currently output plane map, and determines whether an object corresponding to the difference information is registered. As a result, the plane map generation apparatus 600 can manage the status of objects present in the office.
- the planar map generation apparatus may have a configuration other than the one described above.
- FIG. 14 is a block diagram showing an internal configuration of another planar map generation apparatus 700 according to the second embodiment.
- the person detection unit 501 acquires an image from the image acquisition unit 301, extracts the characteristics of the person from the acquired image, and detects a person.
- the detection result is output to the flow line display unit 502.
- the flow line display unit 502 analyzes the locus of movement of the person based on the detection result output from the person detection unit 501. Then, the flow line display unit 502 embeds the analyzed trajectory as a human flow line on the plane map and causes the display 104 to display it.
- in this way, the plane map generation device 700 can display a flow line indicating a person's movement on the plane map. Furthermore, when an object in the room goes missing, the plane map generation device 700 can grasp the relationship between the person and the object, such as when the object went missing, which makes it possible to identify the cause of the loss.
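A minimal sketch of this flow-line idea in plain Python. The character-grid map representation and the function names are illustrative assumptions, not the patent's implementation:

```python
def accumulate_flow_line(detections):
    """Collect per-frame person positions (cf. the detection results of
    unit 501) into an ordered trajectory, skipping frames with no detection."""
    return [pos for pos in detections if pos is not None]

def draw_flow_line(plane_map, trajectory, mark="*"):
    """Overlay the trajectory on a character-grid plane map
    (cf. unit 502 embedding the flow line into the plane map)."""
    grid = [row[:] for row in plane_map]  # copy so the base map is untouched
    for x, y in trajectory:
        grid[y][x] = mark
    return grid

plane_map = [["." for _ in range(5)] for _ in range(3)]
trajectory = accumulate_flow_line([(0, 0), None, (1, 1), (2, 2)])
overlaid = draw_flow_line(plane_map, trajectory)
print(trajectory)      # [(0, 0), (1, 1), (2, 2)]
print(overlaid[1][1])  # *
```

A real system would interpolate between detections and time-stamp each point so the "when did the object disappear" question above can be answered by aligning the flow line with the difference-detection timeline.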
- the image processing apparatus and the image processing method according to the present invention can be applied to a security management system, a safety management system, and the like.
Abstract
Description
FIG. 2 is a diagram showing a monitoring system 100 according to Embodiment 1 of the present invention. The monitoring system 100 of FIG. 2 includes a camera group 101, a network 102, a plane map generation device 103, and a display 104.
FIG. 13 is a block diagram showing the internal configuration of a plane map generation device 600 according to Embodiment 2 of the present invention. FIG. 13 differs from FIG. 4 in that a difference detection unit 601 and a difference determination unit 602 are added. The difference detection unit 601 holds the plane map output from the stitching unit 307 and detects the difference between the previously output plane map and the currently output plane map. When it detects a difference, the difference detection unit 601 outputs difference information indicating the content of the difference to the difference determination unit 602.
102 Network
103, 500, 600, 700 Plane map generation device
104 Display
301 Image acquisition unit
302 Coordinate acquisition unit
303 First plane map generation unit
304 Image region extraction unit
305 Second plane map generation unit
306 Superimposition processing unit
307 Stitching unit
501 Person detection unit
502 Flow line display unit
601 Difference detection unit
602 Difference determination unit
Claims (11)
- An image processing apparatus comprising: an image acquisition unit that acquires an image in which an object is captured and from which three-dimensional coordinates of the object can be calculated; a coordinate acquisition unit that extracts, from the image, a top surface of a first object present on a reference plane, and acquires coordinates of feature points of the extracted top surface; and a first plane map generation unit that generates a first plane map by using the acquired coordinates to adjust the size and position of the top surface so that the coordinate value corresponding to the direction perpendicular to the reference plane becomes the same value as the reference plane.
- The image processing apparatus according to claim 1, further comprising: an image region extraction unit that extracts, from the image, an image region corresponding to the position of a second object present on the first object; a second plane map generation unit that acquires coordinates of a representative point of the image region, and generates a second plane map by using the acquired coordinates of the representative point to adjust the size and position of the image region so that the coordinate value corresponding to the direction perpendicular to the reference plane becomes the same value as the reference plane; and a superimposition unit that superimposes the first plane map and the second plane map to generate a third plane map.
- The image processing apparatus according to claim 2, further comprising a stitching unit that stitches together a plurality of the third plane maps generated from images captured at a plurality of different positions.
- The image processing apparatus according to claim 2, wherein, when superimposing the first plane map and the second plane map, the superimposition unit fills in portions where no overlapping region exists with image information of the surrounding region.
- The image processing apparatus according to claim 2, wherein, when superimposing the first plane map and the second plane map, the superimposition unit fills in portions where no overlapping region exists with image information of the reference plane.
- The image processing apparatus according to claim 3, wherein, when stitching together the plurality of third plane maps, the stitching unit gives priority, in portions where an overlapping region exists, to the third plane map whose distance to the camera is shorter.
- The image processing apparatus according to claim 3, wherein, when stitching together the plurality of third plane maps, the stitching unit fills in portions where no overlapping region exists with image information of the surrounding region.
- The image processing apparatus according to claim 3, wherein, when stitching together the plurality of third plane maps, the stitching unit fills in portions where no overlapping region exists with image information of the reference plane.
- The image processing apparatus according to claim 2, further comprising a difference detection unit that detects a difference between the previously generated third plane map and the currently generated third plane map.
- The image processing apparatus according to claim 2, further comprising: a person detection unit that detects a person; and a flow line display unit that represents, on the third plane map, a trajectory along which the detected person has moved as a flow line.
- An image processing method comprising: an image acquisition step of acquiring an image in which an object is captured and from which three-dimensional coordinates of the object can be calculated; a coordinate acquisition step of extracting, from the image, a top surface of a first object present on a reference plane, and acquiring coordinates of feature points of the extracted top surface; and a first plane map generation step of generating a first plane map by using the acquired coordinates to adjust the size and position of the top surface so that the coordinate value corresponding to the direction perpendicular to the reference plane becomes the same value as the reference plane.
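The first plane-map generation step of the method claim above can be sketched as follows. This is a hedged illustration only: the coordinate convention (z as the direction perpendicular to the reference plane), the reference-plane value, and the function name are assumptions, not the patent's implementation:

```python
def project_to_reference_plane(feature_points_3d, reference_z=0.0):
    """Adjust the top surface's feature points so that the coordinate value
    corresponding to the direction perpendicular to the reference plane
    takes the reference plane's value, yielding a top-down footprint."""
    return [(x, y, reference_z) for (x, y, _z) in feature_points_3d]

# Feature points of a first object's top surface, 0.7 above the reference
# plane (three-dimensional coordinates assumed computable from the image).
top_surface = [(1.0, 2.0, 0.7), (2.0, 2.0, 0.7),
               (2.0, 3.0, 0.7), (1.0, 3.0, 0.7)]
footprint = project_to_reference_plane(top_surface)
print(footprint[0])  # (1.0, 2.0, 0.0)
```

A full implementation would also rescale the projected footprint to compensate for the perspective magnification of surfaces closer to the camera, which is the "adjusting the size and position" part of the claim.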
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN200980149254.7A CN102246201B (zh) | 2008-12-12 | 2009-08-04 | Image processing apparatus and image processing method |
US13/132,504 US8547448B2 (en) | 2008-12-12 | 2009-08-04 | Image processing device and image processing method to generate a plan map |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008-317378 | 2008-12-12 | ||
JP2008317378 | 2008-12-12 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010067486A1 true WO2010067486A1 (ja) | 2010-06-17 |
Family
ID=42242485
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2009/003718 WO2010067486A1 (ja) | 2009-08-04 | Image processing apparatus and image processing method |
Country Status (4)
Country | Link |
---|---|
US (1) | US8547448B2 (ja) |
JP (1) | JP5349224B2 (ja) |
CN (1) | CN102246201B (ja) |
WO (1) | WO2010067486A1 (ja) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012118666A (ja) * | 2010-11-30 | 2012-06-21 | Iwane Laboratories Ltd | Automatic three-dimensional map generation device |
JP6132275B2 (ja) | 2012-07-02 | 2017-05-24 | Panasonic Intellectual Property Management Co., Ltd. | Size measuring device and size measuring method |
US9886636B2 (en) * | 2013-05-23 | 2018-02-06 | GM Global Technology Operations LLC | Enhanced top-down view generation in a front curb viewing system |
JP6517515B2 (ja) * | 2015-01-20 | 2019-05-22 | Nohmi Bosai Ltd. | Monitoring system |
CN108140309B (zh) * | 2015-11-20 | 2020-12-08 | Mitsubishi Electric Corporation | Driving assistance device, driving assistance system, and driving assistance method |
JP2018048839A (ja) | 2016-09-20 | 2018-03-29 | Fanuc Corporation | Three-dimensional data generation device, three-dimensional data generation method, and monitoring system including the three-dimensional data generation device |
US10409288B2 (en) | 2017-05-03 | 2019-09-10 | Toyota Research Institute, Inc. | Systems and methods for projecting a location of a nearby object into a map according to a camera image |
WO2019104732A1 (zh) * | 2017-12-01 | 2019-06-06 | 深圳市沃特沃德股份有限公司 | Vision-based sweeping robot and obstacle detection method |
JP7081140B2 (ja) * | 2017-12-25 | 2022-06-07 | Fujitsu Limited | Object recognition device, object recognition method, and object recognition program |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002329195A (ja) * | 2001-04-27 | 2002-11-15 | Sumitomo Electric Ind Ltd | Image processing device, image processing method, and vehicle monitoring system |
JP2002342758A (ja) * | 2001-05-15 | 2002-11-29 | Osamu Hasegawa | Visual recognition system |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6813371B2 (en) * | 1999-12-24 | 2004-11-02 | Aisin Seiki Kabushiki Kaisha | On-vehicle camera calibration device |
US7321386B2 (en) * | 2002-08-01 | 2008-01-22 | Siemens Corporate Research, Inc. | Robust stereo-driven video-based surveillance |
JP3994950B2 (ja) * | 2003-09-19 | 2007-10-24 | Sony Corporation | Environment recognition apparatus and method, route planning apparatus and method, and robot apparatus |
JP4681856B2 (ja) * | 2004-11-24 | 2011-05-11 | Aisin Seiki Co., Ltd. | Camera calibration method and camera calibration device |
JP2008537190A (ja) * | 2005-01-07 | 2008-09-11 | GestureTek, Inc. | Generating a three-dimensional image of an object by projecting an infrared pattern |
JP4720446B2 (ja) | 2005-11-10 | 2011-07-13 | Toyota Motor Corporation | Vehicle detection device and vehicle display device using the same |
US7899211B2 (en) * | 2005-12-07 | 2011-03-01 | Nissan Motor Co., Ltd. | Object detecting system and object detecting method |
US8073617B2 (en) | 2006-12-27 | 2011-12-06 | Aisin Aw Co., Ltd. | Map information generating systems, methods, and programs |
JP2008164831A (ja) | 2006-12-27 | 2008-07-17 | Aisin Aw Co Ltd | Map information generation system |
- 2009
- 2009-08-04 WO PCT/JP2009/003718 patent/WO2010067486A1/ja active Application Filing
- 2009-08-04 US US13/132,504 patent/US8547448B2/en active Active
- 2009-08-04 CN CN200980149254.7A patent/CN102246201B/zh active Active
- 2009-09-10 JP JP2009209304A patent/JP5349224B2/ja active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002329195A (ja) * | 2001-04-27 | 2002-11-15 | Sumitomo Electric Ind Ltd | Image processing device, image processing method, and vehicle monitoring system |
JP2002342758A (ja) * | 2001-05-15 | 2002-11-29 | Osamu Hasegawa | Visual recognition system |
Also Published As
Publication number | Publication date |
---|---|
JP5349224B2 (ja) | 2013-11-20 |
JP2010160785A (ja) | 2010-07-22 |
CN102246201A (zh) | 2011-11-16 |
CN102246201B (zh) | 2014-04-02 |
US8547448B2 (en) | 2013-10-01 |
US20110261221A1 (en) | 2011-10-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5349224B2 (ja) | Image processing apparatus and image processing method | |
CN109561296B (zh) | Image processing apparatus, image processing method, image processing system, and storage medium | |
KR101506610B1 (ko) | Augmented reality providing apparatus and method thereof | |
US8994721B2 (en) | Information processing apparatus, information processing method, and program for extending or expanding a viewing area of content displayed on a 2D workspace into a 3D virtual display screen | |
CN107484428B (zh) | Method for displaying an object | |
US20190354799A1 (en) | Method of Determining a Similarity Transformation Between First and Second Coordinates of 3D Features | |
JP2022022434A (ja) | Image processing apparatus, image processing method, and program | |
KR101822471B1 (ko) | Virtual reality system using mixed reality and implementation method thereof | |
US20140192055A1 (en) | Method and apparatus for displaying video on 3d map | |
JP2017201745A (ja) | Image processing apparatus, image processing method, and program | |
CN109155055B (zh) | Region-of-interest image generation device | |
CN103051867A (zh) | Image generator | |
JP3301421B2 (ja) | Vehicle surrounding situation presentation device | |
JP6447706B1 (ja) | Calibration data generation device, calibration data generation method, calibration system, and control program | |
JP2021067469A (ja) | Distance estimation device and method | |
US20150016673A1 (en) | Image processing apparatus, image processing method, and program | |
JP6640294B1 (ja) | Mixed reality system, program, portable terminal device, and method | |
JP6362401B2 (ja) | Image processing apparatus and method for controlling image processing apparatus | |
US11043019B2 (en) | Method of displaying a wide-format augmented reality object | |
JP7341736B2 (ja) | Information processing apparatus, information processing method, and program | |
JP2008177856A (ja) | Bird's-eye view image providing device, vehicle, and bird's-eye view image providing method | |
JP2006318015A (ja) | Image processing apparatus, image processing method, image display system, and program | |
JP6744237B2 (ja) | Image processing device, image processing system, and program | |
KR101850134B1 (ko) | Method and apparatus for generating three-dimensional motion model | |
JP4715187B2 (ja) | Image processing apparatus and image processing system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200980149254.7 Country of ref document: CN |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09831595 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 13132504 Country of ref document: US |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 09831595 Country of ref document: EP Kind code of ref document: A1 |