JP4525019B2 - Status detection apparatus and method, image processing apparatus and method, program, program recording medium, data structure, and data recording medium

Info

Publication number
JP4525019B2
Authority
JP
Japan
Prior art keywords
image
brightness
space
area
brightness signal
Prior art date
Legal status
Expired - Fee Related
Application number
JP2003281478A
Other languages
Japanese (ja)
Other versions
JP2005051511A (en)
Inventor
義教 渡邊
哲二郎 近藤
Original Assignee
ソニー株式会社
Priority date
Filing date
Publication date
Application filed by ソニー株式会社 filed Critical ソニー株式会社
Priority to JP2003281478A
Publication of JP2005051511A
Application granted
Publication of JP4525019B2

Description

  The present invention relates to a state detection apparatus and method, an image processing apparatus and method, a program, a program recording medium, a data structure, and a data recording medium, and more particularly to a state detection apparatus and method, an image processing apparatus and method, a program, a program recording medium, a data structure, and a data recording medium capable of detecting the state of an object with low power consumption.

  As a monitoring system for monitoring a space such as an indoor space, there is, for example, a system in which an abnormality detection unit is provided for each of a plurality of cameras and only the signal of the camera in which an abnormality has been detected by the abnormality detection unit is displayed, thereby improving the efficiency of the monitoring work (see, for example, Patent Document 1).

  In addition, there is a monitoring system in which sensor information sent from sensors installed in the area captured by an imaging device is displayed superimposed on the image captured by the imaging device, so that when an abnormality occurs in any of many areas, the area in which it occurred can be identified immediately (see, for example, Patent Document 2).

Furthermore, in order to improve the reliability of the monitoring system, a technique for correcting the sensitivity of the infrared sensor and the processing conditions of its output using image information from the image sensor has been proposed (see, for example, Patent Document 3).
[Patent Document 1] JP-A-7-2127748
[Patent Document 2] JP-A-8-124078
[Patent Document 3] JP-A-2000-339554

  However, in a conventional monitoring system, both the sensor that detects an abnormality (change) in the monitored space and the camera that images the space must always be kept in an operating state, and power was therefore consumed constantly. For this reason, the power consumption of the monitoring system as a whole was also large.

  The present invention has been made in view of such a situation, and an object thereof is to monitor the state of an object with low power consumption.

The state detection apparatus of the present invention is characterized by comprising: acquisition means for acquiring a brightness signal from brightness detection means that detects the brightness of a space and outputs a brightness signal representing the average value of the brightness; relation information creation means for creating relation information, which indicates the relationship between the position and area of an object on an image obtained by imaging the object in the space and the brightness signal of the space in which the object exists, by registering the corresponding brightness signal at a point specified from the position and area of the object on the image; and state detection means for identifying the position and area of the object on the image on the basis of the brightness signal of the relation information corresponding to the brightness signal acquired from the brightness detection means, and detecting the position of the object in the space from the identified position and area of the object on the image.

The state detection method of the present invention is characterized by including: an acquisition step of acquiring a brightness signal from brightness detection means that detects the brightness of a space and outputs a brightness signal representing the average value of the brightness; a relation information creation step of creating relation information, which indicates the relationship between the position and area of an object on an image obtained by imaging the object in the space and the brightness signal of the space in which the object exists, by registering the corresponding brightness signal at a point specified from the position and area of the object on the image; and a state detection step of identifying the position and area of the object on the image on the basis of the brightness signal of the relation information corresponding to the brightness signal acquired from the brightness detection means, and detecting the position of the object in the space from the identified position and area of the object on the image.

The first program of the present invention is characterized by including: an acquisition step of acquiring a brightness signal from brightness detection means that detects the brightness of a space and outputs a brightness signal representing the average value of the brightness; a relation information creation step of creating relation information, which indicates the relationship between the position and area of an object on an image obtained by imaging the object in the space and the brightness signal of the space in which the object exists, by registering the corresponding brightness signal at a point specified from the position and area of the object on the image; and a state detection step of identifying the position and area of the object on the image on the basis of the brightness signal of the relation information corresponding to the brightness signal acquired from the brightness detection means, and detecting the position of the object in the space from the identified position and area of the object on the image.

The first program recorded on the first program recording medium of the present invention is characterized by including: an acquisition step of acquiring a brightness signal from brightness detection means that detects the brightness of a space and outputs a brightness signal representing the average value of the brightness; a relation information creation step of creating relation information, which indicates the relationship between the position and area of an object on an image obtained by imaging the object in the space and the brightness signal of the space in which the object exists, by registering the corresponding brightness signal at a point specified from the position and area of the object on the image; and a state detection step of identifying the position and area of the object on the image on the basis of the brightness signal of the relation information corresponding to the brightness signal acquired from the brightness detection means, and detecting the position of the object in the space from the identified position and area of the object on the image.

The image processing apparatus of the present invention is characterized by comprising: background image creation means for creating a background image from a captured image output by imaging means that images a space; object image extraction means for extracting, using the captured image and the background image, an object image that is an image of an object existing in the space; and relation information creation means for creating relation information, which indicates the relationship between the position and area of the object on the captured image and a brightness signal representing the average value of the brightness of the space in which the object exists, by obtaining the position and area of the object on the captured image on the basis of the object image and registering the corresponding brightness signal at a point specified from the position and area of the object.

The image processing method of the present invention is characterized by including: a background image creation step of creating a background image from a captured image output by imaging means that images a space; an object image extraction step of extracting, using the captured image and the background image, an object image that is an image of an object existing in the space; and a relation information creation step of creating relation information, which indicates the relationship between the position and area of the object on the captured image and a brightness signal representing the average value of the brightness of the space in which the object exists, by obtaining the position and area of the object on the captured image on the basis of the object image and registering the corresponding brightness signal at a point specified from the position and area of the object.

The second program of the present invention is characterized by including: a background image creation step of creating a background image from a captured image output by imaging means that images a space; an object image extraction step of extracting, using the captured image and the background image, an object image that is an image of an object existing in the space; and a relation information creation step of creating relation information, which indicates the relationship between the position and area of the object on the captured image and a brightness signal representing the average value of the brightness of the space in which the object exists, by obtaining the position and area of the object on the captured image on the basis of the object image and registering the corresponding brightness signal at a point specified from the position and area of the object.

The program recorded on the second program recording medium of the present invention is characterized by including: a background image creation step of creating a background image from a captured image output by imaging means that images the interior of a space; an object image extraction step of extracting, using the captured image and the background image, an object image that is an image of an object existing in the space; and a relation information creation step of creating relation information, which indicates the relationship between the position and area of the object on the captured image and a brightness signal representing the average value of the brightness of the space in which the object exists, by obtaining the position and area of the object on the captured image on the basis of the object image and registering the corresponding brightness signal at a point specified from the position and area of the object.

In the state detection apparatus and method, the first program, and the program recorded on the first program recording medium of the present invention, a brightness signal representing the average value of the brightness of the space is acquired, and relation information indicating the relationship between the position and area of an object on an image obtained by imaging the object in the space and the brightness signal of the space in which the object exists is created by registering the corresponding brightness signal at a point specified from the position and area of the object on the image. The position and area of the object on the image are identified on the basis of the brightness signal of the relation information corresponding to the acquired brightness signal, and the position of the object in the space is detected from the identified position and area of the object on the image.

  In the image processing apparatus, the image processing method, the second program, and the program recorded on the second program recording medium of the present invention, a background image is created from the captured image, and an object image is extracted using the captured image and the background image. Based on the object image, the position and area of the object on the captured image are obtained, and relation information indicating the relationship between the position and area of the object and the brightness signal of the space in which the object exists is created by registering the corresponding brightness signal at the point specified from the position and area of the object.

  According to the present invention, low power consumption can be achieved.

  Hereinafter, embodiments of the present invention will be described with reference to the drawings.

  FIG. 1 shows a configuration example of an embodiment of a monitoring system to which the present invention is applied.

  The image capturing apparatus 1A is an apparatus that acquires an image of a space to be monitored (hereinafter referred to as a monitoring space as appropriate), and supplies the captured image data to the information processing apparatus 2. Further, the image capturing device 1A can change the state of the power supply, such as turning on and off the power supply in response to a signal from the information processing device 2.

  The information processing device 2 is supplied with the image data from the image capturing device 1A and the output of the sensor 4A installed in the monitoring space. The information processing apparatus 2 performs various processes for monitoring the monitoring space by appropriately using the image data from the image capturing apparatus 1A, the output of the sensor 4A, and the information stored in the recording medium. Furthermore, the information processing device 2 can communicate with the information terminal device 6 via the network 5, and transmits a monitoring result of the monitoring space to the remote information terminal device 6 as necessary.

  The recording medium 3 is configured by, for example, a magnetic disk, an optical disk, a semiconductor memory, and the like, and stores data necessary for the information processing apparatus 2 to perform processing.

  The sensor 4A is installed in the same space as the image capturing device 1A, that is, in the monitoring space. The sensor 4A detects the brightness in the monitoring space and outputs an average luminance value, which is the average value of that brightness. The average luminance value output from the sensor 4A is supplied to the information processing apparatus 2. The sensor 4A may be any sensor, such as a photosensor, that can detect the brightness of a certain space (region) and output a brightness signal representing the average value of the brightness. The sensor 4A can also detect the brightness of individual colors by using color filters in the photosensor. By detecting the brightness in the space for a plurality of colors at the same time and creating a map, described later, that describes the brightness of the plurality of colors, the accuracy of detecting the state of an object, also described later, can be improved.

  The information terminal device 6 is connected to the information processing device 2 at a remote location via a network 5 such as the Internet, a wireless or wired LAN (Local Area Network), or a telephone line. The information terminal device 6 is operated by the operator of the monitoring system in FIG. 1 and, in response to the operation, transmits a signal for controlling the information processing device 2 to the information processing device 2 via the network 5. In addition, the information terminal device 6 receives the monitoring result data of the monitoring space transmitted from the information processing device 2 via the network 5 and presents it to the operator by displaying it on a monitor (not shown).

  FIG. 2 shows a detailed configuration example of the image capturing device 1A, the information processing device 2, the recording medium 3, and the sensor 4A of FIG.

  In FIG. 2, the information processing apparatus 2 includes a background image creation unit 11, an object extraction unit 13, a luminance average processing unit 15, a map creation unit 16, an AD conversion unit 18, a change detection unit 19, a map matching unit 20, a position output unit 22, and a control unit 23. The recording medium 3 is configured such that its recording area is divided into a background holding unit 12, an object holding unit 14, a map holding unit 17, and a position data holding unit 21.

  The camera (video camera) 1, which is an example of the image capturing device 1A, images the monitoring space and supplies the image data obtained by the imaging to the background image creation unit 11, the object extraction unit 13, and the luminance average processing unit 15. Note that the image capturing device 1A can be configured by a plurality of cameras instead of the single camera 1. Further, the image capturing device 1A can be provided with a drive mechanism for changing the imaging direction by rotating the camera 1 or the like.

  Here, the camera 1 can be turned on / off (including a standby state) in accordance with control from the control unit 23.

  The background image creation unit 11 creates background image data from the image data of the captured images of the monitoring space supplied from the camera 1. The captured images supplied to the background image creation unit 11 may include objects other than the background, such as a moving object (for example, a passerby passing through the monitoring space). Therefore, the background image creation unit 11 creates the background image data by, for example, time-averaging image data supplied from the camera 1 over a long period, that is, by obtaining the average value of the image data of a large number of consecutive frames. The background image data created by the background image creation unit 11 is output from the background image creation unit 11 to the background holding unit 12 and stored therein.
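  As an illustration only (this sketch is not part of the patent text), the time-averaging performed by the background image creation unit 11 could look like the following Python fragment; the representation of frames as numpy arrays and the function name are assumptions introduced for this example.

import numpy as np

def create_background(frames):
    # Average many consecutive frames; moving objects are smeared out,
    # leaving (approximately) only the static background.
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)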

  Note that when the camera 1 can capture an image containing only the background, the background image creation unit 11 can supply the image data of one such frame supplied from the camera 1, as it is, to the background holding unit 12 as the background image data.

  The background holding unit 12 stores the background image data created by the background image creating unit 11 or the like. The background holding unit 12 outputs the stored background image data to the map creating unit 16.

  The object extraction unit 13 extracts an object by performing image processing on the captured image supplied from the camera 1 and the background image. That is, for example, the object extraction unit 13 obtains the difference between the captured image data acquired by the camera 1 and the background image data stored in the background holding unit 12, and performs a process of extracting, as an object image, the portion of the captured image data in the range where the difference is large. When extracting the object image from the image data from the camera 1, the object extraction unit 13 obtains the position and area of the object (image) on the frame of the image data. Here, the position of the center of gravity of the area occupied by the object image can be adopted as the position of the object, and a value corresponding to the number of pixels in the area occupied by the object image can be adopted as the area of the object. The object image extracted by the object extraction unit 13 is associated with its position and area, and is output to and stored in the object holding unit 14.
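  Purely as an illustration (the threshold value and the function name below are assumptions, not values from the patent), the background-difference extraction and the position and area computation described above could be sketched as follows.

import numpy as np

def extract_object(frame, background, threshold=30.0):
    # Keep pixels whose difference from the background is large.
    diff = np.abs(frame.astype(np.float64) - background)
    mask = diff > threshold
    area = int(mask.sum())                 # area as a pixel count
    if area == 0:
        return None, 0, mask               # no object found
    ys, xs = np.nonzero(mask)
    centroid = (float(xs.mean()), float(ys.mean()))  # center of gravity as the position
    return centroid, area, mask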

  The object holding unit 14 stores the object image data extracted by the object extracting unit 13 and the position and area of the object on the image. Further, the object holding unit 14 outputs the stored object image and the position and area of the object to the map creating unit 16.

  The luminance average processing unit 15 performs processing for calculating the average value of the luminance values of the background image stored in the background holding unit 12 and of the object image stored in the object holding unit 14. That is, the luminance average processing unit 15 performs a process of dividing the background image into strips in the horizontal direction or the vertical direction and calculating the average value of the luminance values for each strip. Further, the luminance average processing unit 15 obtains the average value of the luminance values of the object image data. The average value of the luminance values for each strip of the background image and the average value of the luminance values of the object image calculated by the luminance average processing unit 15 are output to the map creation unit 16.

  The map creation unit 16 creates a map from the background image data stored in the background holding unit 12, the object image data stored in the object holding unit 14, the average values of the luminance values of the various images calculated by the luminance average processing unit 15, and the like. The map created by the map creation unit 16 is output to the map holding unit 17. The map takes the state of the object, that is, here, for example, the position and area of the object on the image, as its axes, and describes, at the point specified by each position and area, the average value of the luminance values of the image. The map will be described later together with the principle of the present invention.

  The photosensor 4, which is an example of the sensor 4A, detects the brightness in the same monitoring space as that imaged by the camera 1 and outputs, for example, a luminance value as a brightness signal representing the average value of the brightness. Here, the photosensor 4 may output, instead of the luminance value, a brightness signal corresponding to the average brightness of R, G, or B light in the monitoring space; if a color filter is used for the photosensor 4, the brightness of the corresponding color can be detected. Further, the brightness in the monitoring space may be detected divided among a plurality of photosensors, or may be detected redundantly by them. Note that, unless otherwise specified, the case of one photosensor as the minimum unit will be described here. The photosensor detection luminance value, which represents the average brightness in the monitoring space detected by the photosensor 4, is output to an AD (Analog Digital) conversion unit 18.

  The AD conversion unit 18 converts the photosensor detection luminance value of the photosensor 4 from an analog signal to a digital signal and outputs it to the change detection unit 19.

  Based on the photosensor detection luminance value supplied from the AD conversion unit 18, the change detection unit 19 detects whether or not the state in the monitoring space has changed, that is, for example, whether a person has entered the monitoring space or a person already in the space has moved. When the change detection unit 19 detects a change in the state in the monitoring space, it outputs a change detection signal indicating that to the control unit 23 and supplies the photosensor detection luminance value to the map matching unit 20.

  The map matching unit 20 compares the photosensor detection luminance value supplied from the change detection unit 19 with the map stored in the map holding unit 17, and supplies to the position data holding unit 21 the state of the object specified by the point corresponding to the luminance value on the map close to the photosensor detection luminance value, that is, the position and area of the object on the image that would be obtained if the camera 1 imaged the monitoring space. If the difference between the photosensor detection luminance value and the closest luminance value on the map is larger than a predetermined threshold, the map matching unit 20 outputs an error signal to the control unit 23.

  The position data holding unit 21 stores the state (information) of the object supplied from the map matching unit 20. The position data holding unit 21 can store the state of the object supplied from the map matching unit 20 together with the current time. In this case, the state of the object in the monitoring space can be recognized in time series.

  The position output unit 22 reads and outputs the state (information) of the object stored in the position data holding unit 21. That is, the position output unit 22 transmits, for example, the state of the object to the information terminal device 6 via the network 5 or displays it on a display (not shown).

  In response to the error signal supplied from the map matching unit 20, the change detection signal supplied from the change detection unit 19, or a signal from a module (not shown), the control unit 23 performs power on/off control of the camera 1 and control of modules (not shown).

  Here, the modules (not shown) include, for example, a module that receives a signal transmitted from the information terminal device 6 and supplies it to the control unit 23, and a module that, under the control of the control unit 23, transmits to the information terminal device 6 information notifying it of an abnormality detected in the monitoring space.

  Next, the principle of the present invention will be described with reference to FIGS.

  FIG. 3 shows image data (a captured image) obtained by imaging, with the camera 1, the monitoring space in which no object exists. It is assumed that the camera 1 or the photosensor 4 is adjusted so that, when the camera 1 and the photosensor 4 monitor the same monitoring space, the average value of the luminance values of the image output by the camera 1 and the photosensor detection luminance value output by the photosensor 4 take the same value.

  Now, suppose that the background image of FIG. 3 is divided equally into eight strips A to H in the horizontal direction (X direction), for example, as indicated by the arrow, and that the average values of the luminance values of the strips A to H are 10, 20, 30, 40, 50, 60, 70, and 80, respectively. Since the average value of the luminance values of the background image of FIG. 3 is equal to the average of the average values of the strips A to H, it can be calculated as (10 + 20 + 30 + 40 + 50 + 60 + 70 + 80) / 8 = 45 in the case of FIG. 3. The photosensor detection luminance value at this time is a value that matches the average value 45 of the luminance values of the background image.
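  As an illustration only, the arithmetic of this example can be written out as the following small Python fragment (the strip values are the assumed values above, not measured data).

strip_averages = [10, 20, 30, 40, 50, 60, 70, 80]   # strips A to H
background_average = sum(strip_averages) / len(strip_averages)
print(background_average)   # 45.0, the photosensor detection luminance value for the empty space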

  Next, a change in the photosensor detection luminance value when an object enters the monitoring space will be described. Here, in order to simplify the description, it is assumed that the object has a size that is an integral multiple of a strip on the image.

  The upper side of FIG. 4 shows a state in which an object with a size of two strips exists at the positions of strips B and C. Since the object exists at the positions of strips B and C, the average values of the luminance values of strips B and C become the average value of the luminance values of the object. If the average value of the luminance values of the object is, for example, 100, the average values of the luminance values of the strips A to H in the upper side of FIG. 4 are 10, 100, 100, 40, 50, 60, 70, and 80, respectively. Accordingly, the average value of the luminance values of the image when the object is at the positions of strips B and C, that is, the photosensor detection luminance value, is 64.

  Here, the average value of the luminance values in the strip is hereinafter referred to as a strip luminance average value as appropriate.

  The lower side of FIG. 4 shows a state in which the two-strip object described for the upper side of FIG. 4 has moved in the horizontal direction (X direction) and now exists at the positions of strips F and G. Due to the movement of the object, the strip luminance averages of strips B and C return from 100 to the values 20 and 30 of the original background image, respectively, and the strip luminance averages of strips F and G become the average value of the luminance values of the object. If the average value of the luminance values of the object remains 100, the strip luminance averages of the strips A to H in the lower side of FIG. 4 are 10, 20, 30, 40, 50, 100, 100, and 80, respectively. Accordingly, the average value of the luminance values of the image when the object is at the positions of strips F and G, that is, the photosensor detection luminance value, is 54.

  As described above, the photosensor detection luminance value changes depending on the horizontal position at which the object exists. Therefore, the horizontal position of the object can be detected from the photosensor detection luminance value output by the photosensor 4. Note that, by considering strips arranged in the vertical direction, the position of the object in the vertical direction (Y direction) can be detected from the photosensor detection luminance value on the same principle as described above.

  Next, a method for detecting the position of the object in the depth direction (Z direction) will be described with reference to FIG.

  The upper side of FIG. 5 is the same diagram as the upper side of FIG. 4, and shows a state in which two strip-sized objects exist at the positions of strips B and C. In this case, as described in the upper side of FIG. 4, the strip luminance average values of the strips A to H are 10, 100, 100, 40, 50, 60, 70, and 80, respectively. That is, the photosensor detection luminance value is 64.

  The lower side of FIG. 5 shows a state in which the object with a size of two strips in the upper side of FIG. 5 has moved in the direction of the camera 1 (photosensor 4), that is, the Z direction. In the lower side of FIG. 5, the size of the object on the image has become four strips as a result of the movement in the Z direction, and the four-strip object exists at the positions of strips B to E. Now, assuming that the average value of the luminance values of the object is 100, the strip luminance averages of the strips A to H in the lower side of FIG. 5 are 10, 100, 100, 100, 100, 60, 70, and 80, respectively. Accordingly, the average value of the luminance values of the image when the object is at the positions of strips B to E, that is, the photosensor detection luminance value, is 78.

  As described above, the photosensor detection luminance value also changes depending on the position in the depth direction (Z direction) where the object exists. Therefore, the position of the object in the depth direction can be detected based on the photosensor detection luminance value output from the photosensor 4.
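  As an illustration only, the three worked values above (64, 54, and 78) can be reproduced with the following sketch; the helper function and the object luminance of 100 are the assumptions already stated in the examples.

def sensor_value(strip_averages, object_luminance, covered):
    # Strips hidden behind the object take on the object's luminance average;
    # the photosensor detection luminance value is the average over all strips.
    strips = list(strip_averages)
    for i in covered:
        strips[i] = object_luminance
    return sum(strips) / len(strips)

background = [10, 20, 30, 40, 50, 60, 70, 80]
print(sensor_value(background, 100, [1, 2]))        # strips B, C   -> 63.75 (about 64)
print(sensor_value(background, 100, [5, 6]))        # strips F, G   -> 53.75 (about 54)
print(sensor_value(background, 100, [1, 2, 3, 4]))  # strips B to E -> 77.5  (about 78)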

  That is, for simplicity of explanation, if only the X direction (horizontal direction), of the X direction and the Y direction (vertical direction), is considered, the strips whose strip luminance average values are changed by the object image change depending on the position of the object in the X direction, and the photosensor detection luminance value changes accordingly.

  Further, the area (size) of the object image changes depending on the position of the object in the Z direction (depth direction), so the number of strips whose strip luminance average values are changed by the object image also changes. In this case, too, the photosensor detection luminance value changes.

  Therefore, the state of the object, that is, the position and area of the object image in the X direction, can be associated with the photosensor detection luminance value, and the map creation unit 16 of FIG. 2 creates a map that associates the state of the object with the photosensor detection luminance value obtained when the object image is in that state.

  FIG. 6 schematically shows a map created by the map creation unit 16. In the map, the photosensor detection luminance value corresponding to the point specified by each position and area is described, with the position and area of the object (image) in the X direction as axes. By using this map, the state of the object in the monitoring space can be detected from the photosensor detection luminance value output by the photosensor 4.

  That is, for example, if the photosensor 4 outputs V1 as the photosensor detection luminance value, it can be detected, according to the map of FIG. 6, that the object is present toward the left of the image with a small area (at a far position). Further, for example, when the photosensor detection luminance value is V2, it can be detected, according to the map of FIG. 6, that the object is present toward the right of the image with a small area (at a far position). Further, for example, when the photosensor detection luminance value is V3, it can be detected, according to the map of FIG. 6, that the object is present in the center with a large area (at a close position).

  The map is created by the map creation unit 16 based on the background image data stored in the background holding unit 12 and the object image data stored in the object holding unit 14, and is supplied to and stored in the map holding unit 17. Each of the background holding unit 12 and the object holding unit 14 can hold a plurality of pieces of image data, and the map creation unit 16 can create as many maps as there are combinations of the background image data stored in the background holding unit 12 and the object image data stored in the object holding unit 14, and hold them in the map holding unit 17.

  Next, a detection process for detecting the state of the object, which is performed by the information processing apparatus 2 in FIG. 2, will be described with reference to the flowchart in FIG.

  When the power of the camera 1 is on, the control unit 23 controls the camera 1 to turn its power off (or put it in the standby state). In step S1, the change detection unit 19 determines whether or not the state of the monitoring space has changed, depending on whether the photosensor detection luminance value supplied from the photosensor 4 via the AD conversion unit 18 has changed. If the change detection unit 19 determines in step S1 that the state in the monitoring space has not changed, the process returns to step S1. On the other hand, if it is determined that the state of the monitoring space has changed, that is, for example, when an object enters the monitoring space and the photosensor detection luminance value changes (for example, by a predetermined threshold or more), the change detection unit 19 supplies a change detection signal indicating that the state in the monitoring space has changed to the map matching unit 20 and the control unit 23, supplies the photosensor detection luminance value to the map matching unit 20, and the process proceeds to step S2.

  The map matching unit 20 receives the change detection signal from the change detection unit 19 and, in step S2, determines whether or not a map has been created, that is, whether or not a map is stored in the map holding unit 17. If it is determined in step S2 that a map has been created, the process proceeds to step S3, and the map matching unit 20 estimates the position of the object that has entered the monitoring space from the photosensor detection luminance value supplied from the change detection unit 19 and the map stored in the map holding unit 17.

  Here, the object position estimation processing performed in step S3 of FIG. 7 will be described with reference to the flowchart of FIG.

  First, in step S31, the map matching unit 20 acquires the photosensor detection luminance value supplied from the change detection unit 19, and proceeds to step S32.

  In step S32, the map matching unit 20 reads a map from the map holding unit 17, and proceeds to step S33.

  In step S33, the map matching unit 20 compares the photosensor detection luminance value supplied from the change detection unit 19 with the luminance values described in the map, searches for a luminance value on the map that matches the photosensor detection luminance value, and the process proceeds to step S34. Here, the luminance value on the map that matches the photosensor detection luminance value may be the one luminance value on the map that is closest to the photosensor detection luminance value and whose difference from it is within a predetermined threshold, or may be one or more luminance values whose differences from the photosensor detection luminance value are within a predetermined threshold.

  In step S34, the map matching unit 20 determines whether there is a luminance value on the map that matches the photosensor detection luminance value. If it is determined in step S34 that there is a luminance value on the map that matches the photosensor detection luminance value, the process proceeds to step S35, and the map matching unit 20 recognizes the position and area in the X direction of the object on the image specified from the column in which the luminance value on the map matching the photosensor detection luminance value is described. In step S35, the map matching unit 20 further estimates, from the recognized position and area in the X direction, the position of the object that has entered the monitoring space (for example, the position in the horizontal direction and the depth direction), supplies it to the position data holding unit 21 for storage, and the process returns. Note that, in step S35, the position and area in the X direction of the object on the image can also be supplied to the position data holding unit 21 as they are.

  On the other hand, if it is determined in step S34 that there is no luminance value on the map that matches the photosensor detection luminance value, the process proceeds to step S36, where the map matching unit 20 outputs to the control unit 23 an error signal indicating that no luminance value matching the photosensor detection luminance value is on the map, and the process returns.
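  As an illustration only, the matching performed in steps S33 to S36 could be sketched as below; the representation of the map as a dictionary keyed by (position, area) and the threshold value are assumptions made for this example.

def match_map(map_values, sensor_value, threshold=5.0):
    # Find the registered luminance value closest to the photosensor detection
    # luminance value; accept it only if the difference is within the threshold.
    best_point, best_diff = None, None
    for (position, area), luminance in map_values.items():
        diff = abs(luminance - sensor_value)
        if best_diff is None or diff < best_diff:
            best_point, best_diff = (position, area), diff
    if best_diff is not None and best_diff <= threshold:
        return best_point      # position and area of the object on the image
    return None                # no match: corresponds to the error signal of step S36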

  Here, the map matching unit 20 will be described in detail with reference to FIG. 9. FIG. 9 is a block diagram illustrating a detailed configuration example of the map matching unit 20. In FIG. 9, it is assumed that the photosensor 4 is composed of three photosensors that detect the brightness of each of the RGB (Red, Green, Blue) colors, and that there is also a map for each of the R, G, and B colors.

  The map matching unit 20 is supplied, from the change detection unit 19, with the R value, G value, and B value corresponding to the brightness of the R, G, and B light output from the photosensor 4 as the photosensor detection luminance values, and is supplied with the maps from the map holding unit 17. The map matching unit 20 searches for a column of luminance values (R value, G value, B value) on the maps that matches the photosensor detection luminance values, and recognizes the position and area in the X direction of the object on the image specified by that column. Further, the map matching unit 20 estimates and outputs the position of the object in the monitoring space from the position and area in the X direction of the object on the image.

  In FIG. 9, the RGB photosensor detection luminance value data 31 to 33 (R value 31, G value 32, B value 33) are supplied from the change detection unit 19 to the position analysis units 37 to 39 of the map matching unit 20, respectively. The RGB maps 34 to 36 are supplied from the map holding unit 17 to the position analysis units 37 to 39 of the map matching unit 20, respectively. The position analysis units 37 to 39 search for luminance values (R value, G value, B value) on the maps 34 to 36 that match the photosensor detection luminance value data 31 to 33, respectively, and output the position and area in the X direction of the object on the image specified by the column of each luminance value to the object position estimation unit 40. The object position estimation unit 40 estimates the position of the object in the monitoring space from the positions and areas in the X direction of the object on the image from each of the position analysis units 37 to 39, and outputs it to the estimation result output unit 41. The estimation result output unit 41 outputs the estimation result of the object position from the object position estimation unit 40 to the position data holding unit 21.

  The position analysis unit 37 searches for a luminance value (R value) on the map 34 that matches the photosensor detection luminance value 31 (R value). As described above, the luminance value to be searched for is a luminance value on the map 34 whose difference from the photosensor detection luminance value 31 is within a predetermined threshold (hereinafter referred to as a search luminance value as appropriate). The smaller the magnitude of the difference between the search luminance value and the photosensor detection luminance value, the higher the reliability of the position of the object in the monitoring space estimated from the position and area in the X direction of the object on the image specified by the column of the search luminance value on the map 34. Therefore, the position analysis unit 37 can obtain, in addition to the position and area in the X direction of the object on the image specified by the column of the search luminance value on the map 34, the reciprocal of the magnitude of the difference between the search luminance value and the photosensor detection luminance value, or the like, as the reliability, and supply it to the object position estimation unit 40. The same applies to the position analysis units 38 and 39.

  Further, the object position estimation unit 40 can estimate the position of the object in the monitoring space by weighting the positions and areas in the X direction of the object on the image supplied from the position analysis units 37 to 39 according to their reliabilities. That is, the object position estimation unit 40 can use, as the final estimation result of the object position, for example, a weighted average, according to the reliabilities, of the positions of the object in the monitoring space estimated from the positions and areas in the X direction of the object on the image supplied from each of the position analysis units 37 to 39. Alternatively, the object position estimation unit 40 can estimate the position of the object in the monitoring space by using only the most reliable output among those of the position analysis units 37 to 39.

  Further, the estimation result output unit 41 can output the reliability to the position data holding unit 21 together with the estimation result of the position of the object.
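  As an illustration only, the reliability-weighted fusion described above could be sketched as follows; the data layout (each channel contributing a (position, area) pair and a reliability) and the numbers in the example are assumptions.

def fuse_estimates(estimates):
    # estimates: list of ((position, area), reliability), one entry per R, G, B analysis.
    total = sum(r for _, r in estimates)
    position = sum(p * r for (p, _), r in estimates) / total
    area = sum(a * r for (_, a), r in estimates) / total
    return position, area

# Example with three channel estimates of differing reliabilities.
print(fuse_estimates([((2.0, 3.0), 0.5), ((2.5, 3.5), 0.3), ((1.8, 2.9), 0.2)]))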

  Returning to FIG. 7, after the process of step S3, the process proceeds to step S4, where it is determined whether or not the object position estimation process by the map matching unit 20 has succeeded. If it is determined in step S4 that the process has succeeded, that is, if the control unit 23 has not received an error signal from the map matching unit 20, the process proceeds to step S5, where the position output unit 22 outputs the position of the object in the monitoring space estimated in step S3 and stored in the position data holding unit 21, and the process proceeds to step S6. Here, when the position of the object is output to a monitor, the position of the object can be displayed on the monitor in step S5 in, for example, a form as shown in FIG.

  When the information processing apparatus 2 has an operator, the operator can make an input indicating whether or not the estimation process in step S3 has succeeded, and the determination in step S4 can be made based on that input from the operator. Further, the determination in step S4 as to whether the estimation process in step S3 has succeeded can also be made depending on whether or not the locus of the positions of the object obtained by the processing of step S3 performed so far is unnatural (for example, whether or not the locus of the object is discontinuous).

  In step S6, the control unit 23 determines whether or not there is a request to end the detection process of FIG. In step S6, when it is determined that there is a termination request, that is, for example, when a signal indicating a termination request is transmitted from the information terminal device 6 via the network 5, the detection process is terminated. If it is determined in step S6 that there is no termination request, the process returns to step S1, and the same processing is repeated thereafter.

  On the other hand, if it is determined in step S2 that a map has not been created, or if it is determined in step S4 that the estimation process in step S3 has not succeeded (for example, the control unit 23 receives an error from the map matching unit 20). When a signal is received), the process proceeds to steps S7 to S10 in sequence, and a new map is created (or updated).

  That is, in step S7, the control unit 23 controls the camera 1, thereby turning the camera 1 from the off state to the on state (or returning it from the standby state), and imaging of the monitoring space is started. The image data output by the camera 1 when the camera 1 starts imaging is supplied to the background image creation unit 11, the object extraction unit 13, and the luminance average processing unit 15. The image output from the camera 1 can also be transmitted to the information terminal device 6 via the network 5 and displayed.

  After the processing in step S7, the process proceeds to step S8, and a map is created using image data including the object captured by the camera 1.

  Here, with reference to FIGS. 10 to 12, the map creation process performed in step S8 of FIG. 7 will be described.

  FIG. 10 shows a configuration example of the map creation unit 16 of FIG. Here, it is assumed that background image data is already stored in the background holding unit 12 and object image data is already stored in the object holding unit 14.

  The map creation unit 16 creates a map based on the background image data stored in the background holding unit 12 and the object image data stored in the object holding unit 14 and supplies the map to the map holding unit 17 for storage.

  That is, the map creation unit 16 includes a background strip luminance average calculation unit 51 that calculates a strip luminance average value for each strip of the background image data, an object luminance average calculation unit 52 that calculates the luminance average value of the object image data, a position replacement luminance calculation unit 53 that performs processing on the assumption that the object has moved in the horizontal or vertical direction in the monitoring space, an object area changing unit 54 that performs processing on the assumption that the object has moved in the depth direction (Z direction) in the monitoring space, and a map creation processing unit 55 that converts these calculation results into a map.

  The background strip luminance average calculation unit 51 divides the background image data supplied from the background holding unit 12 into strips in the horizontal direction (X direction) or the vertical direction (Y direction), and calculates a strip luminance average value for each strip. The position replacement luminance calculation unit 53 is supplied. When the background image is divided into strips, the direction in which the strips are arranged is the direction in which the position of the object is detected. Here, as described in FIGS. 3 to 5, the background image is divided into strips arranged in the X direction, and the horizontal position of the object is detected.

  The object luminance average calculation unit 52 calculates, from the object image data supplied from the object holding unit 14, an object luminance average value that is the average value of the luminance and an object area that is a value corresponding to the number of pixels, and outputs them to the object area changing unit 54.

  The position replacement luminance calculation unit 53 replaces the strip luminance average values of the strips constituting the background image, supplied from the background strip luminance average calculation unit 51, with the object luminance average value supplied from the object area changing unit 54, and calculates the average value of the luminance values of the entire background image after the replacement. Further, the position replacement luminance calculation unit 53 outputs the calculated average value of the luminance values of the background image to the map creation processing unit 55.

  The object area changing unit 54 changes the object area supplied from the object luminance average calculating unit 52 and outputs the same to the position replacement luminance calculating unit 53 together with the object luminance average value supplied from the object luminance average calculating unit 52.

  The map creation processing unit 55 creates the map described in FIG. 6 using the output of the position replacement luminance calculation unit 53. That is, a map is created in which luminance values are described in columns (points) specified by each position and area, with the position in the X direction on the image and the object area as axes. The map created by the map creation processing unit 55 is supplied to and stored in the map holding unit 17.

  With reference to the flowchart of FIG. 11, the map creation process performed in step S8 of FIG. 7 will be described.

  In the information processing apparatus 2, first, background image data is acquired in step S41. That is, in step S41, the camera 1 starts imaging in the monitoring space, and supplies image data obtained by the imaging to the background image creation unit 11, the object extraction unit 13, and the luminance average processing unit 15. The background image creation unit 11, the object extraction unit 13, and the luminance average processing unit 15 acquire image data from the camera 1, and proceed from step S41 to step S42.

  In step S42, the background image creation unit 11 creates background image data (image data in the initial state of monitoring) from the image data from the camera 1, supplies it to the background holding unit 12, holds it, and proceeds to step S43. Specifically, the background image creation unit 11 creates background image data by, for example, performing time-average processing on long-time image data output from the camera 1. The background image data can be captured by the operator operating the camera 1 and stored in the background holding unit 12.

  Here, with reference to the flowchart of FIG. 12, the processing of the background image creation unit 11 in the case of creating a background image by averaging image data over a long period will be described specifically. In step S21, the background image creation unit 11 takes in one frame of image data output from the camera 1, and the process proceeds to step S22. In step S22, the background image creation unit 11 temporarily accumulates the image data captured in step S21, and the process proceeds to step S23. In step S23, the background image creation unit 11 determines whether or not image data of a predetermined number of frames has been accumulated. If it is determined that it has not yet been accumulated, the process returns to step S21, and thereafter the same processing is repeated.

Further, if it is determined in step S23 that image data of the predetermined number of frames has been accumulated, the process proceeds to step S24, where the background image creation unit 11 obtains the average value of the accumulated image data of the predetermined number of frames, that is, image data composed of the average values of the pixel values of the pixels at the same positions, supplies that image data to the background holding unit 12 as the background image data, and the process ends.

  Returning to FIG. 11, in step S43, the background strip luminance average calculation unit 51 of FIG. 10 reads the background image created in step S42 and stored in the background holding unit 12, and divides it into strips arranged in the horizontal direction. Further, the background strip luminance average calculation unit 51 calculates the strip luminance average value of each strip, supplies it to the position replacement luminance calculation unit 53, and proceeds to step S44.

  In step S44, the control unit 23 determines whether or not an object exists in the monitoring space. If it is determined in step S44 that no object exists, the process returns to step S44 and the same processing is repeated.

  If it is determined in step S44 that an object exists, that is, if the control unit 23 has received the change detection signal from the change detection unit 19, the process proceeds to step S45, where the object extraction unit 13 extracts the object portion (object image) from the image output from the camera 1. That is, as shown in FIG. 13, the object extraction unit 13 extracts only the object image by subtracting the background image stored in the background holding unit 12 from the image that is output from the camera 1 and contains the object together with the background.

  The object extraction unit 13 supplies the extracted object image to the object holding unit 14 for storage, and the process proceeds from step S45 of FIG. 11 to step S46. In step S46, the object luminance average calculation unit 52 of FIG. 10 reads the object image extracted in step S45 and stored in the object holding unit 14, calculates the object luminance average value, and the process proceeds to step S47. In step S47, the object luminance average calculation unit 52 obtains the area of the object image (object area) and supplies it to the object area changing unit 54 together with the object luminance average value and the object image. The object area changing unit 54 obtains the object area from the object image supplied from the object luminance average calculation unit 52, supplies it to the position replacement luminance calculation unit 53 together with the object image supplied from the object luminance average calculation unit 52, and the process proceeds to step S48.

  Note that the strip luminance average values and the object luminance average value can also be calculated not by the background strip luminance average calculation unit 51 and the object luminance average calculation unit 52 of FIG. 10, but by the luminance average processing unit 15 of FIG. 2, and supplied to the map creation unit 16.

  Then, the process proceeds from step S47 to step S48, where the position replacement luminance calculation unit 53 sets, for example, 1 as an initial value for the variable S representing the area of the object image from the object area changing unit 54, and the process proceeds to step S49. In step S49, the position replacement luminance calculation unit 53 sets, for example, 1 as an initial value for the variable X representing the position, in the X direction on the background image, of the object image from the object area changing unit 54, and the process proceeds to step S50. Here, it is assumed that X = 0 at the left end of the background image and that X increases toward the right end.

  In step S50, the position replacement luminance calculation unit 53 assumes that the object image from the object area changing unit 54 has the area represented by the variable S and that the object image of area S is at the position represented by the variable X on the background image, and recognizes the strips of the background image that overlap the object image of area S as replacement strips whose strip luminance average values are to be replaced.

  Here, for example, a strip whose overlap with the object image amounts to a predetermined percentage of the area of the strip is recognized as a replacement strip.

  In step S50, the position replacement luminance calculation unit 53 further replaces the strip luminance average values of the replacement strips with the object luminance average value from the object area changing unit 54, and the process proceeds to step S51. In step S51, the position replacement luminance calculation unit 53 calculates the average value of the strip luminance average values of all the strips of the background image, including the strips whose strip luminance average values were replaced in step S50, obtains it as the average value of the luminance values of the entire image when the object image of area S is at position X, supplies it to the map creation processing unit 55 together with the area S and the position X, and the process proceeds to step S52.

  In step S52, the map creation processing unit 55 describes (registers) the average luminance value from the position replacement luminance calculation unit 53 in the column of the map specified by the area S and the position X, also supplied from the position replacement luminance calculation unit 53, and the process proceeds to step S53.

  In step S53, the position replacement luminance calculation unit 53 increments the variable X representing the position by a predetermined value (for example, the width of the strip), and proceeds to step S54.

  In step S54, the position replacement luminance calculation unit 53 determines whether or not processing has been performed for all necessary positions X. If it is determined in step S54 that all necessary positions X have not been processed yet, the process returns to step S50, and the same processing is repeated thereafter.

  If it is determined in step S54 that processing has been performed for all necessary positions X, that is, for example, if the position represented by the variable X is beyond the right end of the image, the process proceeds to step S55, and the object area changing unit 54 of FIG. 10 controls the position replacement luminance calculation unit 53 to increment the area S by a predetermined value (for example, the area of one strip), and the process proceeds to step S56.

  In step S56, the object area changing unit 54 determines whether or not the area S of the object image has reached an area covering the entire background image. If it is determined that it has not, the process returns to step S49, and the same processing is repeated thereafter.

  If it is determined in step S56 that the area of the object image has reached an area covering the entire background image, the process proceeds to step S57, where the map creation processing unit 55 supplies the map in which the luminance values have been described so far to the map holding unit 17 for storage, and the process is terminated.
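
  Purely as an illustrative sketch of the loop over the area S and the position X described in steps S48 to S57 (the function name, the assumption of a full-height object, and the 50% overlap criterion are choices made for this example, not details taken from the embodiment):

```python
import numpy as np

def create_map(strip_means, object_mean, image_height, image_width,
               strip_width, overlap_ratio=0.5):
    """Return a {(area S, position X): average luminance} table built by
    sliding a virtual object of growing area across the background and
    replacing the strip averages it covers with the object average.
    `strip_means` holds one background average per vertical strip."""
    strip_area = image_height * strip_width
    table = {}
    S = strip_area                                    # step S48: initial area
    while S <= image_height * image_width:            # steps S55/S56: grow S
        obj_width = max(1, round(S / image_height))   # assume a full-height object
        for X in range(0, image_width, strip_width):  # steps S49/S53: slide X
            means = list(strip_means)
            for i, x0 in enumerate(range(0, image_width, strip_width)):
                x1 = min(x0 + strip_width, image_width)
                overlap = max(0, min(x1, X + obj_width) - max(x0, X))
                if overlap >= overlap_ratio * (x1 - x0):   # step S50: replacement strip
                    means[i] = object_mean
            table[(S, X)] = float(np.mean(means))     # steps S51/S52: register value
        S += strip_area                               # step S55: increase the area
    return table
```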

  In the case described above, the object luminance average value of the object image is kept at the value obtained in step S46 regardless of the area S. Alternatively, the object luminance average value of the object image of each area may be obtained from an image produced by interpolating or thinning out the object image obtained in step S45 so as to enlarge or reduce it to the area S.

  In step S8 of FIG. 7, a map is created as described above. After the map has been created, the process proceeds from step S8 to step S9. In step S9, the map matching unit 20 reads out, via the map creation unit 16 and the map holding unit 17, the object image extracted in step S45 of FIG. 11 and stored in the object holding unit 14, and obtains the position in the X direction and the area of the object image on the image. The map matching unit 20 then obtains the position of the object in the monitoring space from that position and area, and supplies it to the position data holding unit 21 for storage.

  Thereafter, the process proceeds from step S9 to step S10, and the control unit 23 controls the camera 1 to turn the camera 1 from the on state to the off state (or the standby state), and proceeds to step S5.

  In step S5, the position output unit 22 outputs the position of the object stored in the position data holding unit 21 in step S9, the process proceeds to step S6, and the above-described processing is performed thereafter.

  As described above, according to the detection process of FIG. 7, by using the output of the photosensor 4 and the map, the position of the object in the monitoring space can be obtained without using the image captured by the camera 1. Therefore, in principle, the power consumption of the system can be reduced by the amount that would otherwise be consumed by the camera 1.

  When the state in the monitoring space changes, the camera 1 is turned on, and the position of the object can be transmitted from the information processing device 2 to the information terminal device 6 together with the image captured by the camera 1. In this case, the information terminal device 6 displays the position of the object together with the image from the information processing device 2, so that the operator of the information terminal device 6 can instantly recognize the object that caused the change of the state in the monitoring space.

  In the map creation process of FIG. 11, while the position in the X direction and the area of the object image obtained by object extraction are changed, the luminance value described in the map column corresponding to each position and area is obtained by replacing the strip average luminance values of the replacement strips of the background image. The luminance value obtained in this way may therefore contain an error with respect to the average brightness of the actual monitoring space, that is, the output value of the photosensor 4.

  Therefore, the map created as described with reference to FIG. 11 can be updated by the map update process shown in FIG. 14. The update process of FIG. 14 can also be performed as the map creation process in step S8 of FIG. 7.

  The update process of FIG. 14 can be performed while the camera 1 is powered on. Alternatively, the power of the camera 1 is turned on before the update process of FIG. 14 starts and turned off after it ends. Further, in performing the update process of FIG. 14, the photosensor detection luminance value output from the photosensor 4 needs to be supplied to the map creation unit 16 via the AD conversion unit 18 and the change detection unit 19, as indicated by the dotted line in FIG. 2.

  In the map update process, first, in steps S61 and S62, the same process as in steps S41 and S42 of FIG. 11 is performed, and thereby the background image is stored in the background holding unit 12.

  Thereafter, the process proceeds from step S62 to step S63, the map creation unit 16 reads the map stored in the map holding unit 17, and the process proceeds to step S64. If no map is stored in the map holding unit 17, the map creation unit 16 creates, in step S63, a so-called template of the map (a map in which no luminance values are described in its columns).

  In step S64, the change detection unit 19 analyzes the photosensor detection luminance value supplied from the photosensor 4 via the AD conversion unit 18 and determines whether there is a change in the state of the monitoring space. If the change detection unit 19 determines in step S64 that there is no change in the state of the monitoring space, that is, if there is no change in the photosensor detection luminance value, the process returns to step S64 and the same processing is repeated. On the other hand, if it is determined in step S64 that there is a change in the state of the monitoring space, that is, if there is a change in the photosensor detection luminance value, the change detection unit 19 supplies the photosensor detection luminance value to the map creation unit 16, and the process proceeds to step S65.

  In step S65, the object extraction unit 13 extracts the object image included in the image from the camera 1 by subtracting the background image stored in the background holding unit 12 in step S62 from the image output from the camera 1. Then, the data is supplied to and stored in the object holding unit 14.

  After the processing in step S65, the process proceeds to steps S66 and S67 in sequence, and the map creation unit 16 calculates, for the object image extracted in the immediately preceding step S65 and stored in the object holding unit 14, its position on the image output by the camera 1 (here, for example, the position in the X direction) and its area, respectively, and the process proceeds to step S68.

  In step S68, the map creation unit 16 acquires the photosensor detection luminance value supplied from the change detection unit 19, and proceeds to step S69.

  In step S69, the map creation unit 16 recognizes the column specified by the object position obtained in step S66 and the object area obtained in step S67, and registers (describes) in that column the photosensor detection luminance value acquired in step S68.

  Thereafter, the process proceeds from step S69 to step S70, and the map creation unit 16 performs, as necessary, interpolation processing on the map: for predetermined columns in which no luminance value has yet been described, luminance values are described by interpolation using the luminance values of the columns in which luminance values have already been described. The process then proceeds to step S71. Note that the processing in step S70 (map interpolation processing) can be skipped.

  In step S71, the map creation unit 16 outputs the map to the map holding unit 17, where it is stored by overwriting, and the process proceeds to step S72.

  In step S72, the map creation unit 16 determines whether or not to end the update process of FIG. 14. If it is determined in step S72 that the update process is not to be ended, that is, for example, if the camera 1 remains turned on and the update process can be continued, the process returns to step S64, and the same processing is repeated thereafter.

  In step S72, when it is determined that the update process is to be ended, that is, for example, when the power of the camera 1 is turned off, the update process is ended.

  As described above, a high-accuracy map can be created by describing in the map the luminance values (photosensor detection luminance values) actually obtained by the photosensor 4.
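
  A minimal sketch of the registration performed in steps S65 to S69, assuming the object has already been extracted as a boolean mask and the photosensor value has been digitized; the quantization of the position to strip boundaries, the dictionary representation of the map, and the function name are assumptions of this example.

```python
import numpy as np

def register_photosensor_value(table, object_mask, photosensor_value,
                               strip_width):
    """Describe the measured photosensor luminance in the map column given
    by the current position and area of the object on the image."""
    if not object_mask.any():
        return table                                   # no object in this frame
    area = int(object_mask.sum())                      # step S67: object area
    xs = np.nonzero(object_mask)[1]                    # column indices of object pixels
    position = (int(xs.min()) // strip_width) * strip_width   # step S66: position X
    table[(area, position)] = float(photosensor_value) # step S69: register the value
    return table
```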

Next, the map interpolation processing performed in step S70 of FIG. 14 will be described with reference to FIG. 15.

  For example, assume that an object that moves from left to right enters the monitoring space and that the update process of FIG. 14 is performed.

  Further, it is assumed that, as shown in FIG. 15, luminance values have already been described in the columns of the row of area S1 of the map by an update process performed in the past. That is, since it is assumed here that an object moving from left to right entered the monitoring space and the update process of FIG. 14 was performed on that object, luminance values are described in the columns of the row of area S1 of the map.

  When the update process of FIG. 14 is newly performed, luminance values are described, by this new update process, in the columns of the row of an area S2 on the map, as shown in FIG. 15.

  Here, in FIG. 15, the new update process is performed on an object that moves from left to right on the near side (camera 1 side) relative to the object that was the target of the update process performed in the past. Therefore, the area S2 on the image of the object targeted by the new update process is larger than the area S1 on the image of the object targeted by the past update process.

  Further, in FIG. 15, the section in which luminance values are described by the new update process for the object moving on the near side is shorter than the section in which luminance values were described by the past update process for the object moving on the back side, for the following reason.

  That is, when the object exists on the back side, the area of the object on the image is small, while when the object exists on the near side (camera side), the area of the object on the image becomes large.

  In addition, for example, the map describes a luminance value that represents the average brightness of the monitoring space when the entire object is displayed in the image.

  The distance in the X direction in which the entire object having a large area is displayed on the image is shorter than the distance in the X direction in which the entire object having a small area is displayed on the image.

  For this reason, in FIG. 15, the section in which luminance values are described by the new update process for the object moving on the near side is shorter than the section in which luminance values were described by the past update process for the object moving on the back side.

  In the map interpolation processing in step S70 of FIG. 14, for example, the straight lines connecting each of the two end points of the row of area S1, in which luminance values were described by the past update process, to the corresponding end points of the row of area S2, in which luminance values were described by the new update process, are obtained, together with the triangle enclosed by those straight lines and the X axis representing the position in the X direction.

  Furthermore, with D1 and D2 denoting the intersections of the straight line drawn in the X-axis direction from the apex of the triangle on the side facing the X axis with the row of area S1 and the row of area S2, respectively, the columns on the line segment D1D2 are interpolated in the interpolation processing with, for example, luminance values obtained by linearly interpolating the luminance values described in the columns corresponding to the points D1 and D2.
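
  A rough sketch of the idea behind this interpolation, assuming the map is held as a {(area, position): luminance} dictionary; interpolating every intermediate area row by a per-position linear blend of the S1 and S2 rows is a simplification of the triangle construction described above, and the names and the fixed area step are assumptions of this example.

```python
def interpolate_between_rows(table, s1, s2, positions, area_step):
    """Fill the rows between area s1 and area s2 by linearly blending, at
    each position X, the luminance described in the s1 row and the s2 row."""
    lo, hi = sorted((s1, s2))
    s = lo + area_step
    while s < hi:
        w = (s - s1) / (s2 - s1)                 # interpolation weight between the rows
        for x in positions:
            v1, v2 = table.get((s1, x)), table.get((s2, x))
            if v1 is not None and v2 is not None:
                table[(s, x)] = (1 - w) * v1 + w * v2
        s += area_step
    return table
```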

  Next, a simulation result of detecting the state of the object using the map, that is, the position and area of the object on the image, will be described with reference to FIGS. 16 to 21.

  FIG. 16 shows the map used in the simulation. In FIG. 16, the map takes the position and the area of the object on the image as its coordinates, and in the column specified by each position and area, the average brightness of the monitoring space when an object corresponding to that position and area exists in it, that is, the photosensor detection luminance value, is described.

  FIG. 17 shows the luminance values described in the row of a certain area, that is, in the columns on the line segment AB in the map of FIG. 16. In FIG. 17 (and likewise in FIGS. 18 to 20 described later), the horizontal axis is the position in the X direction on the image, and the vertical axis is the luminance value described in the map.

  In FIG. 17, signal components of R (thin line), G (thick line), and B (dotted line) are shown as luminance values.

  Here, in order to simplify the description, attention is paid only to the R signal component (R value, thin line) of the map of FIG. 17, as shown in the upper side of FIG. 18, and the photosensor 4 is assumed to output, as the photosensor detection luminance value, an R value representing the average brightness of R light. In this case, if, for example, 125 is obtained as the photosensor detection luminance value, the value C on the horizontal axis specified by the column of the map whose luminance value is 125 is detected as the position of the object in the X direction on the image, as shown in the lower side of FIG. 18.

  Now, taking into account the accuracy of the map and of the photosensor 4, suppose that luminance values within a predetermined threshold ε centered on the photosensor detection luminance value are detected from the map. When the photosensor detection luminance value is, for example, 125, the result that the position of the object in the X direction is 26 to 28 is obtained, as shown in the upper side of FIG. 19.

  On the other hand, when the photosensor detection luminance value is 128, for example, as shown in the lower side of FIG. 19, the result that the position of the object in the X direction is 28 to 35 is obtained.

  Here, in the map of FIG. 19, when the portions where the luminance value is near 125 and near 128 are compared, the portion where the luminance value is near 125 has a larger rate of change than the portion where the luminance value is near 128.

  When the luminance value of a portion of the map where the rate of change is large is obtained as the photosensor detection luminance value, the position of the object in the X direction can be specified within a narrow range, for example 26 to 28, as shown in the upper side of FIG. 19. On the other hand, when the luminance value of a portion of the map where the rate of change is small is obtained as the photosensor detection luminance value, the position of the object in the X direction is specified only within a range having a certain width, for example 28 to 35, as shown in the lower side of FIG. 19.

  Therefore, the position of the object in the X direction can be detected with higher accuracy when the rate of change of the luminance value on the map is large than when it is small.
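
  As a sketch only, the lookup that the map matching unit 20 could perform is shown below: all map cells whose described luminance lies within ±ε of the photosensor detection value are collected, and the spread of their X positions gives the (more or less precise) position range discussed above. The dictionary layout and names are assumptions of this example.

```python
def match_brightness(table, sensor_value, epsilon=2.0):
    """Return the map cells within +/- epsilon of the sensor value and
    the range of candidate X positions (a narrow range means high accuracy)."""
    hits = [(area, x) for (area, x), v in table.items()
            if abs(v - sensor_value) <= epsilon]
    if not hits:
        return [], None
    xs = [x for _, x in hits]
    return hits, (min(xs), max(xs))
```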

  FIG. 20 shows maps obtained for monitoring spaces under certain installation conditions #1 and #2. In FIG. 20, the map obtained for the monitoring space under installation condition #1 is indicated by R1 (solid line), and the map obtained for the monitoring space under installation condition #2 is indicated by R2 (dotted line).

  The map obtained in the monitoring space of installation condition #1, indicated by the solid line R1, has, as a whole, a larger rate of change of the luminance value than the map obtained in the monitoring space of installation condition #2, indicated by the dotted line R2. When the photosensor detection luminance value of the photosensor 4 is, for example, 135, the position of the object in the X direction can be specified within the narrow range of 37 to 39 in the monitoring space under installation condition #1, as indicated by H1 in the figure. On the other hand, in the monitoring space of installation condition #2, the position of the object in the X direction is specified only within the range of 26 to 40, which has a certain width, as indicated by H2 in the figure. That is, the position of the object can be detected with higher accuracy in the monitoring space under installation condition #1 than in the monitoring space under installation condition #2.

  FIG. 21 is a schematic diagram in which the detection result of the position in the X direction of the object using the map is superimposed on the actual image including the object.

  In FIG. 21, the person who is the object is moving from left to right in the monitoring space. In the upper side of FIG. 21, the background, which is an image of only the monitoring space, has high contrast as a whole, and the contrast between the background and the object is also high. On the other hand, in the lower side of FIG. 21, the background, which is an image of only the monitoring space, has low contrast as a whole, and the contrast between the background and the object is also low.

  In FIG. 21, the range of positions in the X direction of the object detected using the map is shown without shading. In the case shown in the upper side of FIG. 21, the range of positions where the object moving in the monitoring space exists (ranges A1 to A3 in the figure) is detected with relatively high accuracy. On the other hand, in the case shown in the lower side of FIG. 21, although the range of positions where the object moving in the monitoring space exists (ranges A4 to A6 in the figure) is detected with a relatively wide width, a range containing the object is nevertheless detected.

  Since the detection result of the range of positions where the object exists tends to be less accurate as the width of the range increases, the width can be used as a parameter representing the reliability of the detection result.

  In the map creation process of FIG. 11 described above, for an object at a certain position in the monitoring space, the position and area of the object image on the image output by the camera 1 were obtained, and the map was created by replacing, while changing that position and area, the strip average luminance values of the background image with the object luminance average value and obtaining the average luminance value of the resulting background image.

  On the other hand, in the map creation process (update process) of FIG. 14, the map was created using the photosensor detection luminance values output from the photosensor 4, on the premise that there is an object that moves in the horizontal direction (from left to right) in the monitoring space.

  Also in the map creation process of FIG. 11, as in the case of FIG. 14, it is possible to create a map as shown in FIGS. 22 and 23 on the premise that there is an object that moves in the horizontal direction in the monitoring space.

  That is, FIGS. 22 and 23 show a method of creating a map only from the images output by the camera 1 when there is an object that moves in the horizontal direction in the monitoring space. Here, it is assumed that the background image F0, which is an image of only the monitoring space, is already stored in the background holding unit 12 together with the strip luminance average values of the strips, arranged in the horizontal direction, into which the background image F0 is divided.

  In FIG. 22, the object is moving in the horizontal direction on the back side. In this case, the camera 1 captures the image F1 including the object and outputs the image F1 to the object extraction unit 13. The object extraction unit 13 extracts the object image O1 from the difference between the image F1 including the object and the background image F0, and supplies the object image O1 to the object holding unit 14 for storage. Then, the map creation unit 16 calculates the object position X1 and the object area S1 on the image F1 of the object image O1 stored in the object holding unit 14, and obtains the average value of the luminance values of the object image O1.

  Further, the map creation unit 16 takes the strips of the background image F0 stored in the background holding unit 12 at the position where the object image O1 exists as replacement strips, and replaces the strip average luminance values of the replacement strips with the average luminance value of the object image O1. The map creation unit 16 then calculates the average luminance value of the background image after the strip average luminance values have been replaced, and describes that value in the column specified by the position X1 and the area S1 on the map. By performing the same processing for each position in the X direction of the object moving in the monitoring space, the map creation unit 16 describes luminance values in the columns of the row of area S1 of the map, as shown in FIG. 22.

  FIG. 23 shows a state in which the object is moving in the horizontal direction on the near side (camera 1 side) of the monitoring space compared with the case of FIG. 22. In this case, the camera 1 captures the image F2 including the object and outputs it to the object extraction unit 13. The object extraction unit 13 extracts the object image O2 from the difference between the image F2 including the object and the background image F0, and supplies the object image O2 to the object holding unit 14 for storage. Then, the map creation unit 16 calculates the object position X2 and the object area S2 on the image F2 of the object image O2 stored in the object holding unit 14, and obtains the average luminance value of the object image O2.

  Further, the map creation unit 16 takes the strips of the background image F0 stored in the background holding unit 12 at the position where the object image O2 exists as replacement strips, and replaces the strip average luminance values of the replacement strips with the average luminance value of the object image O2. The map creation unit 16 then calculates the average luminance value of the background image after the strip average luminance values have been replaced, and describes that value in the column specified by the position X2 and the area S2 on the map. By performing the same processing for each position in the X direction of the object moving in the monitoring space, the map creation unit 16 describes luminance values in the columns of the row of area S2 of the map, as shown in FIG. 23.

  In FIG. 22 and FIG. 23, what each column of the map is intended to describe is the photosensor detection luminance value that the photosensor 4 would output when an object corresponding to the position and area specified by that column exists in the monitoring space; in this respect, the method corresponds to the method described above. Also in the cases shown in FIGS. 22 and 23, as in the case described with reference to FIG. 14, the luminance values of the columns of the rows between the areas S1 and S2 can be interpolated from the luminance values described in the columns of the rows of the areas S1 and S2.
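
  A compact sketch of the frame-by-frame procedure of FIGS. 22 and 23 under the same assumptions as the earlier examples (NumPy arrays, a fixed difference threshold, replacement of a strip when half of it is covered); it fills one row of the map for an object moving horizontally through the monitoring space:

```python
import numpy as np

def fill_row_from_frames(frames, background, strip_means, strip_width,
                         diff_threshold=10, overlap_ratio=0.5):
    """For each frame, extract the object, find its position X and area S,
    replace the covered strips' averages with the object average, and
    register the whole-image average in the (S, X) column of the row."""
    height, width = background.shape[:2]
    row = {}
    for frame in frames:
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        mask = diff.max(axis=-1) > diff_threshold
        if not mask.any():
            continue                                 # no object in this frame
        X = int(np.nonzero(mask)[1].min())           # object position on the image
        S = int(mask.sum())                          # object area on the image
        obj_mean = float(frame[mask].mean())         # object luminance average
        means = list(strip_means)
        for i, x0 in enumerate(range(0, width, strip_width)):
            x1 = min(x0 + strip_width, width)
            if mask[:, x0:x1].sum() >= overlap_ratio * height * (x1 - x0):
                means[i] = obj_mean                  # replacement strip
        row[(S, X)] = float(np.mean(means))
    return row
```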

  After the information processing device 2 of FIG. 2 detects the object state, that is, the position and area of the object on the image, the control unit 23 can control the camera 1 so that the image captured by the camera 1 is optimized. For example, when the object is a person, it is possible to acquire an image of the entire person or an enlarged image of the person by controlling the imaging direction and zoom of the camera 1.

  In the embodiment of FIG. 2, the change detection unit 19 detects a change of the state in the monitoring space based on the output of the photosensor 4. However, as shown in FIG. 24, an infrared sensor 61 and a microwave sensor 62 may be provided, and the change detection unit 19 may detect a change of the state in the monitoring space based on the outputs of the infrared sensor 61 and the microwave sensor 62 in addition to the output of the photosensor 4.

  The above-described series of processes can be realized by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed, from a network or a recording medium, into a computer incorporated in dedicated hardware, or into, for example, a general-purpose personal computer as shown in FIG. 25 that can execute various functions by installing various programs.

  In FIG. 25, a CPU (Central Processing Unit) 71 executes various processes according to a program stored in a ROM (Read Only Memory) 72 or a program loaded from a storage unit 78 into a RAM (Random Access Memory) 73. The RAM 73 also stores, as appropriate, data necessary for the CPU 71 to execute the various processes.

  The CPU 71, ROM 72, and RAM 73 are connected to each other via a bus 74. An input / output interface 75 is also connected to the bus 74.

  To the input/output interface 75 are connected an input unit 76 including a keyboard and a mouse, an output unit 77 including a display (display unit) such as a CRT (Cathode Ray Tube) or an LCD (Liquid Crystal Display) and a speaker, a storage unit 78 including a hard disk and the like, and a communication unit 79 including a modem, a terminal adapter, and the like. The communication unit 79 performs communication processing via a network such as the Internet.

  A drive 80 is also connected to the input/output interface 75 as necessary. A recording medium on which the program of the present invention is recorded is mounted on the drive 80, and a computer program read out from the recording medium is installed in the storage unit 78 as necessary.

  The recording medium includes a magnetic disk 81, an optical disk 82, a magneto-optical disk 83, or a semiconductor memory 84.

  In the present specification, the steps describing the program recorded on the recording medium include not only processes performed in chronological order according to the described order, but also processes that are executed individually without necessarily being processed in chronological order.

  Further, in this specification, the system represents the entire apparatus constituted by a plurality of apparatuses.

  In this embodiment, the position and area of the object on the image are obtained from the image captured by the camera 1 when the map is created. However, if, for each position of the object in the monitoring space, the position and area of the object on the image output by the camera 1 are known in advance, the map can be created from the output of the photosensor 4 alone, that is, without the camera 1.

  In addition, the map may be learned (created or updated) with high accuracy by turning on the camera 1 in advance, or it may be learned every time the camera 1 is turned on, as described with reference to FIG. 14. When the map is learned each time the camera 1 is turned on, the power consumption of the camera 1 can be kept small, and the detection accuracy of the object state increases as the learning progresses.

  Furthermore, a map can be created for each object, for example, for each person wearing different clothes. However, even for persons wearing different clothes, the state of the object can be detected with a single map as long as their brightness is similar.

  The monitoring system of FIG. 1 can be applied to, for example, a system that monitors the intrusion of a suspicious person, and a system that monitors an abnormality of an elderly person or a person who tends to be ill.

FIG. 1 is a diagram showing a configuration example of an embodiment of a monitoring system to which the present invention is applied.
FIG. 2 is a block diagram showing a detailed configuration example of the image capturing device 1A, the information processing device 2, the recording medium 3, and the sensor 4A.
FIG. 3 is a diagram showing a state in which the background image is divided into strips.
FIG. 4 is a diagram explaining the change in the luminance value of the background image caused by the presence of an object.
FIG. 5 is a diagram explaining the change in the luminance value of the background image caused by the presence of an object.
FIG. 6 is a diagram explaining a map.
FIG. 7 is a flowchart explaining the detection process performed by the monitoring system.
FIG. 8 is a flowchart explaining the process of detecting the state of an object.
FIG. 9 is a block diagram showing a configuration example of the map matching unit 20.
FIG. 10 is a block diagram showing a configuration example of the map creation unit 16.
FIG. 11 is a flowchart explaining the process of creating a map.
FIG. 12 is a flowchart explaining the process of creating a background image.
FIG. 13 is a diagram explaining a method of extracting an object.
FIG. 14 is a flowchart explaining the process of updating (creating) a map.
FIG. 15 is a diagram showing a method of map interpolation.
FIG. 16 is a schematic diagram of a map.
FIG. 17 is a diagram showing a row of a certain area of a map.
FIG. 18 is a diagram showing a row of a certain area of a map.
FIG. 19 is a diagram showing a row of a certain area of a map.
FIG. 20 is a diagram showing a row of a certain area of a map.
FIG. 21 is a diagram showing a simulation result.
FIG. 22 is a diagram showing a method of creating a map.
FIG. 23 is a diagram showing a method of creating a map.
FIG. 24 is a block diagram for explaining a method of detecting a change in the monitoring space.
FIG. 25 is a block diagram showing a hardware configuration example of a computer to which the present invention is applied.

Explanation of symbols

1A image capturing device, 1 camera, 2 information processing device, 3 recording medium, 4A sensor, 4 photosensor, 5 network, 6 information terminal device, 11 background image creation unit, 12 background holding unit, 13 object extraction unit, 14 object holding unit, 15 luminance average processing unit, 16 map creation unit, 17 map holding unit, 18 AD conversion unit, 19 change detection unit, 20 map matching unit, 21 position data holding unit, 22 position output unit, 23 control unit, 37 position analysis unit R, 38 position analysis unit G, 39 position analysis unit B, 40 object position estimation unit, 41 estimation result output unit, 61 infrared sensor, 62 microwave sensor, 71 CPU, 72 ROM, 73 RAM, 74 bus, 75 input/output interface, 76 input unit, 77 output unit, 78 storage unit, 79 communication unit, 80 drive, 81 magnetic disk, 82 optical disk, 83 magneto-optical disk, 84 semiconductor memory

Claims (19)

  1. A state detection device that detects the position and area of an object on an image obtained by imaging an object in space,
    Acquisition means for acquiring the brightness signal from brightness detection means that detects the brightness of the space and outputs a brightness signal representing an average value of the brightness;
    Relation information creating means for creating relation information, representing the relationship between the position and area of the object on the image and the brightness signal for the space in which the object exists, by registering the corresponding brightness signal at a point specified from the position and area of the object on the image; and
    State detection means for specifying the position and area of the object on the image based on the brightness signal, of the relation information, that corresponds to the brightness signal acquired from the brightness detection means, and detecting the position of the object in the space from the specified position and area of the object on the image.
  2. The state detection apparatus according to claim 1, wherein the state detection apparatus includes one or more brightness detection units.
  3. Change detection means for detecting whether the brightness signal of the space has changed by a predetermined threshold value or more;
    The state detection device according to claim 1, wherein the state detection means detects the state of the object when a change of the brightness signal of the space equal to or greater than the predetermined threshold is detected.
  4. The state detection apparatus according to claim 1, wherein the relation information creating means creates the relation information using a captured image output by imaging means that captures an image of the space.
  5. The state detection device according to claim 4, further comprising:
    Background image creation means for creating a background image from the captured image; and
    Object image extraction means for extracting an object image, which is an image of the object, using the captured image and the background image.
  6. The state detection apparatus according to claim 5, wherein the relation information creating means obtains a brightness signal representing an average value of the brightness of the space when the object exists in the space based on the background image and the object image, and creates the relation information based on that brightness signal.
  7. The state detection apparatus according to claim 5 , wherein the relationship information creating unit creates the relationship information based on the object image and a brightness signal output from the brightness detection unit.
  8. The state detection apparatus according to claim 5 , wherein the relationship information creating unit further updates the created relationship information.
  9. Change detection means for detecting whether the brightness signal of the space has changed by a predetermined threshold value or more;
    When a change equal to or greater than the predetermined threshold is detected for the brightness signal of the space,
    The imaging means starts imaging of the space,
    The state detection apparatus according to claim 4, wherein the relation information creating means creates the relation information using the captured image.
  10. A state detection method for detecting a position and an area of an object on an image obtained by imaging an object in space,
    An acquisition step of acquiring the brightness signal from brightness detection means that detects the brightness of the space and outputs a brightness signal representing an average value of the brightness;
    A relation information creating step of creating relation information, representing the relationship between the position and area of the object on the image and the brightness signal for the space in which the object exists, by registering the corresponding brightness signal at a point specified from the position and area of the object on the image; and
    A state detection step of specifying the position and area of the object on the image based on the brightness signal, of the relation information, that corresponds to the brightness signal acquired from the brightness detection means, and detecting the position of the object in the space from the specified position and area of the object on the image.
  11. A program for causing a computer to perform state detection processing for detecting the position and area of the object on an image obtained by imaging an object in space,
    An acquisition step of acquiring the brightness signal from brightness detection means that detects the brightness of the space and outputs a brightness signal representing an average value of the brightness;
    A relation information creating step of creating relation information, representing the relationship between the position and area of the object on the image and the brightness signal for the space in which the object exists, by registering the corresponding brightness signal at a point specified from the position and area of the object on the image; and
    A state detection step of specifying the position and area of the object on the image based on the brightness signal, of the relation information, that corresponds to the brightness signal acquired from the brightness detection means, and detecting the position of the object in the space from the specified position and area of the object on the image.
  12. A program recording medium on which a program for causing a computer to perform state detection processing for detecting the position and area of the object on an image obtained by imaging an object in space is recorded,
    An acquisition step of acquiring the brightness signal from brightness detection means that detects the brightness of the space and outputs a brightness signal representing an average value of the brightness;
    A relation information creating step of creating relation information, representing the relationship between the position and area of the object on the image and the brightness signal for the space in which the object exists, by registering the corresponding brightness signal at a point specified from the position and area of the object on the image; and
    A state detection step of specifying the position and area of the object on the image based on the brightness signal, of the relation information, that corresponds to the brightness signal acquired from the brightness detection means, and detecting the position of the object in the space from the specified position and area of the object on the image. A program recording medium on which a program is recorded.
  13. Background image creation means for creating a background image from the captured image output by the imaging means for imaging the space;
    Object image extraction means for extracting an object image that is an image of an object existing in the space, using the captured image and the background image;
    And relation information creating means for creating, based on the object image, relation information representing the relationship between the position and area of the object on the captured image and a brightness signal indicating an average value of the brightness of the space in which the object exists, by registering the corresponding brightness signal at a point specified from the position and the area of the object.
  14. The image processing apparatus according to claim 13, wherein the relation information creating means obtains a brightness signal representing an average value of the brightness of the space when the object exists in the space based on the background image and the object image, and creates the relation information using that brightness signal.
  15. Further comprising acquisition means for acquiring the brightness signal from brightness detection means that detects the brightness of the space and outputs a brightness signal representing an average value of the brightness,
    The image processing apparatus according to claim 13, wherein the relation information creating means creates the relation information based on the object image and the brightness signal output by the brightness detection means.
  16. Change detection means for detecting whether the brightness signal of the space has changed by a predetermined threshold value or more;
    When a change equal to or greater than the predetermined threshold is detected for the brightness signal of the space,
    The imaging means starts imaging of the space,
    The image processing apparatus according to claim 13, wherein the relation information creating means creates the relation information using the captured image.
  17. A background image creating step for creating a background image from a captured image output by an imaging means for imaging the space;
    An object image extraction step of extracting an object image that is an image of an object existing in the space using the captured image and the background image;
    And a relation information creating step of creating, based on the object image, relation information representing the relationship between the position and area of the object on the captured image and a brightness signal indicating an average value of the brightness of the space in which the object exists, by registering the corresponding brightness signal at a point specified from the position and the area of the object.
  18. A background image creating step for creating a background image from a captured image output by an imaging means for imaging the space;
    An object image extraction step of extracting an object image that is an image of an object existing in the space using the captured image and the background image;
    And a relation information creating step of creating, based on the object image, relation information representing the relationship between the position and area of the object on the captured image and a brightness signal indicating an average value of the brightness of the space in which the object exists, by registering the corresponding brightness signal at a point specified from the position and the area of the object.
  19. A background image creating step for creating a background image from a captured image output by an imaging means for imaging the space;
    An object image extraction step of extracting an object image that is an image of an object existing in the space using the captured image and the background image;
    And a relation information creating step of creating, based on the object image, relation information representing the relationship between the position and area of the object on the captured image and a brightness signal indicating an average value of the brightness of the space in which the object exists, by registering the corresponding brightness signal at a point specified from the position and the area of the object. A program recording medium on which a computer-readable program is recorded.
JP2003281478A 2003-07-29 2003-07-29 Status detection apparatus and method, image processing apparatus and method, program, program recording medium, data structure, and data recording medium Expired - Fee Related JP4525019B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2003281478A JP4525019B2 (en) 2003-07-29 2003-07-29 Status detection apparatus and method, image processing apparatus and method, program, program recording medium, data structure, and data recording medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2003281478A JP4525019B2 (en) 2003-07-29 2003-07-29 Status detection apparatus and method, image processing apparatus and method, program, program recording medium, data structure, and data recording medium

Publications (2)

Publication Number Publication Date
JP2005051511A JP2005051511A (en) 2005-02-24
JP4525019B2 true JP4525019B2 (en) 2010-08-18

Family

ID=34266967

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2003281478A Expired - Fee Related JP4525019B2 (en) 2003-07-29 2003-07-29 Status detection apparatus and method, image processing apparatus and method, program, program recording medium, data structure, and data recording medium

Country Status (1)

Country Link
JP (1) JP4525019B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105630450A (en) * 2015-12-28 2016-06-01 宇龙计算机通信科技(深圳)有限公司 Method and device for displaying background image and terminal

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103871186A (en) * 2012-12-17 2014-06-18 博立码杰通讯(深圳)有限公司 Security and protection monitoring system and corresponding warning triggering method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001052277A (en) * 1999-08-05 2001-02-23 Yoshio Masuda Behavior remote monitor system and h system
JP2002345766A (en) * 2001-03-19 2002-12-03 Fuji Electric Co Ltd Condition detector

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09297057A (en) * 1996-03-07 1997-11-18 Matsushita Electric Ind Co Ltd Pyroelectric type infrared-ray sensor and pyroelectric type infrared-ray sensor system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001052277A (en) * 1999-08-05 2001-02-23 Yoshio Masuda Behavior remote monitor system and h system
JP2002345766A (en) * 2001-03-19 2002-12-03 Fuji Electric Co Ltd Condition detector

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105630450A (en) * 2015-12-28 2016-06-01 宇龙计算机通信科技(深圳)有限公司 Method and device for displaying background image and terminal
CN105630450B (en) * 2015-12-28 2018-12-25 宇龙计算机通信科技(深圳)有限公司 Display methods, display device and the terminal of background image

Also Published As

Publication number Publication date
JP2005051511A (en) 2005-02-24

Similar Documents

Publication Publication Date Title
US20170309144A1 (en) Monitoring system for a photography unit, monitoring method, computer program, and storage medium
JP2014002744A (en) Event-based image processing apparatus and method using the same
JP6467787B2 (en) Image processing system, imaging apparatus, image processing method, and program
CN102761706B (en) Imaging device and imaging method
US6606115B1 (en) Method and apparatus for monitoring the thermal characteristics of an image
EP2381419B1 (en) Image capturing apparatus, method of detecting tracking object, and computer program product
CN101212571B (en) Image capturing apparatus and focusing method
US6812835B2 (en) Intruding object monitoring method and intruding object monitoring system
US5479206A (en) Imaging system, electronic camera, computer system for controlling said electronic camera, and methods of controlling same
KR20120072350A (en) Digital image stabilization device
DE60320169T2 (en) Monitoring system and method and associated program and recording medium
US7477781B1 (en) Method and apparatus for adaptive pixel correction of multi-color matrix
US7839444B2 (en) Solid-state image-pickup device, method of driving solid-state image-pickup device and image-pickup apparatus
US8014632B2 (en) Super-resolution device and method
US7384160B2 (en) Automatic focus adjustment for projector
US8179466B2 (en) Capture of video with motion-speed determination and variable capture rate
JP4727117B2 (en) Intelligent feature selection and pan / zoom control
EP2228776B1 (en) Information processing system, information processing apparatus and information processing method, program, and recording medium
WO2016112704A1 (en) Method and device for adjusting focal length of projector, and computer storage medium
US9367734B2 (en) Apparatus, control method, and storage medium for setting object detection region in an image
JP3849645B2 (en) Monitoring device
JP4614653B2 (en) Monitoring device
US6947082B2 (en) Image-taking apparatus and image-taking method
US7304681B2 (en) Method and apparatus for continuous focus and exposure in a digital imaging device
JP4356689B2 (en) Camera system, camera control device, panorama image creation method, and computer program

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20060728

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20090619

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20090728

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20090917

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20100105

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20100226

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20100511

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20100524

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130611

Year of fee payment: 3

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

LAPS Cancellation because of no payment of annual fees