WO2003058971A1 - Unmanned monitoring system - Google Patents

Unmanned monitoring system

Info

Publication number
WO2003058971A1
WO2003058971A1 · PCT/KR2002/000983
Authority
WO
WIPO (PCT)
Prior art keywords
video signal
camera
compression
image
video
Prior art date
Application number
PCT/KR2002/000983
Other languages
French (fr)
Inventor
Hyun-Geun Kim
Original Assignee
Kusung El. & Tel. Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kusung El. & Tel. Co., Ltd. filed Critical Kusung El. & Tel. Co., Ltd.
Priority to AU2002309291A priority Critical patent/AU2002309291A1/en
Publication of WO2003058971A1 publication Critical patent/WO2003058971A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/188 Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/254 Analysis of motion involving subtraction of images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • H04N 5/765 Interface circuits between an apparatus for recording and another apparatus
    • H04N 5/77 Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • H04N 5/91 Television signal processing therefor
    • H04N 5/917 Television signal processing therefor for bandwidth reduction

Definitions

  • the present invention relates to an unmanned monitoring system or a security system, and more particularly to an unmanned monitoring system in which, when a monitoring camera tracks an object by means of a motion vector corresponding to a variation of the obtained image, all the image data for an object monitoring area are stored in various storage media so as to correspond to the variation of the image of the tracked object, and the image is reproduced by combining all the stored image data, thereby reducing the required data storage capacity and enabling monitoring of the actual object.
  • an unmanned monitoring system or a security system refers to a system which conveniently secures and guards any place that needs to be secured, by unmanned picturing of the place, in order to guard various buildings such as banks, public offices, etc., and moreover to secure the property and lives of people.
  • the most distinct characteristic of such a system is that it continuously observes a predetermined region to be secured or guarded by a monitoring camera, reads a corresponding image to store it, or judges whether an emergency occurs and warns of it in case of emergency.
  • the applicable examples of a monitoring camera with this characteristic include a simple observation camera, installed at a front door, for checking whether any person comes in or out and identifying the person without opening the door; a closed-circuit camera used in banks, large stores, prisons, enterprises, public institutions and the like; and a traffic camera for monitoring automobile speed violations, parking violations, traffic volume, etc.
  • a monitoring camera is classified as a camera for monitoring objects in terms of its intended use, as implied by its name, while in a broad sense it belongs to the category of photoelectric conversion cameras, which convert an optical image picked up on an image sensor into an electric signal through a photoelectric conversion process.
  • a monitoring camera operates based on the same image pickup principle as a typical photoelectric conversion camera, such as a video camera or an RGB camera for broadcasting, and shares most of the functions the photoelectric conversion camera has.
  • There are many kinds of monitoring cameras: for example, a monitoring camera whose angle of view and installing position are fixed, a monitoring camera whose view angle is variable but whose installing position is fixed, and a monitoring camera which moves along a predetermined traveling path and has a variable view angle.
  • Examples of earlier applications associated therewith include the Korean Patent Laid-Open Publication No. 90-20717 entitled Moving Monitoring Camera System, the Korean Patent Laid-Open Publication No. 90-18774 entitled Auto Tracking Device for a Photographing System and the Korean Patent Laid-Open Publication No. 91-11623 entitled Object Auto-Tracking Device of a Camcorder.
  • the spatial compression method is intended to perform a conversion of the picture elements (pixels) within a specific frame of a video signal in accordance with a compression algorithm to compress information so that the amount of information required to reproduce the frame is reduced.
  • the temporal domain compression method takes into account the change of information according to a lapse of time.
  • a conventional MPEG encoder controls the quantization of the information of a specific frame so as to modify the degree of spatial compression and thereby conserve memory.
  • Such an encoder functions to detect the motion of an image from frame to frame and control a degree of temporal compression, i.e., a motion vector.
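The frame-to-frame motion detection described above can be illustrated with a minimal sketch. The function name and the exhaustive block-matching search are illustrative assumptions; the MPEG standard does not mandate a particular motion estimation method, only the format of the resulting motion vectors.

```python
import numpy as np

def estimate_motion(prev_block, cur_frame, top, left, search=2):
    """Exhaustive block matching: find the displacement (dy, dx) within
    +/-`search` pixels that minimises the sum of absolute differences
    between `prev_block` and the corresponding block of `cur_frame`."""
    h, w = prev_block.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > cur_frame.shape[0] or x + w > cur_frame.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(cur_frame[y:y + h, x:x + w].astype(int)
                         - prev_block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

# A bright 2x2 patch shifted one pixel to the right between frames.
prev = np.zeros((8, 8), dtype=np.uint8)
prev[3:5, 3:5] = 200
cur = np.zeros((8, 8), dtype=np.uint8)
cur[3:5, 4:6] = 200
vec = estimate_motion(prev[3:5, 3:5], cur, 3, 3)
print(vec)  # -> (0, 1)
```

The displacement found for each block is what the encoder transmits as a motion vector instead of the block's pixel content.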
  • the motion of an object within a screen monitored by a video camera can occur as a result of the movement of the object itself or of the camera.
  • When an image moves, motion information must be extracted to create a motion vector.
  • a conventional system (for example, a system using MPEG-type compression) for performing temporal processing to transmit the motion information requires relatively large memory space and data processing capability.
  • Storing the image data obtained as mentioned above requires a relatively large storage capacity. Therefore, there is a need for a technique which allows a user to selectively use whatever storage medium he or she desires or needs, rather than being limited to one specific medium, and which allows a camera to track an object while the data generated by the tracking process are stored and recorded in an associated storage medium.
  • It is an object of the present invention to provide an unmanned monitoring system in which, when a monitoring camera tracks an object by means of a motion vector corresponding to a variation of the obtained image, all the image data for an object monitoring area are stored in various storage media so as to correspond to the variation of the image of the tracked object, and the image is reproduced by combining all the stored image data, thereby reducing the required data storage capacity and enabling monitoring of the actual object.
  • the present invention provides an unmanned monitoring system, including: a camera having a lens and adapted to convert an optical image for an object inputted through the lens into an electric video signal for pickup to generate a video signal containing a plurality of video images; a video signal processor adapted to delay the video signal obtained from the camera for a predetermined frame unit time period and detect a difference image signal between the previous video signal obtained by the delay operation and a video signal inputted currently to generate a motion vector to attain a temporal compression; compression means adapted to attain a spatial compression for the video signal generated from the camera based on the initial video signal obtained from the camera and the difference image signal detected by the video signal processor under an optional condition to generate a compressed video signal; a camera drive controller adapted to drive the camera to track the object based on the motion vector generated from the video signal processor; and a processor adapted to transmit an instruction for changing a degree of the temporal or spatial compression to the compression means in response to an adjustment indicating signal.
  • a storage medium for storing the compressed video signal generated by the compression means may include an image storage means such as a DVD-RW player, a CD-RW player, a VTR, a VCR, etc. It is preferred that the unmanned monitoring system further include a restoring means adapted to restore a video image based on an optional initial frame and a subsequent difference image signal by accessing data, under an optional condition, stored in a storage means for storing the compressed video signal generated by the compression means.
  • the present invention is intended to store an image picked up based on data generated upon the camera's tracking of an object and restore the image by a reverse processing later so that a consumption amount of a storage medium is reduced.
  • FIG. 1 is a block diagram illustrating the construction of an entire system to which an unmanned monitoring system according to the present invention is applied;
  • FIG.2 is a block diagram illustrating an unmanned monitoring system according to a preferred embodiment of the present invention to be implemented in the entire system shown in FIG. 1;
  • FIG. 3 is a block diagram illustrating an unmanned monitoring system according to another preferred embodiment of the present invention to be implemented in the entire system shown in FIG. 1.
  • Referring to FIG. 2, there is shown a combined configuration of an image processing central processor (no reference numeral attached thereto) and a video camera device 1, which is not shown in FIG. 1.
  • the configuration includes a pan-tilt-zoom (hereinafter, referred to as PTZ) camera 10 consisting of an A/D color space converter 20 and a pan-tilt-zoom mechanism 18, a control panel 30 having a user input 32, a control analyzer 40 (a properly programmed microprocessor having a corresponding memory) and a compression unit 50.
  • the PTZ camera 10 generates a video signal (made of video images containing pixels) for application to the A/D color space converter 20 which, in turn, outputs digitalized chrominance and luminance signals (Cr, Cb and Y) through its output terminal 52.
  • the PTZ camera 10 includes a zoom lens 12 consisting of a focus control mechanism 14 and a zoom control mechanism 16.
  • the PTZ mechanism 18 enables the PTZ camera 10 to perform the panning, tilting and zooming operations by means of an instruction inputted by the user through the control panel 30.
  • the control panel 30 and the control analyzer 40 may preferably be included in a unit based on a single-chip microprocessor, such as the Touch Tracker available from Sensormatic Electronics Corp., Deerfield Beach, Florida, U.S.A.
  • the camera 10, including the lens 12, the PTZ mechanism 18 and the A/D color space converter 20, may preferably be included in an integral and self-contained dome, such as the Speed Dome of Sensormatic Electronics Corp.
  • the compression unit 50 is a typical video compression unit including a compression algorithm, preferably hardware and software implementing the well-known MPEG system described in the MPEG standard.
  • the MPEG standard describes a system for achieving a degree of compression (including spatial and temporal compression).
  • To change a degree of compression, there may be used a system in which the degree of compression is variable.
  • For example, a known system has a compression filter (of a predetermined length, coefficient and type) and controls the length, the coefficient and the type of the filter to change a degree of spatial compression.
  • Such a system can be regarded as an equivalent of the compression unit 50. Since video compression hardware and software are known to those skilled in the art, only the aspects closely related to the present invention will be described hereinafter.
  • the compression unit 50 has an input terminal 53 connected to an output terminal 52 of the A/D color space converter 20 for receiving the digitalized chrominance signal (Cr, Cb) and luminance signals (Y) from the PTZ camera 10 and an input terminal 55 for receiving a motion vector calculated by the control analyzer 40 from its output terminal 54.
  • An input terminal 57 of the compression unit 50 receives an instruction for a degree of spatial compression from an output terminal 56 of the control analyzer 40, and its detailed description will follow.
  • the compression unit 50 has an output terminal 58 for outputting a compressed video signal for transmission through a communication channel.
  • the compression unit 50 preferably includes, as its basic component parts, a subtracter 60, a discrete cosine transform (DCT) unit 62, a quantizer 64, a variable length coder (VLC) 66, a de-quantizer 68, an inverse discrete cosine transform (IDCT) unit 70, an adder 72 and an image storage/predictor 7.
  • the quantizer 64 serves to quantize a discrete cosine transformed signal supplied from the discrete cosine transform unit 62.
  • a degree to which the quantizer 64 attains a spatial compression relative to the supplied discrete cosine transformed signal is a variable.
  • the quantizer 64 has at least two quantization matrices generating different degrees of spatial compression.
  • the writing of a variable to a register 65 through the input terminal 57 of the compression unit 50 selects one of the two quantization matrices.
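The register-driven matrix selection can be sketched as follows. The flat matrix values and the 0/1 register encoding are assumptions for illustration; real MPEG quantization matrices weight each of the 64 DCT coefficients individually.

```python
import numpy as np

# Two hypothetical 8x8 quantization matrices: FINE preserves detail
# (low spatial compression, for a stopped camera) and COARSE discards
# detail (high spatial compression, for a panning/tilting/zooming camera).
FINE = np.full((8, 8), 4)
COARSE = np.full((8, 8), 32)

def quantize(dct_block, register_value):
    """Select a quantization matrix by the value written to the
    register (0 -> FINE, 1 -> COARSE) and quantize the DCT block."""
    matrix = COARSE if register_value else FINE
    return np.round(dct_block / matrix).astype(int)

block = np.full((8, 8), 40.0)  # a stand-in DCT coefficient block
print(quantize(block, 0)[0, 0])  # fine quantization: 40/4 = 10
print(quantize(block, 1)[0, 0])  # coarse quantization: 40/32 ~ 1
```

The coarser matrix maps many coefficients to small values (often zero), which is what makes the subsequent variable-length coding compress the frame harder.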
  • the compression unit 50 compresses information within video frames generated by the video camera 10.
  • Each of the video frames carries images consisting of a number of pixels.
  • a motion vector is created to describe a change in a moving distance of an image or picture from one frame to another frame.
  • the motion vector represents an indication of the motion of the images carried by the video frame.
  • a difference between frames of the video signal generated from the PTZ video camera 10 upon the stopping of the PTZ video camera 10 is smaller than that upon the panning, tilting, zooming or focusing of the camera 10.
  • the eyes of a person can much better distinguish details of an image when the camera 10 is stopped than when it operates. Therefore, in the video compression, the details of the image within each frame must be transmitted more fully when the camera 10 is stopped than when it operates. Namely, when the PTZ video camera 10 is stopped, the degree of spatial compression must always be low. In the case of the preferred processing system described herein, such a spatial compression degree corresponds to a low degree of quantization. In order for a signal to be precisely reproduced when the PTZ camera 10 moves, is zoomed or is focused, the compression operation always requires a transfer of much more information reflecting the change of the image. This requires an even larger bandwidth as compared with when the camera 10 is stopped.
  • An increase in the degree of spatial compression accompanies the panning, tilting, zooming or focusing of the camera 10, eliminating a bandwidth limitation on the temporal compression (i.e., the creation of a motion vector).
  • the control analyzer 40 does not perform a temporal compression (that is, does not calculate a motion vector of the object) while the degree of spatial compression is low. That is, a quantization matrix providing a low degree of quantization is selected by writing a suitable variable value to the register 65. The result is a high degree of detail in the compressed signal transmitted from the output terminal 58 of the compression unit 50.
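The tradeoff the control analyzer applies can be summarised as a small policy function. The dictionary representation and the function name are illustrative assumptions, not the patent's circuitry:

```python
def compression_settings(camera_moving):
    """Compression policy described in the text: a stopped camera gets
    a low degree of quantization (fine spatial detail) and no motion-
    vector calculation; a moving (panning/tilting/zooming/focusing)
    camera gets a high degree of quantization plus temporal
    compression via motion vectors."""
    if camera_moving:
        return {"quantization": "high", "motion_vectors": True}
    return {"quantization": "low", "motion_vectors": False}

print(compression_settings(False))  # {'quantization': 'low', 'motion_vectors': False}
```

In the embodiment of FIG. 2 the "quantization" choice corresponds to the value written to register 65, and "motion_vectors" to whether the control analyzer 40 emits vectors on terminal 54.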
  • the video signal inputted to the input terminal 53 of the compression unit 50 from the output terminal 52 of the A/D color space converter 20 is compressed, according to the MPEG algorithm, using the degree of spatial compression set by the control analyzer 40.
  • This compressed video signal can be used as an output signal from the output terminal 58 of the compression unit 50.
  • This signal is applied to a multiplexer 80 which, in turn, transmits it to a storage device through its output terminal 82 or to the outside via a communication channel.
  • the control panel 30, the control analyzer 40 and the PTZ mechanism 18 constitute a camera control system.
  • When a user instructs the PTZ camera 10 to perform a panning, tilting, zooming or focusing operation through the user input 32, the control panel 30 generates a control signal from its output terminal 31 for application to an input terminal 41 of the control analyzer 40.
  • the control analyzer 40 generates an adjustment indicating signal from its output terminal 42 for application to an input terminal 43 of the PTZ mechanism 18 to allow the camera 10 to perform the panning, tilting, zooming or focusing operation.
  • the control analyzer 40 generates a series of motion vectors in response to the adjustment indicating signal.
  • the motion vector explains how an image created by the camera 10 is changed by an instruction inputted to the control panel 30 from a user.
  • the motion vector is outputted in the format indicated by the MPEG standard to attain the temporal compression, and is stored in a lookup table of a memory within the control analyzer 40.
  • a motion vector indicating a combination of the panning, tilting, zooming and focusing operations is obtained by multiplying together the specific motion vectors associated with the specific panning, tilting, zooming or focusing degrees.
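One way to read the "multiplying" above is as composition of the geometric transforms that each camera operation induces on the image; a pan/tilt is a translation and a zoom is a scaling, and their combination is the matrix product. The homogeneous-coordinate representation and the function names below are illustrative assumptions:

```python
import numpy as np

def pan_matrix(dx, dy):
    """Panning/tilting as a pure translation in homogeneous coordinates."""
    return np.array([[1, 0, dx], [0, 1, dy], [0, 0, 1]], dtype=float)

def zoom_matrix(s):
    """Zooming about the image origin as a uniform scaling."""
    return np.array([[s, 0, 0], [0, s, 0], [0, 0, 1]], dtype=float)

# Combined pan+zoom: the matrix product of the individual transforms.
combined = zoom_matrix(2.0) @ pan_matrix(3, -1)
point = combined @ np.array([10.0, 10.0, 1.0])
print(point[:2])  # pixel (10, 10) maps to (26, 18)
```

From such a combined transform, the per-block motion vectors expected under a given pan/tilt/zoom instruction can be derived without analysing the pixels themselves, which is what lets the control analyzer keep them in a lookup table.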
  • the motion vector is supplied to the multiplexer 80 which, in turn, multiplexes the motion vector and the compressed video signal outputted, as a result of the spatial compression, from the output terminal 58 of the compression unit 50.
  • When the camera control system controls the operation (panning, tilting, zooming or focusing) of the camera 10, the control analyzer 40 outputs an instruction to the compression unit 50 to increase the degree of spatial compression.
  • the control analyzer 40 allows a suitable variable value to be written to the register 65 so that the quantizer 64 is instructed to select a quantization matrix producing a high degree of spatial compression, thereby adjusting its degree of quantization to a high level.
  • the control analyzer 40 outputs an instruction to the compression unit 50 to increase or decrease a degree of spatial compression at an appropriate rate. Accordingly, when the PTZ camera 10 moves, is zoomed or focused with respect to its surroundings, the compression operation aims at the change from frame to frame (temporal compression) rather than the details of respective frames (spatial compression) .
  • the control analyzer 40 interrupts a calculation of the motion vector, which causes a degree of the compression to be adjusted to an appropriate low level.
  • the described system also permits a change in the degree of spatial compression depending on the operation of the video camera. This permits a tradeoff between the degrees of spatial and temporal compression based on previous knowledge of whether the camera is panned, tilted, zoomed or focused.
  • the present system has been described with reference to a system which controls a quantization of the MPEG type compression system to change a degree of spatial compression.
  • the present invention may be implemented in a manner different from the embodiment shown in FIG. 2, another example of which is shown in FIG. 3.
  • FIG. 3 is a block diagram illustrating the construction of an automatic object-sensing/pickup device of a monitoring camera for an unmanned monitoring system according to a preferred embodiment of the present invention.
  • the automatic object-sensing/pickup device includes a lens section 100, an image sensor 110, a preprocessor 120, an object sensor 200, a focus adjusting section 300, an object zooming section 500 and an object sensing/pickup controller 400.
  • the lens section 100 functions to collect an optical image.
  • the image sensor 110 functions to pick up the optical image inputted thereto from the lens section 100 through a photoelectric conversion.
  • the preprocessor 120 serves to receive, as a video signal, an output signal generated from the image sensor 110 and to perform a preprocessing operation such as automatic gain control (AGC).
  • the object sensor 200 receives the video signal generated from the preprocessor 120 to store it in a field memory 210 thereof, obtains a difference image signal between the stored previous video signal and a video signal inputted currently from the preprocessor 120, and senses an area where the pixel value of the difference image signal is relatively large as a target object area where a target object causing a transient data change exists.
  • the focus adjusting section 300 acts to adjust the focus so as to increase the amount of contrast-proportional information in the target object area.
  • the object zooming section 500 acts to zoom in to magnify the target object area after adjusting the focus.
  • the object sensing/pickup controller 400 receives information about the target object area from the object sensor 200 as the object sensor 200 senses the target object area and controls the focus adjusting section 300 and the object zooming section 500 to adjust the focus and magnify the target object area.
  • the object sensor 200 includes a field memory 210 for receiving the video signal generated from the preprocessor 120 and storing it during one field to generate the previous image signal, a difference image calculator 220 for finding a difference between the previous video signal from the field memory 210 and a video signal inputted currently from the preprocessor 120 and taking an absolute value for the difference to calculate a difference image signal, and an object area setting section 230 for setting an area where the pixel value of the difference image signal is relatively large as a target object area where a target object causing a transient data change exists.
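The object sensor's pipeline (field memory, difference image calculator, object area setting section) can be sketched as below. The threshold value and the bounding-box representation of the target object area are illustrative assumptions:

```python
import numpy as np

def target_object_area(prev_field, cur_field, threshold=30):
    """Absolute frame difference followed by object area setting:
    return the bounding box (top, left, bottom, right) of pixels whose
    change exceeds `threshold`, i.e. the area where a transient data
    change suggests a target object; None if nothing changed enough."""
    diff = np.abs(cur_field.astype(int) - prev_field.astype(int))
    ys, xs = np.nonzero(diff > threshold)
    if ys.size == 0:
        return None  # no significant motion in this field
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())

prev = np.zeros((10, 10), dtype=np.uint8)  # previous field from the field memory
cur = prev.copy()
cur[2:5, 6:9] = 120  # an object appears in the upper-right region
print(target_object_area(prev, cur))  # -> (2, 6, 4, 8)
```

The returned area is what the object sensing/pickup controller 400 would pass on to the focus adjusting and zooming sections.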
  • the focus adjusting section 300 includes a bandpass filtering section 310, a focus adjusting motor driver 330, and a focus adjusting controller 320.
  • the bandpass filtering section 310 receives the video signal from the preprocessor 120 to output a bandpass signal filtered by a bandpass filtering process.
  • the focus adjusting motor driver 330 drives a focus adjusting lens moving motor as focus adjusting lens moving means for adjustment of a focus.
  • the focus adjusting controller 320 controls the focus adjusting motor driver 330 to increase a gain of the bandpass signal.
  • the bandpass filtering section 310 includes a low pass filter 311, a first high pass filter 312 and a second high pass filter 313.
  • the low pass filter 311 performs a low passband filtering operation for the video signal inputted from the preprocessor 120 with respect to a first baseband frequency.
  • the first high pass filter 312 performs a high passband filtering operation for an output of the low pass filter 311 with respect to a second baseband frequency lower than the first baseband frequency to generate a first bandpass signal obtained by filtering the video signal.
  • the second high pass filter 313 performs a high passband filtering operation for an output of the low pass filter 311 with respect to a third baseband frequency lower than the first baseband frequency but higher than the second baseband frequency to generate a second bandpass signal obtained by filtering the video signal.
  • the first baseband frequency is 200 MHz
  • the second baseband frequency is 300 MHz
  • the third baseband frequency is 850 MHz.
  • the focus adjusting controller 320 includes a first focus adjusting controller 321 for determining the movement direction of the focus adjusting lens moving motor as it controls the focus adjusting motor driver 330 to increase a gain of the first bandpass signal, and a second focus adjusting controller 322 for performing a focus-in adjustment as it controls the focus adjusting motor driver 330 to increase a gain of the second bandpass signal.
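The focus adjustment amounts to contrast maximisation: the bandpass energy of the image rises as fine detail comes into focus, and the controller keeps driving the lens toward the position that maximises it. The sharpness metric, toy blur model and search strategy below are illustrative assumptions, not the patent's filter circuit:

```python
import numpy as np

def sharpness(image):
    """Crude bandpass-energy measure: mean absolute horizontal pixel
    difference. A well-focused image has more fine detail, hence a
    larger value."""
    return np.abs(np.diff(image.astype(float), axis=1)).mean()

def focus_search(render, positions):
    """Keep the lens position whose rendered image maximises the
    bandpass energy (the focus-in adjustment)."""
    return max(positions, key=lambda p: sharpness(render(p)))

# Toy optics: box blur grows with distance from the true focus position 5.
rng = np.random.default_rng(0)
scene = rng.integers(0, 255, (32, 32)).astype(float)

def render(pos):
    blur = abs(pos - 5) + 1
    kernel = np.ones(blur) / blur
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, scene)

print(focus_search(render, range(10)))  # best focus found at position 5
```

Using two bandpass signals, as the patent does, lets the first controller infer the motor direction from the lower band while the second performs the final focus-in on the higher band; the sketch collapses both into one metric.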
  • the object zooming section 500 includes a zoom motor driver 520 and a zooming controller 510.
  • the zoom motor driver 520 functions to drive a zoom lens moving motor (not shown) as zoom lens moving means for zooming in on an object.
  • the zooming controller 510 functions to zoom in to magnify the target object area when the focus adjusting section 300 completes a focus adjustment.
  • a signal indicated by a reference numeral 82 is stored in a storage medium so that a storage capacity is reduced.
  • an image outputted from the difference image calculator indicated by a reference numeral 220 is stored in a storage medium so that a storage capacity is reduced.
  • in an unmanned monitoring system of the present invention, when a monitoring camera tracks an object by means of a motion vector corresponding to a variation of the obtained image, all the image data for an object monitoring area are stored in various storage media so as to correspond to the variation of the image of the tracked object, and the image is reproduced by combining all the stored image data, thereby reducing the required data storage capacity and enabling monitoring of the actual object.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)

Abstract

Disclosed is an unmanned monitoring system or a security system, and more particularly an unmanned monitoring system in which, when a monitoring camera tracks an object by means of a motion vector corresponding to a variation of the obtained image, all the image data for an object monitoring area are stored in various storage media so as to correspond to the variation of the image of the tracked object, and the image is reproduced by combining all the stored image data, thereby reducing the required data storage capacity and enabling monitoring of the actual object.

Description

UNMANNED MONITORING SYSTEM
TECHNICAL FIELD
The present invention relates to an unmanned monitoring system or a security system, and more particularly to an unmanned monitoring system in which, when a monitoring camera tracks an object by means of a motion vector corresponding to a variation of the obtained image, all the image data for an object monitoring area are stored in various storage media so as to correspond to the variation of the image of the tracked object, and the image is reproduced by combining all the stored image data, thereby reducing the required data storage capacity and enabling monitoring of the actual object.
BACKGROUND ART
In general, an unmanned monitoring system or a security system refers to a system which conveniently secures and guards any place that needs to be secured, by unmanned picturing of the place, in order to guard various buildings such as banks, public offices, etc., and moreover to secure the property and lives of people.
The most distinct characteristic of such a system is that it continuously observes a predetermined region to be secured or guarded by a monitoring camera, reads a corresponding image to store it, or judges whether an emergency occurs and warns of it in case of emergency. The applicable examples of a monitoring camera with this characteristic include a simple observation camera, installed at a front door, for checking whether any person comes in or out and identifying the person without opening the door; a closed-circuit camera used in banks, large stores, prisons, enterprises, public institutions and the like; and a traffic camera for monitoring automobile speed violations, parking violations, traffic volume, etc. Thus, the applicable objects of such a monitoring camera and their functions have been expanded increasingly. Particularly, with the rapid development of techniques regarding automatic control, image recognition and three-dimensional (3D) computer vision, it is expected that an intelligent watch camera will emerge which can automatically track and photograph an object which appears non-periodically, recognize the object, provide information about the recognized object and control the object.
A monitoring camera is classified as a camera for monitoring objects in terms of its intended use, as implied by its name, while in a broad sense it belongs to the category of photoelectric conversion cameras, which convert an optical image picked up on an image sensor into an electric signal through a photoelectric conversion process. Thus, such a monitoring camera operates based on the same image pickup principle as a typical photoelectric conversion camera, such as a video camera or an RGB camera for broadcasting, and shares most of the functions the photoelectric conversion camera has. There are many kinds of monitoring cameras: for example, a monitoring camera whose angle of view and installing position are fixed, a monitoring camera whose view angle is variable but whose installing position is fixed, and a monitoring camera which moves along a predetermined traveling path and has a variable view angle. As users' demand for higher performance and function surges, high performance and multiple functions have been pursued along with miniaturization, light weight and low power consumption. That is, in order to improve the functionality of the monitoring camera, the monitoring camera adopts automation functions represented by an auto white balance (AWB) function for controlling the balance of colors by compensating the color temperature of an image and an auto focusing (AF) function for automatic focusing of the camera, and also selectively adopts functions for adaptively responding to changes of the photographing conditions caused by changes in the climate or the amount of sunshine, or by the dynamic change of an object to be monitored.
Examples of earlier applications associated therewith include Korean Patent Laid-Open Publication No. 90-20717, entitled Moving Monitoring Camera System; Korean Patent Laid-Open Publication No. 90-18774, entitled Auto Tracking Device for a Photographing System; and Korean Patent Laid-Open Publication No. 91-11623, entitled Object Auto-Tracking Device of a Camcorder.
In addition, with the development of monitoring camera control techniques as described above, the techniques for storing the picked-up image have also advanced, aided by the development of data processing techniques. Well-known video compression systems include two basic types: the spatial (space) domain compression method and the temporal (time) domain compression method.

The spatial compression method converts the picture elements (pixels) within a specific frame of a video signal in accordance with a compression algorithm so that the amount of information required to reproduce the frame is reduced. By contrast, the temporal domain compression method takes into account the change of information with the lapse of time.

Accordingly, taking the image changes occurring between frames into account enables a decrease in the amount of information necessary to reproduce a frame. These changes determine the generated motion vector, which is transmitted instead of the actual content of the video frame. A description of the implementation of the spatial and temporal compression methods can be found in the MPEG compression recommendation ISO/IEC 11172-2 (hereinafter referred to as the MPEG standard).
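As an informal illustration of temporal compression, the following Python sketch estimates per-block motion vectors by exhaustive block matching between two frames. It is a toy example for clarity only, not the search strategy of any actual MPEG encoder; the block size and search range are arbitrary assumptions.

```python
import numpy as np

def block_motion_vector(prev, curr, block=8, search=4):
    """Exhaustive block matching: for each block of `curr`, find the offset
    into `prev` (within +/- `search` pixels) minimising the sum of absolute
    differences (SAD). Returns an array of (dy, dx) vectors, one per block."""
    h, w = curr.shape
    vecs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            target = curr[y:y + block, x:x + block].astype(int)
            best_sad, best = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    py, px = y + dy, x + dx
                    # Only consider candidate blocks fully inside `prev`.
                    if 0 <= py <= h - block and 0 <= px <= w - block:
                        ref = prev[py:py + block, px:px + block].astype(int)
                        sad = np.abs(ref - target).sum()
                        if best_sad is None or sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            vecs[by, bx] = best
    return vecs
```

Transmitting such vectors instead of full frame content is what allows the temporal mode to spend far fewer bits on an unchanged or uniformly moving scene.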
The MPEG standard is one of various well-known video processing standards.
Thus, a conventional MPEG encoder controls the quantization of the information of a specific frame so as to modify the degree of spatial compression and conserve memory. Such an encoder also detects the motion of an image from frame to frame and controls the degree of temporal compression, i.e., the motion vector. The motion of an object within a screen monitored by a video camera can occur as a result of the movement of the object itself (for example, a person passing through the camera's field of view) or of the operations (for example, panning, tilting, zooming or focusing) of the camera itself.
When an image moves, motion information must be extracted to create a motion vector.
A conventional system (for example, a system using MPEG-type compression) that performs temporal processing to transmit the motion information requires relatively large memory space and data processing capability. Storing the image data obtained in this way requires a relatively large storage capacity. Therefore, there is a need for a technique that allows a camera to track an object and the data generated during the tracking process to be recorded in an associated storage medium, while allowing the user to select whatever storage medium he or she desires or needs, rather than being limited to one specific medium.
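The storage-saving idea underlying the invention — keeping one full frame and thereafter only inter-frame differences, then restoring the sequence by reverse processing — can be sketched as follows. The function names and the use of plain (un-entropy-coded) signed differences are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def compress_sequence(frames):
    """Store the first frame whole, then only the difference between each
    frame and its predecessor. int16 keeps signed differences of 8-bit data."""
    frames = [np.asarray(f, dtype=np.int16) for f in frames]
    key = frames[0]
    diffs = [b - a for a, b in zip(frames, frames[1:])]
    return key, diffs

def restore_sequence(key, diffs):
    """Rebuild every frame by cumulatively adding the stored differences
    to the initial key frame (the reverse processing)."""
    out = [key]
    for d in diffs:
        out.append(out[-1] + d)
    return out
```

When most of each frame is unchanged, the difference frames are dominated by zeros and compress far better than the full frames would.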
DISCLOSURE OF THE INVENTION
Accordingly, it is an object of the present invention to provide an unmanned monitoring system in which, when a monitoring camera tracks an object by means of a motion vector corresponding to a variation of the obtained image, all the image data for the monitored area are stored in various storage media so as to correspond to the variation of the image of the tracked object, and the image is reproduced by combining all the stored image data, thereby reducing the required data storage capacity and enabling monitoring of the actual object. To achieve the above object, the present invention provides an unmanned monitoring system including: a camera having a lens and adapted to convert an optical image of an object inputted through the lens into an electric video signal to generate a video signal containing a plurality of video images; a video signal processor adapted to delay the video signal obtained from the camera for a predetermined frame-unit time period and to detect a difference image signal between the previous video signal obtained by the delay operation and the currently inputted video signal, so as to generate a motion vector and attain temporal compression; compression means adapted to attain spatial compression of the video signal generated from the camera, based on the initial video signal obtained from the camera and the difference image signal detected by the video signal processor under an optional condition, to generate a compressed video signal; a camera drive controller adapted to drive the camera to track the object based on the motion vector generated by the video signal processor; and a processor adapted to transmit an instruction for changing the degree of temporal or spatial compression to the compression means in response to an adjustment indicating signal.
Preferably, the storage medium for storing the compressed video signal generated by the compression means may include image storage means such as a DVD-RW player, a CD-RW player, a VTR, a VCR, etc. It is also preferred that the unmanned monitoring system further include restoring means adapted to restore a video image, based on an optional initial frame and the subsequent difference image signals, by accessing data stored, under an optional condition, in storage means for the compressed video signal generated by the compression means.
Briefly stated, the technical concept applied to the present invention focuses on using known information about the motion of the video image caused by the operations of the camera itself, so as to reduce the memory overhead and the computation necessary for compression of the video data.

That is, the present invention stores an image picked up on the basis of the data generated while the camera tracks an object, and later restores the image by a reverse process, so that the consumption of the storage medium is reduced.
BRIEF DESCRIPTION OF THE DRAWINGS

Further objects and advantages of the invention can be more fully understood from the following detailed description taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a block diagram illustrating the construction of an entire system to which an unmanned monitoring system according to the present invention is applied;
FIG.2 is a block diagram illustrating an unmanned monitoring system according to a preferred embodiment of the present invention to be implemented in the entire system shown in FIG. 1; and
FIG. 3 is a block diagram illustrating an unmanned monitoring system according to another preferred embodiment of the present invention to be implemented in the entire system shown in FIG. 1.
BEST MODES FOR CARRYING OUT THE INVENTION
The present invention will now be described in detail in connection with preferred embodiments with reference to the accompanying drawings. For reference, like reference characters designate corresponding parts throughout the several views.
FIG. 2 is a block diagram illustrating an unmanned monitoring system according to a preferred embodiment of the present invention to be implemented in the entire system shown in FIG. 1.
Referring to FIG. 2, there is shown a combined configuration of an image processing central processor (no reference numeral attached thereto) and a video camera device 1 which is not shown in FIG. 1. The configuration includes a pan-tilt-zoom (hereinafter, PTZ) camera 10, consisting of an A/D color space converter 20 and a pan-tilt-zoom mechanism 18, a control panel 30 having a user input 32, a control analyzer 40 (a suitably programmed microprocessor with a corresponding memory) and a compression unit 50. The PTZ camera 10 generates a video signal (made up of video images containing pixels) for application to the A/D color space converter 20 which, in turn, outputs digitalized chrominance and luminance signals (Cr, Cb and Y) through its output terminal 52. In addition, the PTZ camera 10 includes a zoom lens 12 consisting of a focus control mechanism 14 and a zoom control mechanism 16.
The PTZ mechanism 18 enables the PTZ camera 10 to perform the panning, tilting and zooming operations in response to instructions inputted by the user through the control panel 30. The control panel 30 and the control analyzer 40 may preferably be included in a single-chip-microprocessor-based unit such as the Touch Tracker available from Sensormatic Electronics Corp., Deerfield Beach, Florida, U.S.A.

Also, the camera 10, including the lens 12, the PTZ mechanism 18 and the A/D color space converter 20, may preferably be housed in an integral, self-contained dome such as the SpeedDome of Sensormatic Electronics Corp.
The compression unit 50 is a typical video compression unit including a compression algorithm, preferably hardware and software implementing the well-known MPEG system described in the MPEG standard. The MPEG standard describes a system for achieving a degree of compression (including spatial and temporal compression).

A system in which the degree of compression can be changed may also be used. For example, a known system may be used which has a compression filter (of a predetermined length, coefficient and type) and which controls the length, the coefficients and the type of that filter to change the degree of spatial compression. Such a system can be regarded as an equivalent of the compression unit 50. Since video compression hardware and software are known to those skilled in the art, only the aspects closely related to the present invention will be described hereinafter.
Further, the compression unit 50 has an input terminal 53, connected to the output terminal 52 of the A/D color space converter 20, for receiving the digitalized chrominance signals (Cr, Cb) and luminance signal (Y) from the PTZ camera 10, and an input terminal 55 for receiving a motion vector calculated by the control analyzer 40 from its output terminal 54.
The generation and purpose of the motion vector will be described hereinafter. An input terminal 57 of the compression unit 50 receives an instruction on the degree of spatial compression from an output terminal 56 of the control analyzer 40; a detailed description will follow. The compression unit 50 has an output terminal 58 for outputting the compressed video signal for transmission through a communication channel.
The compression unit 50 preferably includes, as its basic component parts, a subtracter 60, a discrete cosine transform (DCT) unit 62, a quantizer 64, a variable length coder (VLC) 66, a de-quantizer 68, an inverse discrete cosine transform (IDCT) unit 70, an adder 72 and an image storage/predictor 7. The quantizer 64 quantizes the discrete-cosine-transformed signal supplied from the discrete cosine transform unit 62. The degree to which the quantizer 64 attains spatial compression of the supplied discrete-cosine-transformed signal is variable. To attain the quantization, the quantizer 64 has at least two quantization matrices generating different degrees of spatial compression.

Writing a variable to a register 65 through the input terminal of the compression unit 50 selects one of the two quantization matrices. These components are known to those skilled in the art and are described in detail in an MPEG manual.
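The selection between quantization matrices by a register write can be sketched as follows. The two matrices and the register encoding are illustrative assumptions for this sketch, not the default tables or register layout of any actual MPEG implementation.

```python
import numpy as np

# Hypothetical 8x8 quantization matrices: index 0 = fine (low spatial
# compression, more detail kept), index 1 = coarse (high spatial compression).
Q_FINE = np.full((8, 8), 8)
Q_COARSE = np.full((8, 8), 32)
MATRICES = [Q_FINE, Q_COARSE]

def quantize(dct_block, register):
    """Divide DCT coefficients by the matrix selected by the register value,
    mimicking how a write to a register like register 65 selects the degree
    of spatial compression."""
    return np.round(dct_block / MATRICES[register]).astype(int)

def dequantize(qblock, register):
    """Inverse step performed by the de-quantizer on decode."""
    return qblock * MATRICES[register]
```

With the coarse matrix, small coefficients quantize to zero and vanish from the bitstream; the fine matrix preserves them at the cost of more bits.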
As described above, the MPEG standard, like other compression systems, includes two compression modes, i.e., the spatial and temporal compression modes. In the spatial compression mode, the compression unit 50 compresses the information within the video frames generated by the video camera 10. Each of the video frames carries images consisting of a number of pixels. In the temporal compression mode, a motion vector is created to describe how far an image or picture moves from one frame to another. Thus, the motion vector represents an indication of the motion of the images carried by the video frames. The difference between frames of the video signal generated by the PTZ video camera 10 while the camera is stopped is smaller than while it is panning, tilting, zooming or focusing. Furthermore, a person's eyes can distinguish the details of an image much better when the camera 10 is stopped than when it operates. Therefore, in video compression, the details of the image within each frame must be transmitted more fully when the camera 10 is stopped than when it operates. Namely, when the PTZ video camera 10 is stopped, the degree of spatial compression must always be low; in the preferred processing system described herein, such a spatial compression degree corresponds to a low degree of quantization. For a signal to be precisely reproduced while the PTZ camera 10 moves, zooms or focuses, the compression operation requires the transfer of much more information reflecting the change of the image, which would require an even larger bandwidth than when the camera 10 is stopped. Therefore, an increase in the degree of spatial compression (i.e., an increase in the degree of spatial quantization) accompanies panning, tilting, zooming or focusing of the camera 10, so as to free bandwidth for the temporal compression (i.e., the creation of motion vectors).

Although this results in a less detailed image when the compressed signal is reproduced, it is a permissible approach, since a person's eyes are less sensitive to the details of a moving object than to those of a stationary one.
When the PTZ video camera 10 is stopped, it is focused on an object and the zoom lens 12 is not zoomed. At this point, the control analyzer 40 does not perform temporal compression (that is, it does not calculate a motion vector for the object), while the degree of spatial compression is low. That is, a quantization matrix providing a low degree of quantization is selected by writing a suitable variable value to the register 65. The result is a high degree of detail in the compressed signal transmitted from the output terminal 58 of the compression unit 50. The video signal inputted to the input terminal 53 of the compression unit 50 from the output terminal 52 of the A/D color space converter 20 is compressed, using the degree of spatial compression set by the control analyzer 40, according to the MPEG algorithm. This compressed video signal is used as the output signal of the output terminal 58 of the compression unit 50. This signal is applied to a multiplexer 80 which, in turn, transmits it through its output terminal 82 to a storage device or to the outside via a communication channel.

The control panel 30, the control analyzer 40 and the PTZ mechanism 18 constitute a camera control system. When a user instructs the PTZ camera 10 to perform a panning, tilting, zooming or focusing operation through the user input, the control panel 30 generates a control signal at its output terminal 31 for application to an input terminal 41 of the control analyzer 40. The control analyzer 40 then generates an adjustment indicating signal at its output terminal 42 for application to an input terminal 43 of the PTZ mechanism 18, allowing the camera 10 to perform the panning, tilting, zooming or focusing operation.
The control analyzer 40 generates a series of motion vectors in response to the adjustment indicating signal. The motion vectors describe how the image created by the camera 10 is changed by an instruction inputted to the control panel 30 by the user. The motion vectors are outputted in the format indicated by the MPEG standard to attain the temporal compression, and are stored in a look-up table in a memory within the control analyzer 40.

Accordingly, the look-up table holds a set of specific motion vectors associated with any given degree of panning, tilting, zooming or focusing of the camera 10. A motion vector representing a combination of panning, tilting, zooming and focusing operations is obtained by multiplying together the specific motion vectors associated with the respective panning, tilting, zooming and focusing degrees. The motion vector is supplied to the multiplexer 80 which, in turn, multiplexes it with the compressed video signal outputted, as a result of the spatial compression, from the output terminal 58 of the compression unit 50.
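A minimal sketch of such a look-up table follows. The text describes combining table entries by multiplication; for simplicity this sketch models each entry as a pure image translation, in which case simultaneous commands combine by addition — an assumption made only for illustration, as are the command names and vector values.

```python
# Hypothetical look-up table mapping a camera command to the per-frame global
# motion vector (dx, dy) it induces in the image. Because the camera's own
# motion is known in advance, no motion search over the frames is needed.
PTZ_VECTORS = {
    "pan_right": (-4, 0),   # scene content shifts left when the camera pans right
    "pan_left": (4, 0),
    "tilt_up": (0, 4),
    "tilt_down": (0, -4),
}

def motion_vector_for(commands):
    """Combine the table entries for simultaneously active pan/tilt commands.
    Pure translations combine by vector addition in this simplified model."""
    dx = sum(PTZ_VECTORS[c][0] for c in commands)
    dy = sum(PTZ_VECTORS[c][1] for c in commands)
    return (dx, dy)
```

Looking up the vector instead of computing it from pixel data is precisely what removes the heavy motion-estimation workload while the camera itself is moving.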
When the camera control system controls the operation (panning, tilting, zooming or focusing) of the camera 10, the control analyzer 40 outputs an instruction to the compression unit 50 to increase the degree of spatial compression. In a preferred embodiment, the control analyzer 40 causes a suitable variable value to be written to the register 65, so that the quantizer 64 is instructed to select the quantization matrix producing a high degree of spatial compression, thereby adjusting its degree of quantization to a high level.
As the panning, tilting, zooming or focusing rate of the PTZ camera 10 increases or decreases, the control analyzer 40 instructs the compression unit 50 to increase or decrease the degree of spatial compression at an appropriate rate. Accordingly, when the PTZ camera 10 moves, zooms or focuses with respect to its surroundings, the compression operation targets the change from frame to frame (temporal compression) rather than the details of the respective frames (spatial compression).

When the panning, tilting, zooming or focusing is stopped, the control analyzer 40 interrupts the calculation of the motion vector, which causes the degree of spatial compression to be adjusted to an appropriately low level. The described system thus permits a change in the degree of spatial compression depending on the state of the video camera. This permits a trade-off between the degrees of spatial and temporal compression based on prior knowledge of whether the camera is being panned, tilted, zoomed or focused.
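The trade-off just described — coarser spatial quantization while the camera moves, finer when it is still — can be sketched as a simple mapping from the camera's pan/tilt/zoom rate to an MPEG-style quantizer scale. The linear mapping and all constants below are illustrative assumptions, not values from the disclosure.

```python
def quantizer_scale(ptz_rate, base=4, gain=2, max_scale=31):
    """Map the camera's current pan/tilt/zoom rate to a quantizer scale in the
    MPEG-style range 1..31: a stationary camera (rate 0) gets fine quantization
    (more spatial detail), fast movement gets coarse quantization, leaving
    bandwidth for the motion vectors of the temporal compression."""
    return min(max_scale, base + gain * abs(ptz_rate))
```

A controller like the control analyzer 40 would re-evaluate this each time the adjustment indicating signal changes, writing the result to the quantizer's selection register.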
As mentioned above, the present system has been described with reference to a system which controls the quantization of an MPEG-type compression system to change the degree of spatial compression. The present invention may also be implemented in a manner different from the embodiment shown in FIG. 2; another example is shown in FIG. 3.
FIG. 3 is a block diagram illustrating the construction of an automatic object-sensing/pickup device of a monitoring camera for an unmanned monitoring system according to another preferred embodiment of the present invention.
Referring to FIG. 3, there is shown the automatic object-sensing/pickup device including a lens section 100, an image sensor 110, a preprocessor 120, an object sensor 200, a focus adjusting section 300, an object zooming section 500 and an object sensing/pickup controller 400.
The lens section 100 collects an optical image. The image sensor 110 picks up the optical image inputted from the lens section 100 through photoelectric conversion. The preprocessor 120 receives, as a video signal, the output signal generated by the image sensor 110 and performs preprocessing operations such as automatic gain control (AGC), correction and correlated double sampling (CDS) on the received image signal. The object sensor 200 receives the video signal generated by the preprocessor 120 and stores it in a field memory 210, obtains a difference image signal between the stored previous video signal and the video signal currently inputted from the preprocessor 120, and senses an area where the pixel values of the difference image signal are relatively large as the target object area, i.e., the area where a target object causing a transient data change exists. The focus adjusting section 300 adjusts the focus to increase the amount of contrast-proportional information in the target object area. The object zooming section 500 zooms in to magnify the target object area after the focus is adjusted. The object sensing/pickup controller 400 receives information about the target object area from the object sensor 200 when the object sensor 200 senses it, and controls the focus adjusting section 300 and the object zooming section 500 to adjust the focus and magnify the target object area.

Herein, the object sensor 200 includes the field memory 210, which receives the video signal generated by the preprocessor 120 and stores it during one field to provide the previous image signal; a difference image calculator 220, which finds the difference between the previous video signal from the field memory 210 and the video signal currently inputted from the preprocessor 120 and takes the absolute value of that difference to calculate the difference image signal; and an object area setting section 230, which sets the area where the pixel values of the difference image signal are relatively large as the target object area where a target object causing a transient data change exists.
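The operation of the difference image calculator 220 and the object area setting section 230 can be sketched as follows; the threshold value and the use of a rectangular bounding box for the target object area are illustrative assumptions.

```python
import numpy as np

def detect_target_area(prev, curr, threshold=30):
    """Absolute difference image between the stored previous field and the
    current one; pixels whose change exceeds `threshold` are treated as the
    target object area, returned as a bounding box (y0, x0, y1, x1),
    or None if nothing moved."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    ys, xs = np.nonzero(diff > threshold)
    if ys.size == 0:
        return None
    return (ys.min(), xs.min(), ys.max(), xs.max())
```

The controller 400 would then direct focusing and zooming at the returned region, and only the difference image (largely zeros) needs to be written to storage.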
The focus adjusting section 300 includes a bandpass filtering section 310, a focus adjusting motor driver 330 and a focus adjusting controller 320. The bandpass filtering section 310 receives the video signal from the preprocessor 120 and outputs bandpass signals produced by a bandpass filtering process. The focus adjusting motor driver 330 drives a focus adjusting lens moving motor, serving as focus adjusting lens moving means, to adjust the focus. The focus adjusting controller 320 controls the focus adjusting motor driver 330 so as to increase the gain of the bandpass signals.

At this time, the bandpass filtering section 310 includes a low pass filter 311, a first high pass filter 312 and a second high pass filter 313. The low pass filter 311 performs low-pass filtering of the video signal inputted from the preprocessor 120 with respect to a first baseband frequency. The first high pass filter 312 performs high-pass filtering of the output of the low pass filter 311 with respect to a second baseband frequency lower than the first baseband frequency, generating a first bandpass signal from the video signal. The second high pass filter 313 performs high-pass filtering of the output of the low pass filter 311 with respect to a third baseband frequency lower than the first baseband frequency but higher than the second baseband frequency, generating a second bandpass signal from the video signal. At this time, preferably the first baseband frequency is 200 MHz, the second baseband frequency is 300 MHz and the third baseband frequency is 850 MHz.
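The use of bandpass energy as a focus measure can be sketched as follows. For brevity the sketch applies an FFT mask to one digitized scan line rather than cascading the LPF/HPF stages of the text, and the cutoffs are given as fractions of the sampling rate; both are illustrative simplifications.

```python
import numpy as np

def bandpass_energy(signal, lo, hi):
    """Energy of `signal` in the normalized frequency band [lo, hi).
    A sharply focused image has more mid/high-frequency content, so this
    energy rises as focus improves."""
    signal = np.asarray(signal, dtype=float)
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal))
    mask = (freqs >= lo) & (freqs < hi)
    return float(np.sum(np.abs(spec[mask]) ** 2))
```

Two such measures over different bands play the roles of the first and second bandpass signals: a wide band for coarse direction finding and a narrower high band for fine adjustment.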
Also, the focus adjusting controller 320 includes a first focus adjusting controller 321, which determines the movement direction of the focus adjusting lens moving motor while controlling the focus adjusting motor driver 330 to increase the gain of the first bandpass signal, and a second focus adjusting controller 322, which performs the focus-in adjustment while controlling the focus adjusting motor driver 330 to increase the gain of the second bandpass signal.
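The gain-increasing focus control can be sketched as a hill-climbing search that moves a hypothetical focus motor in whichever direction increases the focus measure, reversing when it drops. The step size, iteration limit and stopping rule are illustrative assumptions.

```python
def autofocus(focus_measure, position=0, step=1, max_iters=50):
    """Hill-climbing focus search: `focus_measure(position)` returns the
    bandpass gain at a given lens position. Step in the current direction
    while the gain improves; on a drop, reverse once; stop when neither
    direction improves (the gain peak, i.e. best focus)."""
    best = focus_measure(position)
    direction = 1
    for _ in range(max_iters):
        cand = position + direction * step
        m = focus_measure(cand)
        if m > best:
            position, best = cand, m
        else:
            direction = -direction        # first bandpass: find the direction
            cand = position + direction * step
            m = focus_measure(cand)
            if m > best:
                position, best = cand, m
            else:
                break                      # second bandpass: peak reached
    return position
```

In the device of FIG. 3, controller 321 corresponds to the direction-finding phase and controller 322 to the final convergence on the peak.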
The object zooming section 500 includes a zoom motor driver 520 and a zooming controller 510. The zoom motor driver 520 drives a zoom lens moving motor (not shown), serving as zoom lens moving means, to zoom in on an object. The zooming controller 510 zooms in to magnify the target object area when the focus adjusting section 300 completes the focus adjustment.
Accordingly, when the technique shown in FIG. 2 is used to implement the system of FIG. 1 according to the present invention, the signal indicated by reference numeral 82 is stored in a storage medium, so that the required storage capacity is reduced. Alternatively, when the technique shown in FIG. 3 is used to implement the system of FIG. 1, the image outputted by the difference image calculator indicated by reference numeral 220 is stored in a storage medium, so that the required storage capacity is reduced.
Industrial Applicability
As described above, according to the unmanned monitoring system of the present invention, when a monitoring camera tracks an object by means of a motion vector corresponding to a variation of the obtained image, all the image data for the monitored area are stored in various storage media so as to correspond to the variation of the image of the tracked object, and the image is reproduced by combining all the stored image data, thereby reducing the required data storage capacity and enabling monitoring of the actual object.
While the present invention has been shown and described with reference to the particular illustrative embodiments, it is not to be restricted by the embodiments but only by the appended claims. It is to be appreciated that those skilled in the art can change or modify the embodiments without departing from the scope and spirit of the present invention.

Claims

1. An unmanned monitoring system, comprising: a camera having a lens and adapted to convert an optical image for an object inputted through the lens into an electric video signal for pickup to generate a video signal containing a plurality of video images; a video signal processor adapted to delay the video signal obtained from the camera for a predetermined frame unit time period and detect a difference image signal between the previous video signal obtained by the delay operation and a video signal inputted currently to generate a motion vector to attain a temporal compression; compression means adapted to attain a spatial compression for the video signal generated from the camera based on the initial video signal obtained from the camera and the difference image signal detected by the video signal processor under an optional condition to generate a compressed video signal; a camera drive controller adapted to drive the camera to track the object based on the motion vector generated from the video signal processor; and a processor adapted to transmit an instruction for changing a degree of the temporal or spatial compression to the compression means in response to an adjustment indicating signal.
2. The unmanned monitoring system according to claim 1, wherein a storage medium for storing the compressed video signal generated by the compression means may include an image storage means such as a DVD-RW player, a CD-RW player, a VTR, a VCR, etc.
3. The unmanned monitoring system according to claim 1, further comprising a restoring means adapted to restore a video image based on an optional initial frame and a subsequent difference image signal by accessing data, under an optional condition, stored in a storage means for storing the compressed video signal generated by the compression means.
PCT/KR2002/000983 2002-01-14 2002-05-24 Unmanned monitoring system WO2003058971A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2002309291A AU2002309291A1 (en) 2002-01-14 2002-05-24 Unmanned monitoring system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR2002/0002041 2002-01-14
KR1020020002041A KR20030061513A (en) 2002-01-14 2002-01-14 An unmanned a monitor a system

Publications (1)

Publication Number Publication Date
WO2003058971A1 true WO2003058971A1 (en) 2003-07-17

Family

ID=19718447

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2002/000983 WO2003058971A1 (en) 2002-01-14 2002-05-24 Unmanned monitoring system

Country Status (3)

Country Link
KR (1) KR20030061513A (en)
AU (1) AU2002309291A1 (en)
WO (1) WO2003058971A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR930019033A (en) * 1992-02-28 1993-09-22 강진구 Surveillance Camera System
KR19990009081U (en) * 1997-08-12 1999-03-05 구자홍 Time-lapse video recording and playback device
KR20000000610A (en) * 1998-06-01 2000-01-15 구자홍 Method for compressively recording intermittent image and method for replaying compressed intermittent image
JP2000115762A (en) * 1998-10-06 2000-04-21 Hitachi Ltd Monitor

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5926209A (en) * 1995-07-14 1999-07-20 Sensormatic Electronics Corporation Video camera apparatus with compression system responsive to video camera adjustment
KR970056955A (en) * 1995-12-29 1997-07-31 김광호 Digital Surveillance Recorder Reflecting Motion Estimation
KR100584537B1 (en) * 1999-10-01 2006-05-30 삼성전자주식회사 Method for storeing data of monitoring camera


Also Published As

Publication number Publication date
AU2002309291A1 (en) 2003-07-24
KR20030061513A (en) 2003-07-22

Similar Documents

Publication Publication Date Title
US20090110058A1 (en) Smart image processing CCTV camera device and method for operating same
JP4235259B2 (en) Video compression equipment
JP3870124B2 (en) Image processing apparatus and method, computer program, and computer-readable storage medium
JP4687404B2 (en) Image signal processing apparatus, imaging apparatus, and image signal processing method
EP1404134B1 (en) Camera-integrated video recording and reproducing apparatus, and record control method thereof
EP1311123B1 (en) Controlling a video camera
EP1441529A1 (en) Image-taking apparatus and image-taking system
JPH11509701A (en) Video compression equipment
EP2311256B1 (en) Communication device with peripheral viewing means
JP2000253386A (en) Control method of video camera for monitor and recorder
US20020114390A1 (en) Image coding apparatus and method of the same
JP2005175970A (en) Imaging system
CN100515036C (en) Intelligent image process closed circuit TV camera device and its operation method
KR100420620B1 (en) Object-based digital video recording system)
WO2003058971A1 (en) Unmanned monitoring system
JP2003134386A (en) Imaging apparatus and method therefor
KR200273661Y1 (en) An unmanned a monitor a system
KR20070045428A (en) An unmanned a monitor a system
KR100391266B1 (en) Method for background setup in object-based compression moving-image
JP2006180200A (en) Monitoring device
JPH09284620A (en) Image pickup device
WO2003052711A1 (en) Method and device for identifying motion
JPH10276359A (en) Tracking device and tracking method
WO2003052712A1 (en) Method and device for automatic zooming

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP