WO2009139723A1 - Method and device for analyzing video signals generated by a moving camera

Method and device for analyzing video signals generated by a moving camera

Info

Publication number
WO2009139723A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
frame
shifting
changes
group
Prior art date
Application number
PCT/SG2008/000188
Other languages
French (fr)
Other versions
WO2009139723A8 (en)
Inventor
Ofer Miller
Original Assignee
Artivision Technologies Pte Ltd
Priority date
Filing date
Publication date
Application filed by Artivision Technologies Pte Ltd filed Critical Artivision Technologies Pte Ltd
Priority to EP08754025A priority Critical patent/EP2289045A1/en
Priority to CN2008801288362A priority patent/CN102203828A/en
Priority to PCT/SG2008/000188 priority patent/WO2009139723A1/en
Priority to AU2008356238A priority patent/AU2008356238A1/en
Publication of WO2009139723A1 publication Critical patent/WO2009139723A1/en
Publication of WO2009139723A8 publication Critical patent/WO2009139723A8/en
Priority to IL207770A priority patent/IL207770A0/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A method is provided for detecting moving objects in a video signal generated by a moving camera. A first plurality of pixels is selected from a first frame, and a second plurality of pixels, comprised in that first plurality, is identified in a preceding frame. Based upon the second plurality of pixels, changes that have occurred in pixels belonging to the first plurality of pixels are identified, and a shifting intensity value is calculated for the pixels for which changes have been identified. Then, a vector associated with the pixels' locations and calculated shifting intensity values is generated, and a connected component comprising a group of pixels from the second plurality of pixels is identified. The group is characterized in that a change in each of its pixels is associated with a change in each of the remaining pixels of that group, and in that the pixels comprised in the group have a distinctive shifting intensity value, indicating a movement of the connected component, which is associated with the moving object, relative to the background shifting caused by the camera movement.

Description

Method and Device for Analyzing Video Signals Generated by a Moving Camera
Field of the Invention
The present invention relates in general to the field of image processing, and in particular to the analysis of video signals which are generated by a moving camera.
Background of the Invention
The use of image processing has been found to be extremely useful in surveillance and security cameras. Many methods are known in the art for detecting motion within a certain area that is covered by one or more fixed cameras. The common way to analyze a video stream is to divide it into a number of frames and, by comparing consecutive frames using change detection algorithms, to eliminate the background and focus on the changes caused by the motion of certain object(s) captured in the video stream. The importance of a reliable computerized system capable of identifying movements lies in saving manpower (e.g. there is no need to monitor each camera) and in overcoming the challenges of human fatigue and human error. Moreover, some of these systems are even capable of indicating movements that are not visible to the human eye.
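The fixed-camera approach described above can be illustrated with a short, non-authoritative sketch in Python with OpenCV; the library choice, function structure and thresholds are assumptions of this sketch and are not taken from any cited disclosure. With a stationary camera the background cancels out in the frame difference, so only moving objects remain:

```python
# Minimal sketch of fixed-camera change detection by frame differencing.
# Thresholds and the minimum-area filter are illustrative values.
import cv2

def detect_motion_fixed_camera(prev_frame, curr_frame, diff_thresh=25, min_area=200):
    """Return bounding boxes of regions that changed between two frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

    # With a stationary camera the background cancels in the difference image,
    # so the remaining high-difference pixels belong to moving objects.
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

    # Group changed pixels into connected regions and keep the significant ones.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = []
    for i in range(1, num):  # label 0 is the (unchanged) background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))
    return boxes
```

This is exactly the baseline that breaks down once the camera itself moves, which motivates the method described below.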
For example, GB 200507525 discloses a security monitoring system that uses video inputs to distinguish objects in motion from stationary objects. In addition, when an object captured in motion becomes stationary, the system may set an alarm. According to this disclosure, the video stream is first processed on a frame-by-frame basis. Each frame is subjected to edge detection processing, and then groups of consecutive frames are compared to determine which of the detected edges persist from frame to frame. Any edges that do not persist are discarded. This removes data related to moving objects in the scene, such as people.
US 2008002771 discloses a video segment which is analyzed to determine whether it displays a scene that is stationary or one that contains motion. When the video segment displays a scene with motion, the segment is further analyzed to determine whether the motion resulted from camera movement or from movement of the object that has been captured. This disclosure refers to two types of movement: the first is a controlled movement, such as panning, tilting, zooming, rotation or forward or backward movement of the camera, while the second is an unstable camera movement. Although both types refer to certain camera movements that affect specific frames, this disclosure is far from providing a solution to the problem of extracting or identifying a moving object by analyzing a video stream created by a moving camera.
The major hurdle to overcome stems from the fact that a moving camera produces changes all over the image. One of the few publications that attempt to solve problems associated with a moving camera is US 2006078162. This publication discloses a system that includes a moveable camera adapted to obtain a sequence of video images of an object and to determine the object area and a background area based on the object border. The border is determined through the use of optical flow estimations or through a user drawing an object border around an object using a selection device, such as a mouse or a joystick. The camera is then moved in order to allow tracking of the specific object based on the object's motion model and the camera's motion model. However, one of the major drawbacks of this solution is that it only allows tracking of certain objects when the objects' borders are well defined.
Summary of the invention
It is therefore an object of the present invention to provide a method for detecting one or more moving objects within a video signal generated by a moving camera. Other objects of the invention will become apparent as the description of the invention proceeds.
According to a first embodiment of the invention, there is provided a method for detecting one or more moving objects within a video signal generated by a moving camera, the method comprising:
(i) providing a video signal comprising a plurality of consecutive frames; (ii) for one of the plurality of consecutive frames, being a first frame, selecting a first plurality of pixels comprised in said first frame, and identifying in a preceding frame being a second frame, at least a second plurality of pixels comprised in the first plurality of pixels; (iii) based on the at least second plurality of pixels, identifying changes that have occurred in pixels that belong to the first frame; (iv) calculating a shifting intensity value for one or more of the pixels for which changes have been identified, wherein the shifting intensity value is based on said changes;
(v) generating a vector for one or more of the pixels for which changes have been identified, wherein the vector is associated with at least the location of the one or more of the pixels for which changes have been identified, and the calculated shifting intensity value thereof; (vi) identifying at least one connected component which comprises at least one group of pixels from among said at least second plurality of pixels and wherein a change in each of the pixels comprised in the at least one group of pixels is associated with a change in each of the remaining pixels of said at least one group of pixels, and wherein the pixels comprised in each of the at least one group has a distinctive shifting intensity value thereby indicating a movement of the at least one connected component relative to background shifting caused by the camera movement; and
(vii) detecting said one or more moving objects within the video signal by associating the at least one connected component therewith.
The term "moving camera", as used herein and throughout the specification and claims, denotes a camera for which there is relative movement between the camera and the area/background being captured in the frame. The relative movement can be due either to the motion of the camera relative to the area or to the motion of the area relative to the camera. Therefore, any reference to movement of the camera, such as when the background shifting is described as being caused by the camera movement, should be understood as possibly referring to the relative movement between the camera and the background. For the convenience of the reader, the camera is usually referred to in this application as the entity that moves relative to the background.
The term "shifting intensity value" as used herein and throughout the specification and claims, is used to denote the value of a parameter assigned to one or more pixels which location(s) in two different frames is/are known. The value of the shifting intensity indicates the change in the pixel(s) location(s) between the two frames.
According to another preferred embodiment of the present invention, the changes determined in step (iii) originated from a movement of the one or more objects in the area captured by the moving camera.
By another embodiment of the present invention, the at least a second plurality of pixels is essentially identical to the first plurality of pixels.
According to still another preferred embodiment of the present invention, the method provided further comprises repeating steps (ii) to (vi), wherein said first and second pluralities of pixels are associated with frames that respectively precede said first and second frames, thereby enabling detection of the one or more moving objects within the video signal.
In accordance with yet another embodiment of the present invention, the method provided further comprises a step of predicting shifting of background pixels and/or movement of said one or more moving objects within future frames, based on information derived from a present frame and its one or more preceding frames.
According to another embodiment of this aspect of the invention, if the predicted shifting of background pixels in a future frame is different from their actual shifting in that frame (which means that an unexpected movement of the camera occurred while taking that frame, e.g. when the camera turns 45°, thereby causing the moving object to change its profile), the method provided by the present invention further comprises a step of re-identifying the at least one connected component by repeating steps (vi) and (vii).
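A minimal sketch of this embodiment, under a simple constant-velocity assumption of my own: the background shifts measured in the most recent frames are extrapolated one frame ahead, and a large gap between the predicted and the actually measured shift is taken as a sign of an unexpected camera movement that triggers re-identification of the connected components. The tolerance value is an assumption, not a value given in the text.

```python
# Hedged sketch of background-shift prediction and of flagging an unexpected
# camera movement. The constant-velocity model and tolerance are illustrative.
import numpy as np

def predict_background_shift(recent_shifts):
    """recent_shifts: list of (dx, dy) background shifts, oldest first."""
    shifts = np.asarray(recent_shifts, dtype=np.float64)
    if len(shifts) < 2:
        return shifts[-1]
    velocity = shifts[-1] - shifts[-2]   # change of the shift per frame
    return shifts[-1] + velocity         # extrapolate one frame ahead

def camera_motion_changed(predicted, actual, tolerance=3.0):
    """True if the measured shift deviates strongly from the prediction."""
    return bool(np.linalg.norm(np.asarray(actual) - np.asarray(predicted)) > tolerance)

# Example usage: if camera_motion_changed(...) is True, steps (vi) and (vii)
# would be repeated to re-identify the connected components.
```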
In accordance with still another embodiment of the present invention, the predicted shifting of background pixels and/or of the movement of the one or more moving objects in future frames is used in identifying the at least a second plurality of pixels from among the first plurality of pixels in the second frame. For example, by knowing the predicted location of the one or more moving objects in a future frame, it is possible to project that knowledge onto the preceding frame(s) once that future frame becomes the present frame.
According to yet another preferred embodiment of the invention, when the one connected component comprises only one group of pixels, the method may further comprise the step of: classifying the one or more moving objects within the video signal based on a relative movement identified between the one group of pixels and the background shifting.
Thus, if for example one group of pixels has been identified where all of the pixels belonging to that group are moving simultaneously and differently from the background movement, it may indicate that the moving object is a vehicle.
In the alternative, the step of classifying the one or more objects may be carried out by comparing the movement of the one connected component with that of objects that are stationary relative to the background (i.e. objects whose changes in location are the same as the changes in the location of the background pixels), as there are cases where it is easier to classify a moving object by comparing it to such a background object than by conducting the comparison with background pixels.
In accordance with still another embodiment of the invention, the at least one connected component comprises at least two groups of pixels, and the method provided further comprises the step of: classifying the one or more moving objects within the video signal according to a relative movement identified between the at least two groups of pixels. For example, when identifying a person waving his hand while walking, the pixels comprising the hand may form one group and the body of that person another group, while the relative movement between the two groups (as the hand moves differently from the body) may provide a better means to classify that connected component (the person).
According to yet another embodiment of the invention, the detection of the moving objects is carried out essentially in a real-time detection process. Preferably, the video signal is a live signal, but as those skilled in the art will appreciate, the same method, mutatis mutandis, may be implemented on any video signal.
By still another embodiment of the invention the method is adapted to receive data which comprises information related to the camera movement (e.g. its velocity and/or direction) and incorporate the received data in the analysis process.
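A hedged sketch of how such externally supplied camera-movement data might be incorporated, assuming it has already been converted into an expected background shift in pixels per frame (that conversion, which depends on camera calibration and geometry, is outside this sketch and is not prescribed by the text):

```python
# Hedged sketch: subtract the expected, camera-induced background shift from the
# measured per-pixel displacements so that only object motion remains.
import numpy as np

def residual_motion(flow, expected_background_shift):
    """flow: HxWx2 per-pixel displacement; expected_background_shift: (dx, dy) in pixels."""
    residual = flow - np.asarray(expected_background_shift, dtype=flow.dtype)
    # Magnitude of what is left after removing the camera-induced shift;
    # large values indicate pixels that moved on their own.
    return np.linalg.norm(residual, axis=2)
```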
In accordance with still another aspect of the present invention there is provided a computer-readable medium comprising instructions that, when executed by a processor, perform a method for establishing a computerized process for detecting one or more moving objects within a video signal generated by a moving camera, which comprises:
(i) receiving a video signal comprising a plurality of consecutive frames; (ii) for one of the plurality of consecutive frames, being a first frame, selecting a first plurality of pixels comprised in that first frame, and identifying in a preceding frame, being a second frame, at least a second plurality of pixels comprised in the first plurality of pixels; (iii) based on the at least second plurality of pixels, identifying changes that have occurred in pixels that belong to the first frame; (iv) calculating a shifting intensity value for one or more of the pixels for which changes have been identified, wherein the shifting intensity value is based on these changes; (v) generating a vector for one or more of the pixels for which changes have been identified, wherein the vector is associated with at least the location of the one or more of the pixels for which changes have been identified, and the calculated shifting intensity value thereof; (vi) identifying at least one connected component which comprises at least one group of pixels from among the at least second plurality of pixels and wherein a change in each of the pixels comprised in the at least one group of pixels is associated with a change in each of the remaining pixels of the at least one group of pixels, and wherein the pixels comprised in each of the at least one group has a distinctive shifting intensity value thereby indicating a movement of the at least one connected component relative to background shifting caused by the camera movement; and
(vii) detecting the one or more moving objects within the video signal by associating the at least one connected component therewith. The computer-readable medium comprising such instructions may be, for example, a CD embedding software which, when inserted in a computer and operated, enables the detection of the one or more moving objects.
In accordance with still another embodiment of the present invention there is provided a computer program product comprising a computer useable medium having computer readable program code embodied therein for detecting one or more moving objects within a video signal generated by a moving camera, the computer program product comprising: (i) computer readable program code for causing the computer to receive a video signal comprising a plurality of consecutive frames; (ii) computer readable program code for causing the computer to select a first plurality of pixels comprised in one of the plurality of consecutive frames being a first frame, and from among the first plurality of pixels to identify at least a second plurality of pixels comprised in a preceding frame, being a second frame;
(iii) computer readable program code for causing the computer to identify based on the at least second plurality of pixels, changes that have occurred in pixels belonging to the first plurality of pixels;
(iv) computer readable program code for causing the computer to calculate a shifting intensity value for one or more of the pixels where changes have been identified, where the shifting intensity value is based upon these changes; (v) computer readable program code for causing the computer to generate a vector for one or more of the pixels for which changes have been identified, and wherein the vector is associated with at least the location of said one or more of the pixels associated with changes that have been identified, and the calculated shifting intensity value thereof; (vi) computer readable program code for causing the computer to identify at least one connected component which comprises at least one group of pixels from among the at least second plurality of pixels and wherein a change in each of the pixels comprised in the at least one group of pixels is associated with a change in each of the remaining pixels of the at least one group of pixels, and wherein the pixels comprised in each of the at least one group has a distinctive shifting intensity value thereby indicating a movement of the at least one connected component relative to background shifting caused by the camera movement; and (vii) computer readable program code for causing the computer to detect the one or more moving objects within said video signal based on the at least one connected component identified.
Brief description of figures
For a more complete understanding of the present invention, reference is made to the following detailed description taken in conjunction with the accompanying drawings, wherein:
Figs. 1A to 1C - present different examples of uses of a moving camera;
Fig. 2 - illustrates a schematic representation of a video signal;
Fig. 3 - demonstrates the first and second plurality of pixels in the example illustrated in Fig. 2; and
Fig. 4 - illustrates another schematic representation of a video signal with complex movement.
Detailed Description of the Invention
A better understanding of the present invention is obtained when the following non-limiting detailed description is considered in conjunction with the figures.
The examples presented in the following description demonstrate certain ways of carrying out embodiments of the present invention, by which a video signal generated by a moving camera is processed in order to detect one or more moving objects within the video signal.
Fig. 1 refers to three scenarios demonstrating the application of the "moving camera" concept according to the present invention, in which:
Fig. 1A illustrates two points of reference, 110 and 120. The first point of reference, 110, is located in camera 112, which is a stationary surveillance camera as known in the prior art, whereas the other reference point, 120, is located in one of the fixed objects comprised in the area captured by the camera, in this example fence 122. The area being captured further comprises a tree 124, a rock 126 and a walking person 128. Since in this example the relative movement between the two points of reference is zero, there is no relative movement between the camera and the background; hence this example is a prior art example which is not encompassed by the present invention.
In Fig. 1B there are again points of reference 130 and 140. Point of reference 130 is located in camera 132 which is airborne on airplane 134, and point of reference 140 is located in one of the fixed objects comprised in the area captured by the camera, in the example in rock 142. The area being captured further comprises a tree 148, a walking person 146 and a driving car 144. In this example there is a relative movement between the camera and the background which results from the movement of the camera, hence this case serves as an example of a moving camera which falls under the definition of a moving camera of the present invention, and the method described by the present invention allows detecting the two moving objects 144 and 146.
Fig. 1C illustrates two points of reference, 150 and 160, wherein the first, 150, is located in camera 152 which is placed on a watchtower, and the latter, 160, is on a boat deck 166 which is included in the area captured by the camera. Point of reference 160 is located in one of the fixed objects on the deck, e.g., anchor 162. The area being captured further comprises two lifebuoys 164 and two walking people 168. In this example there is relative movement between the two points of reference caused by the motion of the sailing boat; hence, although the camera is positioned on top of the tower, it should be considered a moving camera encompassed by the present invention due to the relative motion existing between the camera and the boat. Here again, by the method provided by the present invention, the moving objects, i.e. the people 168 walking on the deck, may be differentiated and detected separately from the moving boat.
In order to better understand the method provided by the present invention, let us consider Fig. 2, which presents a schematic example of frames comprised in a video signal generated by a moving camera, while reviewing various steps of a method exercised in accordance with an embodiment of the present invention. In the first step of the method, a video signal is provided which comprises a plurality of consecutive frames. The video signal of the Fig. 2 example comprises a plurality of N consecutive frames. Frames 210-217 illustrated in this figure are only some of the frames comprised in the video signal. The frames are taken consecutively and describe the time evolution of a falling ball (226). Preferably, but not necessarily, the frames are taken so that the time gap between any two consecutive frames is identical. Moreover, the method should not be understood as being restricted to a given number N of frames, and as will be further discussed, the video signal may be a live broadcast where the value of N may change with time to include additional information and thereby improve the accuracy of the analysis results.
In the next step, a first plurality of pixels comprised in one of the plurality of consecutive frames, being a first frame, is selected, and a second plurality of pixels is identified from among the first plurality of pixels in a preceding frame, being a second frame.
Let us now take an arbitrary choice of frame n = i (214) as our first frame. This frame, where n = i, comprises a table (220), a vase (222), two lamps (224' and 224") and a ball (226). For easier understanding, let us select all the pixels in frame n = i as the first plurality of pixels (shown as the grey area in Fig. 3A). Now let us consider its preceding frame, frame n = i-1 (213). Because of the camera movement, some of the pixels from that first plurality (of frame 214) do not appear in frame n = i-1 (e.g. only a partial view of lamp 224" is shown). Therefore, all the pixels that are shown in frame 214 and can also be identified in frame n = i-1 (213) will be referred to as the second plurality of pixels (the grey area shown in Fig. 3B), and frame 213 will be referred to as the second frame.
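A minimal sketch of this selection step, assuming the camera-induced background shift between the two frames has already been estimated by some means (feature tracking, phase correlation, or otherwise; the description does not prescribe one): the first plurality is taken as every pixel of the first frame, and the second plurality as those of its pixels that remain visible in the preceding frame.

```python
# Hedged sketch of steps (ii)-(iii): mark the pixels of the first frame that are
# still visible in the preceding (second) frame once the background shift is
# accounted for. The shift estimation itself is assumed to happen elsewhere.
import numpy as np

def second_plurality_mask(frame_shape, background_shift):
    """frame_shape: (height, width); background_shift: (dx, dy) in pixels."""
    h, w = frame_shape
    dx, dy = background_shift
    ys, xs = np.mgrid[0:h, 0:w]
    # Back-project each pixel of the first frame into the second frame and keep
    # those whose back-projected location still falls inside the image bounds.
    xb, yb = xs - dx, ys - dy
    return (xb >= 0) & (xb < w) & (yb >= 0) & (yb < h)

# Example: the camera moved left, so the scene content shifted 6 pixels to the right.
mask = second_plurality_mask((480, 640), (6, 0))
```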
In the next step, based on the second plurality of pixels, changes that have occurred in pixels belonging to the first plurality of pixels and included in the second plurality of pixels are identified. Thus, based on the second plurality of pixels (the grey area in Fig. 3B), the changes that had occurred in the pixels belonging to the first plurality of pixels (the grey area in Fig. 3A), are identified.
Let us assume that the changes originated from the movement of the camera. It is easy to see in this example that most of the changes in the pixels are caused by the movement of the camera to the left, as the background is shifted to the right.
For the next step of this embodiment, a shifting intensity value is calculated for one or more of the identified pixels based on the changes determined. The shifting intensity value is a parameter assigned to each identified pixel in the second plurality of pixels, and is calculated from the transition in the respective pixel's location. In our example, all pixels in the second plurality (i.e. included in the grey area of Fig. 3B), except for the pixels comprising the ball (226), will have the same shifting intensity value, which in fact is derived from the velocity at which the camera is moved. Since the ball is the only object that moved independently, the pixels associated therewith will have a different shifting intensity value.
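A hedged sketch of this calculation, using dense optical flow (OpenCV's Farneback implementation, purely as one possible choice that the text does not mandate) to recover each pixel's displacement from the second frame to the first frame and taking its magnitude as the shifting intensity value; all parameter values are illustrative.

```python
# Hedged sketch of step (iv): per-pixel displacement via dense optical flow,
# with its magnitude used as the shifting intensity value.
import cv2
import numpy as np

def shifting_intensity_map(second_frame, first_frame):
    g_prev = cv2.cvtColor(second_frame, cv2.COLOR_BGR2GRAY)
    g_curr = cv2.cvtColor(first_frame, cv2.COLOR_BGR2GRAY)

    # Displacement (dx, dy) of each pixel from the second (preceding) frame to the first frame.
    flow = cv2.calcOpticalFlowFarneback(g_prev, g_curr, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    # Shifting intensity: magnitude of the displacement. Background pixels share
    # roughly one value (the camera-induced shift); the ball's pixels differ.
    return np.linalg.norm(flow, axis=2), flow
```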
Then, a vector is generated for one or more of the identified pixels. The vector is associated with the location of the one or more of the identified pixels and its calculated shifting intensity value. For each pixel in the first plurality (grey area in Fig. 3A) that has an analogous pixel in the second plurality of pixels (grey area in Fig. 3B), a vector is generated, where each such vector comprises data of the respective pixel's location in the first frame and its shifting intensity value as calculated in the previous step.
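A minimal sketch of this vector-generation step under the same assumptions, packing each identified pixel's location in the first frame together with its shifting intensity value into a structured array (one convenient representation among many):

```python
# Hedged sketch of step (v): build (location, shifting intensity) vectors for the
# pixels that belong to the second plurality.
import numpy as np

def build_pixel_vectors(intensity_map, second_plurality_mask):
    ys, xs = np.nonzero(second_plurality_mask)
    vectors = np.zeros(len(ys), dtype=[("y", np.int32), ("x", np.int32),
                                       ("shift", np.float32)])
    vectors["y"], vectors["x"] = ys, xs
    vectors["shift"] = intensity_map[ys, xs]
    return vectors

# Example usage with the earlier sketches: build_pixel_vectors(intensity_map, mask)
```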
For the next step, one (or more) connected component is identified. The example illustrated in Fig. 2 is a rather simple one in several aspects. There is only one moving object (the ball), the ball's movement is homogeneous, and this object does not include parts that move differently from one another. Therefore, this step in the present example includes identifying only one group of pixels associated with a connected component, as the ball's pixels will all be in one group. Finally, the moving object, i.e. the ball, is detected by associating the one connected component therewith.
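A hedged sketch of the connected-component and detection steps, assuming the background shift is approximated by the median flow vector and that pixels whose displacement clearly deviates from it are grouped with OpenCV's connected-components routine; the deviation threshold and minimum area are illustrative assumptions.

```python
# Hedged sketch of steps (vi)-(vii): group pixels with a distinctive shifting
# intensity (relative to the background shift) into connected components and
# report each sufficiently large component as a moving object.
import cv2
import numpy as np

def detect_moving_objects(flow, deviation_thresh=2.0, min_area=50):
    # Median displacement over the frame approximates the camera-induced background shift.
    background_shift = np.median(flow.reshape(-1, 2), axis=0)

    # Pixels whose displacement deviates from the background shift.
    deviation = np.linalg.norm(flow - background_shift, axis=2)
    mask = (deviation > deviation_thresh).astype(np.uint8)

    # Connected components of such pixels correspond to candidate moving objects.
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    objects = []
    for i in range(1, num):  # label 0 is the background-compatible region
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            bbox = (stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP],
                    stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT])
            objects.append({"label": i, "bbox": bbox, "centroid": tuple(centroids[i])})
    return objects, labels, background_shift
```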
According to an embodiment of the invention, steps 2 to 6 may be further repeated, mutatis mutandis, for as long as required. For example, for the first repetition we shall refer to frame n = i-1 (213) as the first frame, whereas frame n = i-2 (212) shall serve as the second frame, so that we can now proceed with the above described steps in detecting the moving ball. The process may be continued quite easily, where each time a new first frame is defined and its preceding frame is considered to be the second frame for the process described above.
Let us now consider another embodiment of the invention, whereby the method provided further comprises a step of predicting the background shifting and the movement of the one or more moving objects. Let us return to our non-limiting example of Fig. 2, where the first frame is frame n = i, and the above described steps are applied to frame n = i and its preceding frames. Based on the available information regarding the background shifting and the movement of the moving object derived from the processed frames, it is possible to deduce the shifting of the background (or of stationary objects included therein, such as table 220 and vase 222) and the location of the moving object (ball 226) in frame n = i+1. The predicted background shifting and the predicted location of the moving objects may be used to optimize the identification in step (ii) when taking frame n = i+1 as the first frame.
The predicted background shifting can further be used as an indication of a change in the movement of the camera. For example, if the camera is mounted on a vehicle that moves in a straight direction and all of a sudden the vehicle takes a sharp turn, the actual background shifting will be substantially different from the predicted one, and based on this difference one may derive some conclusions regarding the camera movement. Certain changes in the camera movement cause changes in the appearance of the moving objects (e.g. from having the front view of the moving object to having its side view).
As explained above, the example illustrated in Fig. 2 is a rather simplified example and is far from being a true representative of the full potential of the present invention. To appreciate some of the additional potential, let us consider Fig. 4A, in which the moving objects amount to more than the single ball of the example of Fig. 2. In Fig. 4A three frames (410-412) are shown out of a complete video signal. In these frames, one may observe a road (425), a person (420) and a bird (430). The camera in this part of the video stream is moving to the right, causing the background to shift to the left. In addition, the person is walking to the right while waving his hand, and a bird is flying in the sky.
Let us review step (vi) one more time. In this step, at least one connected component is identified. The connected component is defined as one which comprises at least one group of pixels from among the second plurality of pixels, wherein a change in each of the pixels comprised in the at least one group of pixels is associated with a change in each of the remaining pixels of that at least one group of pixels, and wherein the pixels comprised in each of the at least one group have a distinctive shifting intensity value, thereby indicating a movement of the connected component relative to background shifting caused by the movement of the camera. In the example illustrated in Fig. 4A there are two moving objects (the person and the bird), and neither one of them moves homogeneously; therefore, in this step we identify connected components that will eventually comprise these two moving objects. Let us first focus on the bird (Fig. 4B). According to the shifting intensity values, three connected components may be identified in the bird: the right wing (431), the bird's body (432) and the bird's left wing (433).
Each of the connected components corresponds to the above definition, as it comprises one or more groups of pixels, where a change in one pixel in a group indicates a change in the rest of the pixels associated with that group. In the bird example, the left wing is a connected component having two groups of pixels (431' and 431"), the body has only one group of pixels, and the connected component of the right wing also comprises two such groups of pixels (433' and 433"). A similar analysis may be conducted for the walking person.
According to an embodiment of the invention, the method provided further comprises a step of classifying the moving objects. Two very distinct classifications may easily be demonstrated. The first one is when the moving object comprises only one connected component, which comprises only one group of pixels. This is a case of a homogeneous movement, which could be the ball in the example of Fig. 2, or a car, a truck, and the like. The second type of classification is when the moving object comprises one or more connected components, and at least one of these connected components comprises at least two different groups of pixels, where this type is associated with a more complex movement such as a walking person, a flying bird and the like. Obviously, the latter classification may be further refined depending on the relationship between the various groups, the number of groups, etc.
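A hedged sketch of such a classification step: the flow vectors inside a detected connected component are quantized so that pixels moving alike fall into the same bin, and the number of sufficiently populated bins serves as a rough estimate of the number of groups; one group suggests a homogeneous movement (ball, car), while two or more suggest a complex, articulated movement (walking person, flying bird). The bin size and minimum group fraction are assumptions of this sketch.

```python
# Hedged sketch of classification by counting motion groups inside a component.
import numpy as np

def classify_component(flow, component_mask, bin_size=2.0, min_group_fraction=0.15):
    vectors = flow[component_mask]                        # Nx2 displacements inside the component
    bins = np.round(vectors / bin_size).astype(np.int64)  # quantize similar motions together
    _, counts = np.unique(bins, axis=0, return_counts=True)
    groups = int(np.sum(counts >= min_group_fraction * len(vectors)))
    return "homogeneous movement" if groups <= 1 else "complex movement"

# Example usage with the detection sketch above: classify_component(flow, labels == obj["label"])
```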
As will be appreciated by those skilled in the art, the various embodiments of the present invention may be carried out for real time detection of one or more moving objects or for non-real time analysis of video streams.
It is to be understood that the above description only includes some embodiments of the invention and serves for its illustration. Numerous other ways of carrying out the methods provided by the present invention may be devised by a person skilled in the art without departing from the scope of the invention, and are thus encompassed by the present invention. For example, it should be clear to any person skilled in the art that the steps defining the method and process of the present invention may be carried out in a different order, and any such change in the order in which the various steps are carried out is a matter of simple selection that can be made without departing from the scope of the present invention. In addition, the first and second frames referred to herein and throughout the specification and claims may in fact be non-consecutive frames, but instead either representative frames or certain chosen frames (whether the choice is made arbitrarily or not), e.g. when the movement of the moving object is not a very rapid one. Also, the iterative analysis explained hereinbefore does not have to be restricted to a selection of the first frame such that all iterations are in respect of the first selected frame, and different frames may be used throughout the analysis.
The present invention has been described using non-limiting detailed descriptions of preferred embodiments thereof that are provided by way of example and are not intended to limit the scope of the invention. It should be understood that features described with respect to one embodiment may be used with other embodiments and that not all embodiments of the invention have all of the features shown in a particular figure. Variations of the embodiments described will occur to persons skilled in the art. Furthermore, the terms "comprise," "include," "have" and their conjugates, shall mean, when used in the claims, "including but not necessarily limited to." The scope of the invention is limited only by the following claims:

Claims

1. A method for detecting one or more moving objects in a video signal generated by a moving camera, the method comprising: (i) providing a video signal comprising a plurality of consecutive frames; (ii) for one of said plurality of consecutive frames, being a first frame, selecting a first plurality of pixels comprised in said first frame, and identifying in a preceding frame being a second frame, at least a second plurality of pixels comprised in said first plurality of pixels; (iii) based on said at least second plurality of pixels, identifying changes that have occurred in pixels that belong to said first frame;
(iv) calculating a shifting intensity value for one or more of the pixels for which changes have been identified, wherein the shifting intensity value is based on said changes; (v) generating a vector for one or more of the pixels for which changes have been identified, wherein said vector is associated with at least the location of the one or more of the pixels for which changes have been identified, and the calculated shifting intensity value thereof; (vi) identifying at least one connected component which comprises at least one group of pixels from among said at least second plurality of pixels and wherein a change in each of the pixels comprised in said at least one group of pixels is associated with a change in each of the remaining pixels of said at least one group of pixels, and wherein the pixels comprised in each of the at least one group has a distinctive shifting intensity value thereby indicating a movement of said at least one connected component relative to background shifting caused by the camera movement; and
(vii) detecting said one or more moving objects within said video signal by associating the at least one connected component therewith.
2. A method according to claim 1, wherein the changes determined in step (iii) originated from a movement of said one or more objects.
3. A method according to claim 2, further comprising repeating steps (ii) to (vi) and wherein each of said pluralities of pixels is associated with a frame that respectively precedes said first and second frames, thereby detecting said one or more moving objects within said video signal.
4. A method according to claim 2 or 3, further comprising a step of predicting background shifting and/or movement of said one or more moving objects within at least one future frame, based on information derived from a present frame and its one or more preceding frames.
5. A method according to claim 4, wherein said predicted background shifting within said at least one future frame is different from respective actual identified background shifting within said at least one future frame, and wherein said at least one connected component is re-identified in said at least one future frame by repeating steps (vi) and (vii).
6. A method according to claim 4, wherein said predicted background shifting and/or movement of said one or more moving objects within said at least one future frame is used in identifying said at least second plurality of pixels from among said first plurality of pixels in said second frame.
7. A method according to claim 3, wherein said one or more moving objects comprise one connected component comprising one group of pixels, and wherein said method further comprises the step of: (viii) classifying said one or more moving objects within said video signal according to a relative movement between said one group of pixels and the background shifting.
8. A method according to claim 3, wherein said one or more moving objects comprise at least one connected component comprising at least two groups of pixels, and wherein said method further comprises the step of: (viii) classifying said one or more moving objects within said video signal according to a relative movement detected between said at least two groups of pixels.
9. A method according to claim 1, characterized in that said one or more moving objects are detected essentially in real time.
10. A method according to claim 1, wherein the method further comprises a step of providing information related to the movement of said moving camera and utilizing said information in the detection process.
11. A computer-readable medium comprising instructions that, when executed by a processor, perform a method for establishing a computerized process for detecting one or more moving objects within a video signal generated by a moving camera, which comprises:
(i) receiving a video signal comprising a plurality of consecutive frames;
(ii) for one of said plurality of consecutive frames, being a first frame, selecting a first plurality of pixels comprised in said first frame, and identifying in a preceding frame, being a second frame, at least a second plurality of pixels comprised in said first plurality of pixels;
(iii) based on said at least second plurality of pixels, identifying changes that have occurred in pixels that belong to said first frame;
(iv) calculating a shifting intensity value for one or more of the pixels for which changes have been identified, wherein the shifting intensity value is based on said changes;
(v) generating a vector for one or more of the pixels for which changes have been identified, wherein said vector is associated with at least the location of the one or more of the pixels for which changes have been identified, and the calculated shifting intensity value thereof;
(vi) identifying at least one connected component which comprises at least one group of pixels from among said at least second plurality of pixels and wherein a change in each of the pixels comprised in said at least one group of pixels is associated with a change in each of the remaining pixels of said at least one group of pixels, and wherein the pixels comprised in each of the at least one group have a distinctive shifting intensity value, thereby indicating a movement of said at least one connected component relative to background shifting caused by the camera movement; and
(vii) detecting said one or more moving objects within said video signal by associating the at least one connected component therewith.
12. A computer program product comprising a computer usable medium having computer readable program code embodied therein for detecting one or more moving objects within a video signal generated by a moving camera, the computer program product comprising:
(i) computer readable program code for causing the computer to receive a video signal comprising a plurality of consecutive frames;
(ii) computer readable program code for causing the computer to select a first plurality of pixels comprised in one of the plurality of consecutive frames, being a first frame, and from among said first plurality of pixels to identify at least a second plurality of pixels comprised in a preceding frame, being a second frame;
(iii) computer readable program code for causing the computer to identify, based on said at least second plurality of pixels, changes that have occurred in pixels belonging to said first plurality of pixels;
(iv) computer readable program code for causing the computer to calculate a shifting intensity value for one or more of the pixels where changes have been identified, where said shifting intensity value is based upon said changes;
(v) computer readable program code for causing the computer to generate a vector for one or more of the pixels for which changes have been identified, wherein said vector is associated with at least the location of said one or more of the pixels associated with changes that have been identified, and the calculated shifting intensity value thereof;
(vi) computer readable program code for causing the computer to identify at least one connected component which comprises at least one group of pixels from among the at least second plurality of pixels and wherein a change in each of the pixels comprised in said at least one group of pixels is associated with a change in each of the remaining pixels of said at least one group of pixels, and wherein the pixels comprised in each of said at least one group have a distinctive shifting intensity value, thereby indicating a movement of the at least one connected component relative to background shifting caused by the camera movement; and
(vii) computer readable program code for causing the computer to detect said one or more moving objects within said video signal based on said at least one connected component identified.
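The pipeline recited in claims 1 and 11, which measures per-pixel changes between a frame and its preceding frame, derives a shifting intensity value per pixel, and keeps connected components whose shift is distinctive relative to the camera-induced background shift, can be illustrated with a short sketch. This is not the patented implementation: dense optical flow is used here as one possible way to measure the per-pixel changes, and the function name detect_moving_objects, the constants MIN_AREA and DEVIATION_THRESH, and the median-based background estimate are illustrative assumptions.

import cv2
import numpy as np

MIN_AREA = 100          # assumed: discard components smaller than this (pixels)
DEVIATION_THRESH = 2.0  # assumed: minimum deviation (in pixels) from the background
                        # shift for a pixel to count as distinctively moving

def detect_moving_objects(prev_gray, curr_gray):
    """Return bounding boxes of pixel groups whose shift differs from the
    global (camera-induced) background shift between two consecutive frames."""
    # Steps (ii)-(iii): relate pixels of the current frame to the preceding frame
    # and measure per-pixel changes; dense optical flow is one way to do this.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Steps (iv)-(v): the flow field gives a displacement vector per pixel
    # (location plus shift); its magnitude serves as the shifting intensity value.
    shifting_intensity = np.linalg.norm(flow, axis=2)

    # Estimate the background shifting caused by the camera as the median
    # displacement over the whole frame (assumes background pixels dominate).
    bg_shift = np.array([np.median(flow[..., 0]), np.median(flow[..., 1])])

    # Keep pixels whose displacement deviates distinctively from the background shift.
    deviation = np.linalg.norm(flow - bg_shift, axis=2)
    mask = (deviation > DEVIATION_THRESH).astype(np.uint8)

    # Step (vi): group the distinctive pixels into connected components.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)

    # Step (vii): treat sufficiently large components as moving objects.
    boxes = []
    for i in range(1, num):                  # label 0 is the non-moving background
        x, y, w, h, area = stats[i]
        if area >= MIN_AREA:
            boxes.append((int(x), int(y), int(w), int(h)))
    return boxes, bg_shift, shifting_intensity

Frame-by-frame use of the sketch (with "camera_clip.avi" as a hypothetical input) would convert consecutive frames to grayscale and feed each pair to the function:

cap = cv2.VideoCapture("camera_clip.avi")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes, bg_shift, _ = detect_moving_objects(prev_gray, gray)
    prev_gray = gray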
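Claims 4 to 6 add a prediction step: the background shifting (and/or object movement) expected in a future frame is predicted from the present frame and its predecessors, a mismatch between the predicted and the actually identified background shifting triggers re-identification of the connected components (claim 5), and the prediction can otherwise help identify the relevant pixels in the next frame (claim 6). A minimal sketch of such a predictor follows, assuming a simple averaging model over a short history; the class name, history length, and tolerance are illustrative, not taken from the patent.

from collections import deque
import numpy as np

class BackgroundShiftPredictor:
    """Extrapolates the camera-induced background shift expected in the next
    frame from the shifts measured in the most recent frames."""

    def __init__(self, history=5):
        self.shifts = deque(maxlen=history)   # recent (dx, dy) background shifts

    def update(self, bg_shift):
        self.shifts.append(np.asarray(bg_shift, dtype=float))

    def predict(self):
        """Predicted background shift for the next frame (mean of recent shifts)."""
        if not self.shifts:
            return np.zeros(2)
        return np.mean(np.stack(list(self.shifts)), axis=0)

    def mismatch(self, actual_shift, tol=1.0):
        """True when the actual shift deviates from the prediction by more than
        tol pixels; the connected components would then be re-identified."""
        return np.linalg.norm(self.predict() - np.asarray(actual_shift)) > tol

In use, update() would be called once per frame with the background shift estimated for that frame, predict() consulted before processing the next frame, and mismatch() used to decide whether the connected components need to be re-identified.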
PCT/SG2008/000188 2008-05-16 2008-05-16 Method and device for analyzing video signals generated by a moving camera WO2009139723A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP08754025A EP2289045A1 (en) 2008-05-16 2008-05-16 Method and device for analyzing video signals generated by a moving camera
CN2008801288362A CN102203828A (en) 2008-05-16 2008-05-16 Method and device for analyzing video signals generated by a moving camera
PCT/SG2008/000188 WO2009139723A1 (en) 2008-05-16 2008-05-16 Method and device for analyzing video signals generated by a moving camera
AU2008356238A AU2008356238A1 (en) 2008-05-16 2008-05-16 Method and device for analyzing video signals generated by a moving camera
IL207770A IL207770A0 (en) 2008-05-16 2010-08-24 Method and device for analyzing video signals generated by a moving camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/SG2008/000188 WO2009139723A1 (en) 2008-05-16 2008-05-16 Method and device for analyzing video signals generated by a moving camera

Publications (2)

Publication Number Publication Date
WO2009139723A1 (en) 2009-11-19
WO2009139723A8 (en) 2010-01-14

Family ID: 40200908

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2008/000188 WO2009139723A1 (en) 2008-05-16 2008-05-16 Method and device for analyzing video signals generated by a moving camera

Country Status (5)

Country Link
EP (1) EP2289045A1 (en)
CN (1) CN102203828A (en)
AU (1) AU2008356238A1 (en)
IL (1) IL207770A0 (en)
WO (1) WO2009139723A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101799876B (en) * 2010-04-20 2011-12-14 王巍 Video/audio intelligent analysis management control system
CN111381357B (en) * 2018-12-29 2021-07-20 中国科学院深圳先进技术研究院 Image three-dimensional information extraction method, object imaging method, device and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101022505A (en) * 2007-03-23 2007-08-22 中国科学院光电技术研究所 Mobile target in complex background automatic testing method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050104964A1 (en) * 2001-10-22 2005-05-19 Bovyrin Alexandr V. Method and apparatus for background segmentation based on motion localization
US20040201706A1 (en) * 2001-10-26 2004-10-14 Katsutoshi Shimizu Corrected image generating apparatus and corrected image generating program storage medium
WO2004038659A2 (en) * 2002-10-21 2004-05-06 Sarnoff Corporation Method and system for performing surveillance
US20050225637A1 (en) * 2004-04-13 2005-10-13 Globaleye Network Intelligence Ltd. Area monitoring
US20060078162A1 (en) * 2004-10-08 2006-04-13 Dynapel, Systems, Inc. System and method for stabilized single moving camera object tracking

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101859436A (en) * 2010-06-09 2010-10-13 王巍 Large-amplitude regular movement background intelligent analysis and control system
EP3336660A3 (en) * 2016-12-14 2018-10-10 Immersion Corporation Automatic haptic generation based on visual odometry
US10600290B2 (en) 2016-12-14 2020-03-24 Immersion Corporation Automatic haptic generation based on visual odometry

Also Published As

Publication number Publication date
AU2008356238A1 (en) 2009-11-19
IL207770A0 (en) 2010-12-30
WO2009139723A8 (en) 2010-01-14
EP2289045A1 (en) 2011-03-02
CN102203828A (en) 2011-09-28

Similar Documents

Publication Publication Date Title
TWI750498B (en) Method and device for processing video stream
Zhang et al. Wide-area crowd counting via ground-plane density maps and multi-view fusion cnns
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
EP2330557B1 (en) Moving body detection method and moving body detection device
Lazaridis et al. Abnormal behavior detection in crowded scenes using density heatmaps and optical flow
Wojek et al. Monocular 3d scene understanding with explicit occlusion reasoning
US20160191795A1 (en) Method and system for presenting panoramic surround view in vehicle
JP2016099941A (en) System and program for estimating position of object
CN104378582A (en) Intelligent video analysis system and method based on PTZ video camera cruising
US10853949B2 (en) Image processing device
JP5832910B2 (en) Image monitoring device
US20120155707A1 (en) Image processing apparatus and method of processing image
KR101548639B1 (en) Apparatus for tracking the objects in surveillance camera system and method thereof
KR101472674B1 (en) Method and apparatus for video surveillance based on detecting abnormal behavior using extraction of trajectories from crowd in images
CN110222616B (en) Pedestrian abnormal behavior detection method, image processing device and storage device
CN103391424A (en) Method for analyzing object in image captured by monitoring camera and object analyzer
Selver et al. Camera based driver support system for rail extraction using 2-D Gabor wavelet decompositions and morphological analysis
Minoura et al. Crowd density forecasting by modeling patch-based dynamics
EP2289045A1 (en) Method and device for analyzing video signals generated by a moving camera
Saif et al. Real time vision based object detection from UAV aerial images: a conceptual framework
Burkert et al. People tracking and trajectory interpretation in aerial image sequences
US11544926B2 (en) Image processing apparatus, method of processing image, and storage medium
WO2012153868A1 (en) Information processing device, information processing method and information processing program
US9183448B2 (en) Approaching-object detector, approaching object detecting method, and recording medium storing its program
KR20140045834A (en) Method and apparatus for monitoring video for estimating size of single object

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase
Ref document number: 200880128836.2
Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 08754025
Country of ref document: EP
Kind code of ref document: A1

WWE Wipo information: entry into national phase
Ref document number: 207770
Country of ref document: IL
Ref document number: 2008754025
Country of ref document: EP

WWE Wipo information: entry into national phase
Ref document number: 2008356238
Country of ref document: AU

WWE Wipo information: entry into national phase
Ref document number: 5348/CHENP/2010
Country of ref document: IN

ENP Entry into the national phase
Ref document number: 2008356238
Country of ref document: AU
Date of ref document: 20080516
Kind code of ref document: A

NENP Non-entry into the national phase
Ref country code: DE