CN102203828A - Method and device for analyzing video signals generated by a moving camera - Google Patents

Method and device for analyzing video signals generated by a moving camera

Info

Publication number
CN102203828A
CN102203828A (application CN2008801288362A / CN200880128836A)
Authority
CN
China
Prior art keywords
pixel
frame
pixels
mobile
intensity value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2008801288362A
Other languages
Chinese (zh)
Inventor
Ofer Miller (奥弗·米勒)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ARTIVISION TECHNOLOGIES Ltd
Original Assignee
ARTIVISION TECHNOLOGIES Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ARTIVISION TECHNOLOGIES Ltd filed Critical ARTIVISION TECHNOLOGIES Ltd
Publication of CN102203828A publication Critical patent/CN102203828A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A method is provided for detecting moving objects in a video signal generated by a moving camera. A first plurality of pixels is selected from a first frame, and a second plurality of pixels, comprised in that first plurality, is identified in a preceding frame. Based upon the second plurality of pixels, changes that occurred in pixels belonging to the first plurality of pixels are identified, and a shifting intensity value is calculated for the pixels for which changes have been identified. Then, a vector associated with the pixels' locations and the calculated shifting intensity value is generated, and a connected component comprising a group of pixels comprised in the second plurality of pixels is identified. The group is characterized in that a change in each of its pixels is associated with a change in each of the remaining pixels of that group, and in that the pixels comprised in the group have a distinctive shifting intensity value, indicating a movement of the connected component associated with the moving object relative to the background shifting caused by the camera movement.

Description

Method and apparatus for analyzing a video signal generated by a moving camera
Technical field
The present invention relates generally to the field of image processing, and more particularly to the analysis of video signals generated by a moving camera.
Background art
Image processing is very useful in surveillance and security cameras, and many methods are known in the art for detecting activity in a specific region covered by one or more fixed-mounted cameras. Essentially, the common approach to analyzing a video stream is to divide it into frames and compare successive frames using a change detection algorithm, which removes the background and focuses on the changes caused by the motion of objects captured in the video stream. The importance of such reliable, computer-operated systems is that they can recognize motion, save manpower (for example, no person has to watch every camera) and overcome the problems caused by human fatigue and error. Moreover, some of these systems can even identify movements that are invisible to the human eye.
For example, British patent application GB200507525 discloses a security monitoring system that uses video input to distinguish moving objects from stationary ones. In addition, when a captured object changes from moving to stationary, the system may trigger an alarm. According to that disclosure, the video stream is first processed frame by frame, each frame undergoing edge detection, and groups of successive frames are then compared to determine which detected edges persist from frame to frame. Edges that do not persist are discarded, so that data relating to moving objects in the scene, for example people, can be removed.
US patent application No. 2008002771 discloses determining, by analysis, whether the scene presented by a video segment is static or in motion. When the segment presents a moving scene, the segment can be further analyzed to determine whether the motion of the scene stems from motion of the camera or of the captured objects. That disclosure deals with two types of motion: the first is controlled camera motion, such as panning, tilting, zooming, rotating or moving the camera back and forth, and the second is motion of an unstable camera. Although both types of camera motion may affect particular frames, the above disclosure still does not solve the problem of extracting and identifying moving objects by analyzing a video stream generated by a moving camera.
The main obstacle to be overcome stems from the change that a moving camera introduces over the entire image. One of the few publications that attempt to solve the problems created by a moving camera is US patent application US2006078162. It discloses a system that acquires a series of video images of an object with a movable camera and distinguishes the object region from the background region on the basis of a material frame. The frame is determined either by optical flow estimation or by the user drawing around the object with a selection tool such as a mouse or joystick; the camera then tracks the particular object by following the object's motion and the camera's own motion. The drawback of this solution, however, is that it can only track a particular, well-defined object.
Summary of the invention
Accordingly, it is an object of the present invention to provide a method for detecting one or more moving objects in a video signal produced by a moving camera.
Other objects of the invention will become apparent as the description proceeds.
According to a first embodiment of the present invention, there is provided a method for detecting one or more moving objects in a video signal produced by a moving camera, the method comprising:
(i) providing a video signal that includes a plurality of successive frames;
(ii) selecting one of the plurality of successive frames as a first frame, selecting a first plurality of pixels from said first frame, and identifying, as a second plurality of pixels, a group of pixels that is comprised in the first frame and can also be identified in its preceding frame, the first plurality of pixels comprising at least one second plurality of pixels;
(iii) identifying, based on the at least one second plurality of pixels, changes that occurred in pixels comprised in the first frame;
(iv) calculating a shifting intensity value for the one or more pixels for which a change has been identified, wherein the shifting intensity value is based on said change;
(v) generating a vector for the one or more pixels for which a change has been identified, wherein the vector is associated at least with the locations of the one or more changed pixels and the calculated shifting intensity value;
(vi) identifying, from among said at least one second plurality of pixels, at least one connected component comprising at least one group of pixels, wherein a change in each pixel of the at least one group is associated with a change in each of the remaining pixels of that group, and each pixel of the at least one group has a distinctive shifting intensity value indicating that the at least one connected component has moved relative to the background shift caused by the camera's movement; and
(vii) detecting said one or more moving objects in the video signal based on the at least one connected component associated therewith.
The term "moving camera" as used herein and throughout the specification and claims denotes a camera that has relative motion with respect to the captured area or background. This relative motion may be motion of the camera with respect to the background area or motion of the background area with respect to the camera; therefore, any reference to camera motion, for example a background shift caused by the camera's movement, should be understood as relative motion between the camera and the background. For the reader's convenience, camera movement in this application refers to the movement of the camera entity relative to the background.
The term "shifting intensity value" as used herein and throughout the specification and claims is a parameter value assigned to one or more pixels whose locations are known in two different frames. The shifting intensity value reflects the change in a pixel's location between the two images.
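By way of illustration only, and not as the patented algorithm itself, the shifting intensity value can be approximated as the magnitude of each pixel's displacement between the two frames. The sketch below uses OpenCV's Farneback dense optical flow as a stand-in for the pixel-change identification; the function name and the parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def shifting_intensity(prev_frame, cur_frame):
    """Approximate a per-pixel 'shifting intensity value' as the magnitude of
    the pixel's displacement between two consecutive frames.
    Returns the displacement field (H, W, 2) and its magnitude map (H, W)."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow stands in here for identifying the change of each pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    intensity = np.linalg.norm(flow, axis=2)  # change of location per pixel
    return flow, intensity
```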
According to another preferred embodiment, the changes identified in step (iii) stem from the motion of one or more objects in the area captured by the moving camera.
According to another preferred embodiment, the at least one second plurality of pixels is essentially identical to the first plurality of pixels.
According to another preferred embodiment of the invention, the method provided further comprises repeating steps (ii) to (vi), wherein the first and second pluralities of pixels are associated with the frame preceding said first frame and the frame preceding the second frame, respectively, so that one or more moving objects in the video signal can be detected.
According to another embodiment of the invention, the method further comprises a step of predicting the shift of the background pixels in a future frame and/or the motion of one or more of said moving objects, based on information in the current frame or in one or more preceding frames.
According to another embodiment of the invention, if the predicted shift of the background pixels in the future frame differs from the shift of the background pixels in the actual frame, this means that the camera underwent an unexpected motion while capturing that frame (for example, a rotation of the camera by 45 degrees, which changes the background shift relative to the moving objects), and the method provided by the present invention further comprises re-identifying the at least one connected component by repeating steps (vi) and (vii).
According to another embodiment of the invention, the predicted shift of the background pixels in the future frame and/or the predicted motion of said one or more moving objects is used to identify the at least one second plurality of pixels from within the first plurality of pixels of the second frame. For example, by knowing the position of one or more moving objects in a future frame, the content of the preceding frame can be projected forward when that future frame becomes the current frame.
According to a further preferred embodiment of the invention, when a connected component comprises only one group of pixels, the method may further comprise:
a step of classifying the one or more moving objects in the video signal on the basis of the relative motion between the identified group of pixels and the background shift.
Thus, for example, when all pixels of the identified group move in synchrony relative to the background shift, this can indicate that the moving object is, say, a car.
Alternatively, the one or more objects may be classified by comparing this connected component with the motion of objects relative to the stationary background (for example, objects whose change of position is identical to that of the background pixels), since classifying a moving object by comparison with background objects is easier than contrasting it with the background pixels.
According to another embodiment of the invention, the at least one connected component comprises at least two groups of pixels, and the method provided further comprises a step of classifying the one or more moving objects in the video signal on the basis of the relative motion between the at least two identified groups of pixels. For example, when a person is identified walking and waving at the same time, the pixels of the hand may fall in one group and the pixels of the body in another, and the relative motion between the two groups (since the motion of the hand differs from that of the body) provides a better way of classifying this connected component (a person), as shown in the sketch below.
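One simple way to use the relative motion between two groups of a connected component (the waving hand versus the body in the example above) is to compare the average shift vectors of the groups. This is a sketch under assumed names and an assumed threshold, not the classification defined by the claims.

```python
import numpy as np

def groups_move_differently(flow, group_a_mask, group_b_mask, threshold=1.0):
    """Compare the mean shift vectors of two pixel groups of one connected
    component.  A large difference suggests articulated motion (hand vs. body);
    a small difference suggests rigid, homogeneous motion."""
    mean_a = flow[group_a_mask].mean(axis=0)  # average (dx, dy) of group A
    mean_b = flow[group_b_mask].mean(axis=0)  # average (dx, dy) of group B
    return np.linalg.norm(mean_a - mean_b) > threshold
```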
According to another embodiment of the invention, the detection of moving objects is carried out in a substantially real-time detection procedure, and preferably the video signal is a live signal. As those skilled in the art will appreciate, equivalent variants of the method can also be applied to any video signal.
According to another embodiment of the invention, the method is adapted to receive data containing information related to the camera's motion (for example, its speed and/or direction) and to incorporate the received data into the analysis procedure.
Another object of the present invention is to provide a computer-readable medium containing instructions for carrying out the method of the invention which, when executed by a processor, provide a computerized process for detecting one or more moving objects in a video signal produced by a moving camera, the process comprising:
(i) receiving a video signal that includes a plurality of successive frames;
(ii) taking one of said plurality of successive frames as a first frame, and selecting, as a second plurality of pixels, pixels of a first plurality of pixels comprised in said first frame that can also be identified in the preceding frame, said first plurality of pixels comprising at least one second plurality of pixels;
(iii) identifying, based on the at least one second plurality of pixels comprised therein, changes that occurred in pixels comprised in the first frame;
(iv) calculating a shifting intensity value for the pixels for which a change has been identified, wherein the shifting intensity value is based on said change;
(v) generating a vector for the one or more pixels for which a change has been identified, wherein the vector is associated at least with the locations of the one or more changed pixels and the calculated shifting intensity value;
(vi) identifying, from among said at least second plurality of pixels, at least one connected component comprising at least one group of pixels, wherein a change in each pixel of the at least one group is associated with a change in each of the remaining pixels of that group, and each pixel of the at least one group has a distinctive shifting intensity value indicating that the at least one connected component has moved relative to the background shift caused by the camera's movement; and
(vii) detecting one or more moving objects in the video signal based on the at least one connected component associated therewith. The computer-readable medium containing the instructions for carrying out this method may be, for example, software embedded on a CD which, when inserted into a computer and run, detects the one or more moving objects.
According to still another embodiment of the invention, there is provided a computer program product comprising a computer-usable medium having embedded therein computer-readable program code for detecting one or more moving objects in a video signal produced by a moving camera, the computer program product comprising:
(i) computer-readable program code for causing a computer to receive a video signal that includes a plurality of successive frames;
(ii) computer-readable program code for causing the computer to select a first plurality of pixels comprised in a first frame of the plurality of successive frames, and to identify from among that first plurality of pixels a second plurality of pixels comprised in the preceding frame, taken as a second frame;
(iii) computer-readable program code for causing the computer to identify, based on at least the second plurality of pixels, changes that occurred in pixels of the first plurality of pixels;
(iv) computer-readable program code for causing the computer to calculate a shifting intensity value for the pixels for which a change has been identified, the shifting intensity value being based on those changes;
(v) computer-readable program code for causing the computer to generate a vector for the one or more pixels for which a change has been identified, wherein the vector is associated at least with the locations of said one or more changed pixels and the calculated shifting intensity value;
(vi) computer-readable program code for causing the computer to identify, from the at least one second plurality of pixels, a connected component that includes at least one group of pixels, wherein each pixel of the at least one group has a distinctive shifting intensity value indicating that the at least one connected component has moved relative to the background shift caused by the camera's movement; and
(vii) computer-readable program code for causing the computer to detect the one or more moving objects from the video signal based on the at least one identified connected component.
Description of the drawings
For a fuller understanding of the present invention, the invention is described in further detail below with reference to the accompanying drawings, in which:
Figures 1A to 1C illustrate different scenarios employing a moving camera;
Figure 2 is a schematic illustration of a video signal;
Figure 3 illustrates the first and second pluralities of pixels in the embodiment of Figure 2; and
Figure 4 is another schematic illustration of a video signal that includes complex motion.
Detailed description of embodiments
The present invention is further described in detail below by way of non-limiting embodiments, with reference to the accompanying drawings.
The following embodiments explain in detail specific ways of implementing the invention, by which a video signal produced by a moving camera is processed in order to detect one or more moving objects in that video signal.
Figure 1 shows three scenarios in which the concept of a "moving camera" as used in the present invention applies:
Figure 1A shows two reference points 110 and 120. The first reference point 110 is located on camera 112, a stationary surveillance camera of the prior art, and the other reference point 120 is located on a fixed object within the area captured by the camera, for example object 122 in the present embodiment. The captured area further comprises a tree 124, a stone 126 and a walking person 128. Since the relative motion between the two reference points in this embodiment is zero, there is no relative motion between the camera and the background; this prior-art scenario is therefore not covered by the present invention.
Figure 1B also shows reference points 130 and 140. Reference point 130 is located on camera 132 carried on board aircraft 134, and reference point 140 is located on a fixed object in the captured area, in this example stone 142. The captured area further comprises a tree 148, a walking person 146 and a travelling car 144. In this embodiment there is relative motion between the camera and the background, caused by the camera's motion. The camera in this case therefore falls within the definition of a moving camera according to the present invention, and the method described herein can be used to detect the two moving objects 144 and 146.
Figure 1C shows two reference points 150 and 160. The first reference point 150 is located on camera 152 mounted on a watch tower, while reference point 160 is located on the deck 166 of a boat within the area captured by the camera. Reference point 160 is located on a fixed object on the deck, such as anchor 162. The captured area further comprises two life buoys 164 and two walking persons 168. In this embodiment the motion of the sailing boat causes relative motion between the two reference points, so even though the camera itself is fixed at the top of the tower, it should still be considered a moving camera in the sense of the present invention, because there is relative motion between the camera and the boat. Here, by the method provided by the invention, the moving objects, namely the persons 168 walking on the deck, can be identified and detected separately from the moving boat.
In order to better understand the method provided by the present invention, let us consider the schematic illustration in Figure 2 of frames comprised in a video signal produced by a moving camera, while recalling the steps carried out by the method according to one embodiment of the invention.
In the first step of the method, a video signal comprising successive frames 210-217 is provided. The video signal in the embodiment shown in Figure 2 comprises a plurality of N successive frames, and frames 210-217 shown in the figure are part of the frames comprised in that video signal. These frames were captured consecutively and depict the evolution over time of a falling ball (226). Preferably, but not necessarily, the time interval between the capture of any two of these frames is equal to the interval between any other two successive frames. Furthermore, the method should not be construed as being confined to a given number of N frames; as will be discussed, the video signal may be a live broadcast, in which case the value of N changes over time and additional information may be included to improve the accuracy of the analysis.
In the next step, a first plurality of pixels comprised in a first frame of the plurality of successive frames is selected, and a second plurality of pixels, i.e. pixels of the first plurality that can also be identified in a preceding frame taken as a second frame, is identified.
Let us now arbitrarily select frame n=i (214) as the first frame. This frame comprises a table (220), a vase (222), two lamps (224' and 224") and a ball (226). For ease of understanding, we select all pixels of frame n=i as the first plurality of pixels (the gray area shown in Figure 3A). Let us now look at the preceding frame, n=i-1 (213). Because of the camera's motion, some pixels of the first plurality of pixels (i.e. of frame 214) do not appear in frame n=i-1 (which shows, for example, only part of lamp 224"). Therefore, all pixels that are in frame 214 and can also be identified in frame n=i-1 (213) serve as the second plurality of pixels, and frame 213 serves as the second frame, as sketched below.
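As a rough illustration (not the method as claimed), the second plurality of pixels can be approximated as those pixels of the first frame whose estimated matching location falls inside the preceding frame, which excludes pixels, such as the partially visible lamp, that the camera motion brought in from outside the previous field of view. The flow direction and the helper name below are assumptions.

```python
import numpy as np

def second_plurality_mask(flow_cur_to_prev, frame_shape):
    """Mark pixels of the current frame whose matching location in the
    preceding frame lies inside that frame's bounds.  'flow_cur_to_prev' is a
    displacement field computed from the current frame towards the preceding
    one, so adding it to each pixel's coordinates gives the source position."""
    h, w = frame_shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = xs + flow_cur_to_prev[..., 0]   # estimated x in the preceding frame
    src_y = ys + flow_cur_to_prev[..., 1]   # estimated y in the preceding frame
    return (src_x >= 0) & (src_x < w) & (src_y >= 0) & (src_y < h)
```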
In the next step, based on the second plurality of pixels, the changes that occurred in the pixels of the first plurality that are comprised in the second plurality are identified. Thus, based on the second plurality of pixels (the gray shaded area in Figure 3B), the changes that occurred in the pixels of the first plurality (the gray shaded area in Figure 3A) are identified.
We assume that these changes stem from the camera's movement; it is easy to see in this embodiment that moving the camera to the left causes most of the background pixels to change by moving to the right.
The next step of the present embodiment is to calculate, on the basis of the identified changes, the shifting intensity value of the one or more identified pixels. The shifting intensity value is a parameter assigned to each pixel of the second plurality of pixels, obtained from the respective change in location of each identified pixel. In our embodiment, all pixels of the second plurality (i.e. all pixels included in the gray area of Figure 3B), except the pixels belonging to the ball (226), have the same shifting intensity value, which in effect derives from the camera's speed of movement. Since the ball is the only moving object, the pixels associated with the ball have a different shifting intensity value.
Then, a vector is created for the one or more pixels identified as changed; this vector is associated with the locations of the one or more changed pixels and with the calculated shifting intensity value. That is, a vector is generated for each pixel of the first plurality (the gray area in Figure 3A) that has a corresponding similar pixel in the second plurality. These vectors contain the location data of each pixel in the first frame, together with the shifting intensity value calculated in the previous step.
In the next step, one (or more) connected components are identified. The embodiment of Figure 2 is, in many respects, a simple one. There is only one moving object (the ball), its motion is homogeneous, and the object (the ball) does not comprise parts with different motion characteristics. Hence, the present embodiment includes a step of identifying only one group of pixels associated with this connected component. Finally, the moving object, i.e. the ball, is detected by the connected vectors associated with it, as in the sketch below.
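A compact sketch of how this chain of steps might look in code for the ball example: the dominant shift of the frame is taken as the background shift caused by the camera, pixels whose shift deviates from it are kept, and connected components of those pixels become candidate moving objects. The median estimate, the thresholds and the OpenCV calls are illustrative assumptions rather than the patented algorithm.

```python
import cv2
import numpy as np

def detect_moving_components(flow, deviation_thresh=2.0, min_area=50):
    """Group pixels whose shift differs distinctly from the dominant background
    shift induced by the camera motion, and return one mask per component."""
    # Dominant shift of the whole frame ~ background shift caused by the camera.
    background_shift = np.median(flow.reshape(-1, 2), axis=0)
    # Pixels whose shift deviates from the background shift are candidate movers.
    deviation = np.linalg.norm(flow - background_shift, axis=2)
    mask = (deviation > deviation_thresh).astype(np.uint8)
    num_labels, labels = cv2.connectedComponents(mask)
    components = []
    for lbl in range(1, num_labels):          # label 0 is the background
        component = labels == lbl
        if component.sum() >= min_area:       # discard tiny, noisy components
            components.append(component)
    return components
```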
According to one embodiment of present invention, further repeating step 2 to 6 or make equivalent transformation as required.For example, as repetition first, we should be with frame n=i-1 (213) as first frame, and frame n=i-2 (212) is as second frame, and we just can proceed the step of the above-mentioned mobile ball of detection like this.In above-mentioned process, we at every turn in first new frame of definition with its former frame as second frame, said process will be easier to carry out.
Now let us is looked at that another one embodiment of the present invention, comprises further in the method that is provided that projected background moves the step of moving with a plurality of mobile objects of institute.Let us is got back among the embodiment of Fig. 2 of indefiniteness, and wherein first frame is frame n=i, and above-mentioned step is applicable to frame n=i and its preceding frame.According to the information relevant with the mobile object motion in the processed frame with background transitions, can infer conversion and the motion of mobile object and the position of the mobile object among the frame n=i+1 (ball 226) of background (perhaps stationary object in the background such as desk 220 and vase 222), the conversion of the background that is doped and the position of mobile object can be used in the optimization step identification (ii) during as first frame with frame n=i+1.Predicted background transitions can further be used for representing the variation of camera motion.For example, if video camera is placed on the vehicle of straight-line travelling, if car takes a sudden turn suddenly, actual background transitions that takes place and the difference that essence is arranged that predicts can draw some conclusions about camera motion on the basis of this difference.The particular camera motion change causes the variation (for example from the front elevation that obtains mobile object to side view) of moving object.
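A minimal sketch of such a prediction, under the assumption that the background shift changes smoothly from frame to frame; the helper names and the tolerance are illustrative, and a real system would use a richer camera-motion model.

```python
import numpy as np

def predict_background_shift(recent_shifts):
    """Linearly extrapolate the next background shift (dx, dy) from the shifts
    measured in the previous frames."""
    shifts = np.asarray(recent_shifts, dtype=float)
    if len(shifts) < 2:
        return shifts[-1]
    velocity = shifts[-1] - shifts[-2]        # frame-to-frame change of the shift
    return shifts[-1] + velocity

def camera_motion_changed(predicted, actual, tolerance=3.0):
    """Flag an unexpected camera motion (e.g. a sudden turn) when the measured
    background shift departs strongly from the predicted one."""
    return np.linalg.norm(np.asarray(actual) - np.asarray(predicted)) > tolerance
```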
As explained above, the embodiment shown in Figure 2 is a somewhat simplified one and is far from representing the full potential of the invention. To appreciate some of the additional potential of the invention, let us look at Figure 4A, in which the moving objects amount to more than a single ball (as was the case in the embodiment of Figure 2). Three frames (410-412) taken from a complete video signal are shown in Figure 4A. In these frames we can see a road (425), a person (420) and a bird (430). The camera in this part of the video stream moves to the right, causing the background to shift to the left. In addition, the person walks to the right while waving a hand, and the bird flies in circles in the sky. Let us review step (vi) again. In this step, at least one connected component is identified; a connected component is defined as comprising at least one group of pixels of the second plurality of pixels, wherein a change in each pixel comprised in the at least one group is associated with a change in each of the remaining pixels of that group, and each pixel comprised in the at least one group has a specific shifting intensity value that can indicate that this connected component has moved relative to the background shift caused by the camera's movement. In the illustrated embodiment there are two moving objects (the person and the bird), and they do not move homogeneously, so in this step we end up with connected components covering two moving objects. Let us focus on the bird (Figure 4B). Three connected components of the bird can be determined according to the shifting intensity values, namely the right wing (431), the body of the bird (432) and the left wing (433). Each connected component corresponds to the definition above, since each comprises one or more groups of pixels, and a change in any pixel of a group reflects a change in the remaining pixels of that group. In this bird example, the left wing is a connected component containing two groups of pixels (433' and 433"), the body of the bird is a connected component containing one group of pixels, and the connected component of the right wing also comprises two groups of pixels (431' and 431"). The same analysis can be carried out for the walking person.
According to an embodiment of the present invention, the method provided further comprises a step of classifying the moving objects. Two distinct types of classification are readily apparent. The first is a moving object that comprises only a connected component containing a single group of pixels. This is the homogeneous-motion case, such as the ball of the Figure 2 embodiment, a car, a truck or the like. The second type of classification is a moving object that comprises one or more connected components with at least two different groups of pixels; this type relates to complex motion, such as that of the walking person, the circling bird or the like. Clearly, the latter classification can be further subdivided according to the relationships between the groups of pixels, the number of groups of pixels, and so on. A rough sketch of this distinction follows.
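The distinction between the two classification types can be approximated by how uniform the shift vectors are inside a detected component: a rigid mover such as the ball or a car has nearly identical shifts everywhere, while an articulated mover such as a walking person or a flying bird does not. The spread measure and the threshold below are illustrative assumptions.

```python
import numpy as np

def classify_component(flow, component_mask, spread_thresh=1.5):
    """Classify a moving component as 'rigid' (homogeneous motion, first type)
    or 'articulated' (parts moving differently, second type), based on the
    spread of its shift vectors."""
    shifts = flow[component_mask]                 # (N, 2) shift vectors
    spread = np.linalg.norm(shifts.std(axis=0))   # variability of the shifts
    return "rigid" if spread < spread_thresh else "articulated"
```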
As those skilled in the art will know, the various embodiments of the present invention may be used to detect one or more moving objects in real time, or for non-real-time analysis of a video stream.
It should be noted that the above description covers only some of the embodiments comprised in the present invention and is intended merely to illustrate it. Those skilled in the art may apply many other variations to the method provided by the present invention without departing from its scope, and such variations are also encompassed by the present invention. For example, the steps and processes defining the method of the invention may be carried out by a person skilled in the art in a different order. It should be realized that any change in the order of the steps of the invention is a simple matter and can be made without departing from the spirit of the invention. In addition, the first frame and the second frame referred to herein and throughout the specification and claims may in fact be non-adjacent frames, and not necessarily representative or specially selected frames (whether or not selected arbitrarily), for example when the motion of the moving object is not very rapid. Moreover, the selection of the first frame should not be construed as constraining all repeated analyses; the repeated analyses need not all relate to the originally selected first frame, and other frames may also be used in the overall analysis process.
The present invention has been described in detail by way of non-limiting preferred embodiments, which are not intended to limit the scope of protection of the invention. It should be noted that features described in connection with one embodiment may also be used in other embodiments, and not every embodiment shows all the features of a particular figure. Those skilled in the art may make variations to the described embodiments of the invention. Furthermore, the terms "comprise", "include", "have" and their conjugates, as used in the claims, are to be construed as meaning "including but not limited to"; the invention is limited only by the following claims.

Claims (12)

1. A method for detecting one or more moving objects in a video signal produced by a moving camera, the method comprising:
(i) providing a video signal that includes a plurality of successive frames;
(ii) selecting one of the plurality of successive frames as a first frame, selecting a first plurality of pixels from said first frame, and identifying, as a second plurality of pixels, a group of pixels that is comprised in the first frame and can also be identified in its preceding frame, the first plurality of pixels comprising at least one second plurality of pixels;
(iii) identifying, based on the at least one second plurality of pixels, changes that occurred in pixels comprised in the first frame;
(iv) calculating a shifting intensity value for the pixels for which a change has been identified, wherein the shifting intensity value is based on said change;
(v) generating a vector for the one or more pixels for which a change has been identified, wherein the vector is associated at least with the locations of the one or more changed pixels and the shifting intensity value;
(vi) identifying, from among said at least second plurality of pixels, at least one connected component comprising at least one group of pixels, wherein a change in each pixel of the at least one group is associated with a change in each of the remaining pixels of said at least one group, and each pixel of the at least one group has a distinctive shifting intensity value indicating that the at least one connected component has moved relative to the background shift caused by the camera's movement; and
(vii) detecting said one or more moving objects in the video signal based on the at least one connected component associated therewith.
2. The method of claim 1, wherein the changes identified in step (iii) stem from the movement of said one or more objects.
3. The method for detecting one or more moving objects in a video signal according to claim 2, further comprising repeating steps (ii) to (vi), wherein the first and second pluralities of pixels are associated with the frame preceding the first frame and the frame preceding the second frame, respectively.
4. The method according to claim 2 or 3, further comprising a step of predicting, based on information in the current frame or in one or more frames preceding it, the shift of background pixels in at least one future frame and/or the motion of one or more of said moving objects.
5. The method of claim 4, wherein, if the predicted shift of background pixels in the at least one future frame differs from the shift of background pixels that actually occurs, an unexpected motion of the camera while capturing said image is indicated, and said at least one connected component in said at least one future frame is identified anew by repeating steps (vi) and (vii).
6. The method of claim 4, wherein the predicted shift of background pixels in said at least one future frame and/or the predicted motion of said one or more moving objects is used to identify said at least one second plurality of pixels from within the first plurality of pixels of the second frame.
7. The method of claim 3, wherein said one or more moving objects comprise only one connected component having only one group of pixels, and wherein the method may further comprise a step (viii) of classifying the one or more moving objects in said video signal based on the relative motion between said one group of pixels and the background shift.
8. The method of claim 3, wherein said moving object comprises at least one connected component having at least two groups of pixels, and wherein the method further comprises a step (viii) of classifying said one or more moving objects in said video signal based on the relative motion detected between said at least two groups of pixels.
9. The method according to claim 1, wherein said one or more moving objects are detected substantially in real time.
10. The method of claim 1, further comprising providing information about the movement of said moving camera and using said information in the detection process.
11. A computer-readable medium containing instructions for carrying out a method which, when executed by a processor, provide a computerized process for detecting one or more moving objects in a video signal produced by a moving camera, the process comprising:
(i) receiving a video signal that includes a plurality of successive frames;
(ii) taking one of said plurality of successive frames as a first frame, and selecting, as a second plurality of pixels, pixels comprised in a first plurality of pixels of said first frame that can also be identified in a preceding frame, said first plurality of pixels comprising at least one second plurality of pixels;
(iii) identifying, based on said at least second plurality of pixels, changes that occurred in pixels of said first frame;
(iv) calculating a shifting intensity value for one or more pixels for which a change has been identified, wherein the shifting intensity value is based on said change;
(v) generating a vector for the one or more pixels for which a change has been identified, wherein said vector is associated at least with the locations of the one or more changed pixels and the shifting intensity value;
(vi) identifying, from among said at least second plurality of pixels, at least one connected component containing at least one group of pixels, wherein a change in each pixel of the at least one group is associated with a change in each of the remaining pixels of said at least one group, and each pixel of said at least one group has a distinctive shifting intensity value indicating that the at least one connected component has moved relative to the background shift caused by the camera's movement;
(vii) detecting one or more objects in the video signal based on the connected component associated therewith.
12. A computer program product comprising a computer-usable medium having embedded therein computer-readable program code for detecting one or more moving objects in a video signal produced by a moving camera, the computer program product comprising:
(i) computer-readable program code for causing a computer to receive a video signal that includes a plurality of successive frames;
(ii) computer-readable program code for causing the computer to select a first plurality of pixels of a first frame of the plurality of successive frames, and to identify from among that first plurality of pixels a second plurality of pixels comprised in the preceding frame, i.e. the second frame;
(iii) computer-readable program code for causing the computer to identify, based on at least the second plurality of pixels, changes that occurred in pixels of the first plurality of pixels;
(iv) computer-readable program code for causing the computer to calculate a shifting intensity value for the pixels for which a change has been identified, the shifting intensity value being based on those changes;
(v) computer-readable program code for causing the computer to generate a vector for the one or more pixels for which a change has been identified, wherein the vector is associated at least with the locations of said one or more changed pixels and the calculated shifting intensity value;
(vi) computer-readable program code for causing the computer to identify, from the at least one second plurality of pixels, a connected component that includes at least one group of pixels, wherein each pixel of the at least one group has a distinctive shifting intensity value indicating that the at least one connected component has moved relative to the background shift caused by the camera's movement; and
(vii) computer-readable program code for causing the computer to detect the one or more moving objects in the video signal based on the at least one identified connected component.
CN2008801288362A 2008-05-16 2008-05-16 Method and device for analyzing video signals generated by a moving camera Pending CN102203828A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/SG2008/000188 WO2009139723A1 (en) 2008-05-16 2008-05-16 Method and device for analyzing video signals generated by a moving camera

Publications (1)

Publication Number Publication Date
CN102203828A true CN102203828A (en) 2011-09-28

Family

ID=40200908

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008801288362A Pending CN102203828A (en) 2008-05-16 2008-05-16 Method and device for analyzing video signals generated by a moving camera

Country Status (5)

Country Link
EP (1) EP2289045A1 (en)
CN (1) CN102203828A (en)
AU (1) AU2008356238A1 (en)
IL (1) IL207770A0 (en)
WO (1) WO2009139723A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111381357A (en) * 2018-12-29 2020-07-07 中国科学院深圳先进技术研究院 Image three-dimensional information extraction method, object imaging method, device and system

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101799876B (en) * 2010-04-20 2011-12-14 王巍 Video/audio intelligent analysis management control system
CN101859436B (en) * 2010-06-09 2011-12-14 王巍 Large-amplitude regular movement background intelligent analysis and control system
US10600290B2 (en) * 2016-12-14 2020-03-24 Immersion Corporation Automatic haptic generation based on visual odometry

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004038659A2 (en) * 2002-10-21 2004-05-06 Sarnoff Corporation Method and system for performing surveillance
US20050104964A1 (en) * 2001-10-22 2005-05-19 Bovyrin Alexandr V. Method and apparatus for background segmentation based on motion localization
US20060078162A1 (en) * 2004-10-08 2006-04-13 Dynapel, Systems, Inc. System and method for stabilized single moving camera object tracking
CN101022505A (en) * 2007-03-23 2007-08-22 中国科学院光电技术研究所 Mobile target in complex background automatic testing method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3886769B2 (en) * 2001-10-26 2007-02-28 富士通株式会社 Correction image generation apparatus and correction image generation program
GB0408208D0 (en) * 2004-04-13 2004-05-19 Globaleye Network Intelligence Area monitoring

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050104964A1 (en) * 2001-10-22 2005-05-19 Bovyrin Alexandr V. Method and apparatus for background segmentation based on motion localization
WO2004038659A2 (en) * 2002-10-21 2004-05-06 Sarnoff Corporation Method and system for performing surveillance
US20060078162A1 (en) * 2004-10-08 2006-04-13 Dynapel, Systems, Inc. System and method for stabilized single moving camera object tracking
CN101022505A (en) * 2007-03-23 2007-08-22 中国科学院光电技术研究所 Mobile target in complex background automatic testing method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111381357A (en) * 2018-12-29 2020-07-07 中国科学院深圳先进技术研究院 Image three-dimensional information extraction method, object imaging method, device and system
CN111381357B (en) * 2018-12-29 2021-07-20 中国科学院深圳先进技术研究院 Image three-dimensional information extraction method, object imaging method, device and system

Also Published As

Publication number Publication date
AU2008356238A1 (en) 2009-11-19
WO2009139723A1 (en) 2009-11-19
IL207770A0 (en) 2010-12-30
WO2009139723A8 (en) 2010-01-14
EP2289045A1 (en) 2011-03-02

Similar Documents

Publication Publication Date Title
JP4705090B2 (en) Smoke sensing device and method
EP2959454B1 (en) Method, system and software module for foreground extraction
CN109872341A (en) A kind of throwing object in high sky detection method based on computer vision and system
US20100080477A1 (en) System, computer program product and associated methodology for video motion detection using spatio-temporal slice processing
Lazaridis et al. Abnormal behavior detection in crowded scenes using density heatmaps and optical flow
JPWO2016114134A1 (en) Movement situation estimation apparatus, movement situation estimation method, and program
CN101258512A (en) Method and image evaluation unit for scene analysis
US20120155707A1 (en) Image processing apparatus and method of processing image
KR101472674B1 (en) Method and apparatus for video surveillance based on detecting abnormal behavior using extraction of trajectories from crowd in images
Dhaya CCTV surveillance for unprecedented violence and traffic monitoring
Ciampi et al. Counting Vehicles with Cameras.
KR101030257B1 (en) Method and System for Vision-Based People Counting in CCTV
CN102203828A (en) Method and device for analyzing video signals generated by a moving camera
KR101137110B1 (en) Method and apparatus for surveying objects in moving picture images
CN117132768A (en) License plate and face detection and desensitization method and device, electronic equipment and storage medium
CN114764895A (en) Abnormal behavior detection device and method
CN111753587B (en) Ground falling detection method and device
JP5864230B2 (en) Object detection device
KR20140045834A (en) Method and apparatus for monitoring video for estimating size of single object
Tarkowski et al. Efficient algorithm for blinking LED detection dedicated to embedded systems equipped with high performance cameras
KR20150033047A (en) Method and Apparatus for Preprocessing Image for Detecting Objects
KR102233109B1 (en) Mechanical diagnostic system based on image learning and method for mechanical diagnosis using the same
Ridwan Looming object detection with event-based cameras
CN112784813A (en) Motion recognition data set generation method and device based on image detection
JP5864231B2 (en) Moving direction identification device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110928