US7557835B2 - Method for calibrating at least two video cameras relatively to each other for stereoscopic filming and device therefor - Google Patents

Method for calibrating at least two video cameras relatively to each other for stereoscopic filming and device therefor

Info

Publication number
US7557835B2
Authority
US
United States
Prior art keywords
video
image
point
lines
fact
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US10/565,631
Other versions
US20070008405A1 (en)
Inventor
Jerome Douret
Ryad Benosman
Jean Devars
Salah Bouzar
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Citilog SAS
Original Assignee
Citilog SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Citilog SAS
Assigned to CITILOG. Assignment of assignors' interest (see document for details). Assignors: DOURET, JEROME; BENOSMAN, RYAD; BOUZAR, SALAH; DEVARS, JEAN
Publication of US20070008405A1
Application granted
Publication of US7557835B2
Legal status: Active
Adjusted expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/246 Calibration of cameras
    • H04N 13/296 Synchronisation thereof; Control thereof
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

A method for calibrating at least two video cameras of a stereoscopic device includes: providing, on the lane portion, nine marks of a hue other than that of the lane, sequenced on a first set of three virtual straight lines concurrent at a first point and distributed in a specific manner on a second set of virtual straight lines concurrent at a second point; forming, with each of the cameras, an image of the lane portion; defining, in each of the two images, one characteristic point for each mark image; determining, from the characteristic points, six straight image lines concurrent respectively at two meeting points; and processing the video signals delivered by each video camera such that the signals are representative of two images suitable for forming a stereoscopic video image. The method is useful for determining the occupancy condition of a lane portion and detecting incidents.

Description

The present invention relates to methods for calibrating at least two video cameras relative to each other when the two cameras make up a system for filming in stereo a portion of pathway along which bodies or items of any kind are liable to travel, in particular for the purpose of determining the state of occupation of said pathway portion and for detecting any incidents that might occur on said pathway portion.
This technique of using stereoscopic vision serves to determine a third dimension for the items, i.e. their relief, by lifting ambiguities due to shadows, reflections, etc. that might be found on the items, and can be most advantageous, particularly but not exclusively, in the field of monitoring road traffic.
These methods find a particularly advantageous application in detecting incidents of any kind on portions of motor vehicle roadway, or the like, it being specified that they can also be used for surveillance of portions of pathways of any other type along which any kind of body might move, whether living bodies such as pedestrians or the like walking on sidewalks or the like, or items such as manufactured goods placed on transfer paths, such as conveyor belts, railway lines, or the like.
The present invention also relates to apparatuses serving to implement the methods for calibrating at least two video cameras relative to each other when the two cameras form part of a system for stereoscopically filming a portion of a pathway of any kind.
At present, in order to undertake surveillance of a pathway portion, such as a portion of roadway, use is made of a video camera which films said portion of pathway, optionally continuously. The images that are obtained are processed by a technique that is well known to the person skilled in the art and is referred to as “image analysis”. The initial techniques to be implemented made use essentially of a single camera. Numerous documents, in particular patents, have been published relating thereto, and that technique is indeed still in widespread use.
Nevertheless, in order to refine the surveillance of portions of pathway, apparatuses have been made that comprise at least two video cameras for filming stereoscopically, and that technique is likewise well known in itself.
It is recalled that the technique consists in using at least two cameras that are pointed towards an item for filming, with or without a small angle between their optical axes, just like the two eyes of the optical system of a human being. That technique makes it possible to obtain views that appear to be “in relief” when they are viewed or analyzed using an appropriate technique which, being known in itself, is not described again herein.
In order to obtain a good pair of stereo images, the two cameras must naturally give images that are dimensioned in the same manner in the same frame of reference, i.e. the images must be very similar in terms of dimensions and it must be possible for them to be combined using the stereovision technique in order to facilitate stereoscopic viewing.
With a still camera, it is not very difficult to obtain stereoscopic views, e.g. by using the same objective lens and the same focal plane for taking the two views.
That technique is not easily adaptable to video cameras. Use must be made of two video cameras that are adjusted specifically one relative to the other so as to output images that are very similar to each other in order to obtain the stereoscopic effect, as is well known in itself. When such a device is provided for determining the traffic occupation state of a roadway, the two cameras are calibrated in the factory, e.g. using calibration patterns. Calibration serves to determine the relative positioning of the cameras and also parameters that are intrinsic to each of them. Thereafter, the cameras are placed in a special protective housing serving to lock them in position relative to each other, and including means for tilting each camera generally about two or three orthogonal axes, and possibly also means for adjusting the focal lengths of each of the camera lenses. Once these adjustments have been carried out, they are locked and the housing is transported to the site where it is to be located using the adjustments set in the factory.
It must then be hoped that all of the settings were initially carried out correctly, since if it is necessary to adjust them once the housing is on site, such adjustments can be difficult or even impossible, particularly given the location and/or the situation of the housing containing the cameras relative to the portion of pathway.
In any event, as with any apparatus, it will periodically be necessary to recalibrate the two cameras relative to each other, with the only acceptable solution being to return them to the factory to carry out the new adjustments.
Thus, an object of the present invention is to provide a method of calibrating at least two video cameras relative to each other, when the two cameras constitute apparatus for stereoscopically filming a portion of pathway along which bodies of any kind might travel, in order to carry out surveillance of the state of occupation of said portion of pathway, and in particular in order to detect any incidents that might occur on said portion of pathway, which method is simpler than prior art methods in the same field and can thus be automated easily and can be applied in any location, thus enabling calibration of the two video cameras to be performed on site, and at any time should that be necessary, without it being necessary, for example, to dismantle the housing containing the cameras.
Another object of the present invention is to provide apparatus enabling said method to be implemented.
More precisely, the present invention provides a method of calibrating at least two video cameras relative to each other when said two cameras constitute apparatus for stereoscopically filming a portion of pathway suitable for having any type of body traveling therealong, in order to detect the state of occupation of said portion of pathway, and in particular to detect incidents that might occur on said portion of pathway, the method being characterized in that it consists:
    • in placing a plurality of marks on the surface of the portion of pathway, said marks being distributed substantially:
      • in ordered manner on a first group of first and second geometrical lines D1, D2 meeting at a first point P1; and
      • in such a manner that given points belonging respectively to the marks having the same order relative to the first point P1 on said first and second geometrical lines D1, D2 are situated on a second group of fourth and fifth geometrical lines D4, D5 meeting at a second point P2 that does not coincide with the first point P1;
    • in forming a video image of said portion of pathway including said marks, using each of the two video cameras;
    • in defining a characteristic point Pc for each image of a mark in each of the two video images;
    • in determining first and second image lines D1i, D2i and fourth and fifth image lines D4i, D5i from said characteristic points Pc;
    • in determining a first image meeting point for the first and second image lines D1i, D2i and a second image meeting point for the fourth and fifth image lines D4i, D5i, in each of the video images; and
    • in processing the video signals delivered by each video camera in such a manner that these signals are representative of two images suitable for being processed by stereovision.
According to another characteristic of the method of the present invention, said plurality of marks (M11, M12, M13; M21, M22, M23; M31, M32, M33) is at least nine in number, and it consists additionally in forming, in the first group of lines, a third geometrical line D3, and in the second group of lines, a sixth geometrical line D6, and in determining by approximation, in each of the video images, a first image meeting point (P1i1, P1i2) constituted as being the point at which the first, second, and third image lines D1i, D2i, D3i meet, and a second image meeting point (P2i1, P2i2) considered as being the point at which the fourth, fifth, and sixth image lines D4i, D5i, D6i meet.
The present invention also provides apparatus for implementing the above-defined method, the apparatus being characterized in that it comprises:
    • a plurality of marks situated on the surface of a portion of pathway respectively at the points of intersection between two groups of at least two geometrical lines that meet at a first point P1 and at a second point P2;
    • a support suitable for being installed in direct view of said portion of pathway;
    • at least two video cameras mounted on said support, each camera having an outlet for video signals representative of video images given by the corresponding video camera; and
    • a programmable video signal processor and analysis unit having inlet terminals connected to the outlets of the two video cameras.
The present invention also provides apparatus for implementing the above-defined method, the apparatus being characterized by the fact that it comprises:
    • a plurality of marks situated on the surface of a portion of pathway respectively at the points of intersection between two groups of at least two geometrical lines that meet at a first point P1 and at a second point P2;
    • a support suitable for being installed in direct view of said portion of pathway;
    • at least two video cameras each having a respective outlet for video signals representative of video images given by the corresponding video camera, each camera having a variable focal length lens controllable from a control inlet;
    • controllable means for mounting each of the two video cameras to pivot relative to said support about at least two non-coincident axes, said means being suitable for being controlled from control inlets; and
    • a programmable video signal processor and analysis unit having inlet terminals connected to the outlets of the two video cameras, and outlet terminals connected to the control inlets of the controllable means for mounting each of the two video cameras to pivot relative to said support about at least two non-coincident axes, and to the control inlets of the variable focal length lens of each video camera.
Other characteristics and advantages of the invention appear from the following description given with reference to the accompanying drawings by way of non-limiting illustration, and in which:
FIG. 1 shows the first stage in implementing the method of the invention for calibrating at least two video cameras relative to each other, this stage consisting in applying some minimum number of marks on the portion of pathway that is to be subjected to surveillance, with FIG. 1 showing the marks after they have been applied to the portion of pathway;
FIG. 2 shows the view that ought then to be obtained with an optical camera, such as a video camera, assuming that the camera is perfect in structure and operation and assuming that the marks are accurately in alignment on the portion of pathway, as explained in the description below;
FIG. 3 shows another stage in the method, specifically that which consists in obtaining a “processed” image from a video image obtained by one of the two cameras;
FIG. 4 shows a shape for one of the marks on the portion of pathway, showing a possible state for a mark after it has been subjected to a certain amount of damage over time since being put into place initially in a correct state on said surface of the portion of pathway;
FIG. 5 shows by way of diagrammatic example three stages of the method of the invention in a single view, these stages being amongst the final stages in calibrating at least two video cameras relative to each other; and
FIG. 6 is a theoretical diagram showing one embodiment of apparatus of the invention enabling the method of the invention to be implemented.
In general, when two cameras 1 and 2 form part of stereoscopic filming apparatus 3 for filming a pathway portion 4 along which bodies of any type might travel, in order to detect the occupation state of said portion of pathway, and in particular in order to detect any incidents that might occur on said portion of pathway, the method of the invention for calibrating the two video cameras relative to each other, consists initially in placing a plurality of marks on the surface 5 of the portion of pathway 4, there being at least four marks, these marks differing in appearance from the surface of the portion of pathway 4 and being distributed in a substantially ordered manner on a first group of first and second geometrical lines D1, D2 meeting at a first point P1, and in such a manner that the given points belonging respectively to the marks having the same ordinate relative to the first point P1 on said first and second geometrical lines D1, D2 are situated on a second group of fourth and fifth geometrical lines D4, D5 meeting at a second point P2 that does not coincide with the first point P1.
The method then consists in using each of the two video cameras to form a video image of said portion of pathway that includes the marks, in defining in each of the two video images, a characteristic point Pc for each mark image, in using the characteristic points Pc to determine a pair of first and second image lines D1i, D2i, and a pair of fourth and fifth image lines D4i, D5i, in determining in each video image, a first image meeting point P1i1, P1i2 between the first and second image lines D1i, D2i, and a second image meeting point P2i1, P2i2 between the fourth and fifth image lines D4i, D5i, and in processing the video signals delivered by each video camera in such a manner that the signals are representative of two images suitable for forming a stereoscopic video image.
Nevertheless, it is specified that this method can also apply to apparatus having more than two cameras should that be necessary. The person skilled in the art will have no difficulty in adapting the method described below to a number of cameras greater than two.
The method described above already gives good results, but in order to obtain results that are even more accurate, the method consists firstly, with reference to FIG. 1, in placing on the surface 5 of the portion of pathway 4, which portion is advantageously selected to be plane or relatively plane, a plurality of marks M11, M12, M13; M21, M22, M23; M31, M32, M33 comprising at least nine marks. In general, it is more advantageous for the number of marks to be a multiple of three and equal to not less than nine, so as to make it possible to determine at least three lines in at least two groups of different directions.
For example, when applied to a portion of roadway or the like, it being understood that the ground is gray or even black, the marks may be constituted, for example, by strips that are white or the like, e.g. being stuck to the ground, exactly in the same manner as the white marks that are placed on roads and highways to define traffic lanes, or warning strips, or the like.
According to an important characteristic of the invention, these marks are nevertheless placed on the portion of pathway 4 so as to be distributed substantially in ordered manner on a first group of first, second, and third geometrical lines D1, D2, D3 that meet at a first point P1 and in such a manner that given points Pd11, Pd21, Pd31; Pd12, Pd22, Pd32; Pd13, Pd23, Pd33 belonging respectively to the marks having the same ordinates relative to the first point P1 on said first, second, and third geometrical lines D1, D2, D3 are situated on a second group of fourth, fifth, and sixth geometrical lines D4, D5, D6 that meet at a second point P2 that does not coincide with the first point P1.
It should be understood that “marks” is used to mean any signs, patterns, etc. of any kind which, when associated in groups of at least two, serve to define such lines.
The two points P1 and P2 may be situated at a finite distance away or at infinity. This second option is advantageous since it makes it possible, when performing surveillance on roadways, to make use of marks on the ground in the form of white or yellow lines that are standardized as being rectangular in shape and that are already placed on the roadway, given that in any event they have a common length, a common width, and a common spacing. They can also be selected to be on portions of road that are rectilinear. By way of example, FIG. 1 is a diagrammatic view of a portion of pathway 4 having placed on its surface 5 nine marks at the intersections between the two groups of three geometrical lines each.
However, it is clear that the marks could also be placed specially on a pathway of any kind whatsoever so as to have the two points P1 and P2 located at a finite distance away.
In order to understand the present description, each mark is ordered on the lines D1, D2, D3, i.e. is given an order number counting from the first point P1. For example, the first mark M11 is given the number “1” on the first line D1, and the second mark M12 is given the number “2” on said line D1, and so on, it being specified that the same applies for the marks on the other two lines D2 and D3.
As a result, according to a characteristic of the invention specified above, all of the marks having the same order number on the lines D1, D2, and D3 are situated respectively on the lines D4, D5, and D6 that intersect at the second point P2.
The points Pd11, Pd12, Pd13; Pd21, Pd22, Pd23; Pd31, Pd32, Pd33 given by the marks as defined above can be selected in various ways. For example, when the marks are substantially rectangular in shape, as is the general case on roadways, these given points may either be the points where the diagonals of the marks intersect, or else one of the corners of the rectangles, etc.
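By way of non-limiting illustration, and not as part of the patent text, the diagonal-intersection choice of the given point can be sketched in a few lines of Python; the function name and the corner ordering are assumptions made only for this example.

    # Illustrative only: the "given point" of a roughly rectangular mark taken as
    # the intersection of its two diagonals, computed in homogeneous coordinates.
    def diagonal_intersection(c0, c1, c2, c3):
        """Corners c0..c3 are (x, y) tuples listed in order around the mark;
        the result is the (x, y) point where diagonals (c0, c2) and (c1, c3) meet."""
        def cross(a, b):
            return (a[1] * b[2] - a[2] * b[1],
                    a[2] * b[0] - a[0] * b[2],
                    a[0] * b[1] - a[1] * b[0])
        p0, p1, p2, p3 = [(x, y, 1.0) for (x, y) in (c0, c1, c2, c3)]
        d1 = cross(p0, p2)            # first diagonal as a homogeneous line
        d2 = cross(p1, p3)            # second diagonal
        x, y, w = cross(d1, d2)       # meeting point of the two diagonal lines
        return (x / w, y / w)

    # A 4 m x 0.15 m lane strip: its diagonals cross at its centre.
    print(diagonal_intersection((0, 0), (4, 0), (4, 0.15), (0, 0.15)))  # (2.0, 0.075)
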
Thereafter, the method consists, at any time after the above first stage has been accomplished, in using each of the two video cameras to form a respective still or moving image of the portion of pathway 4 containing the marks M11, M12, M13; M21, M22, M23; M31, M32, M33. Such an image of the pathway is shown by way of example in FIG. 2.
In this view, the images of the geometrical lines D1, D2, D3, and D4, D5, D6 are shown as intersecting at points situated at finite distances away since it is clear that the cameras are disposed in the manner shown in FIG. 6 in direct view of the portion of pathway 4 so that their optical axes are pointing in a direction that is oblique relative to the surface 5 of the portion of pathway 4. By a perspective effect, the object points P1 and P2 situated at infinity as shown in FIG. 1 now correspond to image points P1i and P2i at distances that are finite. As for the rectangular object marks M11, M12, M13; M21, M22, M23; M31, M32, M33 as shown in FIG. 1, they correspond to image marks in the form of arbitrary quadrilaterals M11i, M12i, M13i; M21i, M22i, M23i; M31i, M32i, M33i.
The method then consists in defining, in the video image given by each camera, a characteristic point Pc (FIG. 4) or for the set of images of the marks, characteristic points Pc11, Pc12, Pc13; Pc21, Pc22, Pc23; Pc31, Pc32, Pc33.
It is possible to determine the characteristic point Pc of each mark image in various ways. For example, it is possible to use the intersection of at least two lines interconnecting in respective pairs four non-coinciding points of the image of the mark, for example the diagonals of the quadrilateral constituting the image of the rectangular mark.
Nevertheless, in an advantageous implementation of the method for performing surveillance of a roadway, since marks on the ground M can suffer damage over time such as the damage shown by way of example in FIG. 4, and thus need not continue to remain accurately rectangular in shape, the characteristic point Pc can be defined, for example, as the center of gravity of the color forming the image of the mark, or by the center of gravity of the total area of the image of the mark, etc.
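Purely as an illustrative sketch, outside the patent text, the centre-of-gravity definition of Pc might be computed as follows; the NumPy library and the grey-level threshold suited to white strips on a dark roadway are assumptions of the example.

    # Illustrative only: the characteristic point Pc of one mark image taken as the
    # centre of gravity of its pixels, which tolerates damaged, non-rectangular marks.
    import numpy as np

    def characteristic_point(gray_patch, threshold=200):
        """gray_patch is a 2-D array of grey levels containing one mark image;
        returns the (x, y) centroid of the pixels brighter than the threshold."""
        ys, xs = np.nonzero(gray_patch >= threshold)   # pixels belonging to the mark
        if xs.size == 0:
            raise ValueError("no mark pixels found in this patch")
        return float(xs.mean()), float(ys.mean())
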
Once these characteristic points Pc11, Pc12, Pc13; Pc21, Pc22, Pc23; Pc31, Pc32, Pc33 have been defined, the method then consists in using these characteristic points Pc to determine a triplet of first, second, and third image lines D1i, D2i, D3i and a triplet of fourth, fifth, and sixth image lines D4i, D5i, D6i corresponding so to speak to images of the respective geometrical lines D1, D2, D3 and D4, D5, D6.
However, as shown in FIG. 3, these image lines D1i, D2i, D3i and D4i, D5i, D6i generally do not meet at respective single points since the characteristic points Pc11, Pc12, Pc13; Pc21, Pc22, Pc23; Pc31, Pc32, Pc33 need not be accurately aligned in threes, for example because of uncertainties in image analysis, because of the poor quality of the images of the marks due to the marks being badly damaged, because of atmospheric conditions, etc.
Thus, starting from these two groups of image lines, respectively D1i, D2i, D3i and D4i, D5i, D6i, the method consists in determining by approximation in each video image a first image meeting point P1i1, P1i2 considered as being the point at which the first, second, third image lines D1i, D2i, D3i are assumed to meet, and a second image meeting point P2i1, P2i2 considered as being the point at which the fourth, fifth, and sixth image lines D4i, D5i, D6i meet.
However, in a possible implementation of the method of the invention, the above step consists in repositioning, in the video images, the two groups of three lines each, e.g. D1i, D2i, D3i and D4i, D5i, D6i in such a manner that the lines in each group do indeed intersect at a respective single point, where these meeting points define the image meeting points P1i1, P1i2 and P2i1, P2i2.
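The approximation and repositioning steps can be illustrated, again outside the patent text, by fitting each image line to its characteristic points by total least squares and taking, as the image meeting point, the point minimizing the summed squared distances to the three fitted lines; the numerical values below are invented for the example.

    # Illustrative only: fitting the image lines D1i, D2i, D3i to their characteristic
    # points and estimating, by least squares, the single image meeting point P1i they
    # are assumed to share; the same code applies to D4i, D5i, D6i and P2i.
    import numpy as np

    def fit_line(points):
        """Total-least-squares line a*x + b*y + c = 0 through (x, y) points,
        with (a, b) a unit normal, so near-vertical image lines are handled too."""
        pts = np.asarray(points, dtype=float)
        mean = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - mean)   # last right singular vector = line normal
        a, b = vt[-1]
        return a, b, -(a * mean[0] + b * mean[1])

    def meeting_point(lines):
        """Point minimizing the summed squared distances to the given lines."""
        A = np.array([[a, b] for a, b, _ in lines])
        d = np.array([-c for _, _, c in lines])
        p, *_ = np.linalg.lstsq(A, d, rcond=None)
        return p

    # Invented characteristic points, grouped per image line (Pc11..Pc13 on D1i, etc.).
    triplets = [[(10, 200), (60, 150), (110, 101)],    # D1i
                [(200, 200), (200, 150), (201, 100)],  # D2i
                [(390, 199), (340, 150), (290, 100)]]  # D3i
    p1i = meeting_point([fit_line(t) for t in triplets])
    print(p1i)
    # "Repositioning": each fitted line may then be replaced by the line through p1i
    # and the centroid of its own points, so that the three lines do meet at one point.
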
Thus, the two video cameras deliver respective video signals representative of these video images with the first image meeting points P1i1, P1i2 and the second image meeting points P2i1, P2i2. These video signals are in fact representative of the calibration pattern constituted by the marks M11, M12, M13; M21, M22, M23; M31, M32, M33.
These video signals delivered by each of the cameras can be processed so that, when combined with each other, e.g. on being repositioned, they form two images suitable for forming a single stereoscopic video image, using the technique that is known in this field, as mentioned above.
In a preferred manner, in a first implementation of this last step of the method, the video signals are processed by computer, thereby constituting an implementation that is relatively inexpensive. Such an operation can be performed with a programmable video signal processor unit, e.g. of the microprocessor type, having inlet terminals connected to the outlets 12, 13 of the two video cameras 1, 2, e.g. as shown in FIG. 5.
Preparing such a program for the processor unit comes within the competence of the person skilled in the art, and since it does not form part of the invention, it is not described in detail herein.
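For illustration only, a minimal software stand-in for such a processor unit might simply grab one frame from each camera and superpose them for inspection; the OpenCV library and the device indices 0 and 1, used here as placeholders for the outlets 12 and 13, are assumptions of the example.

    # Illustrative only: grab one frame per camera and write a superposed view.
    import cv2

    cap1, cap2 = cv2.VideoCapture(0), cv2.VideoCapture(1)
    ok1, frame1 = cap1.read()
    ok2, frame2 = cap2.read()
    if ok1 and ok2:
        frame2 = cv2.resize(frame2, (frame1.shape[1], frame1.shape[0]))
        overlay = cv2.addWeighted(frame1, 0.5, frame2, 0.5, 0.0)   # superposed images
        cv2.imwrite("superposed.png", overlay)
    cap1.release()
    cap2.release()
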
Nevertheless, it is possible to implement this last step of the method, not by computer means, but in an electromechanical manner.
This second implementation of the last step of the method is described below since even though it is not the preferred implementation, in that it is relatively expensive given that it requires numerous specific means, it nevertheless makes it possible to explain this last step of the method in even more understandable manner, in particular concerning the above-defined implementation.
In this second implementation of the last step, the method consists in adjusting the two video cameras relative to each other until by repositioning the two video images given by the two video cameras, the first and second image meeting points P1i1, P2i1 of one video image are at a given distance respectively from the first and second image meeting points P1i2, P2i2 of the other video image, which distance can easily be determined by a person skilled in the art in order to obtain a stereoscopic effect.
In some cases, this distance may even be of zero value. For example, when applied to surveillance of a portion of roadway, the items to be filmed stereoscopically are situated between the surface 5 of said portion of pathway 4 and the lenses of the cameras, and as a result shifting the images taken by the two video cameras suffices on its own to obtain the stereoscopic effect.
By way of example, the cameras can be adjusted relative to each other by modifying one or more of the following parameters for each video camera: its elevation, its azimuth and/or its tilt, its optical field of view (e.g. advantageously by adjusting the focal length of the camera lens), and its resolution.
FIG. 5 is a diagram showing an example of how the two cameras can be adjusted as mentioned above. The frame in FIG. 5 may represent the screen 28 of a video monitor 26, as shown diagrammatically in FIG. 6, where there are superposed the two images coming from the two cameras after they have been processed as mentioned above. This frame shows the first pair of points P1i1 and P2i1 as defined by the image given by the first camera 1, and the second pair of points P1i2 and P2i2 defined by the image given by the second camera 2. In this example, the points P1i2 and P2i2 of the second pair (represented by large black dots) firstly do not coincide with the points P1i1 and P2i1 of the first pair, and secondly they are further apart from each other than the distance between the points of the first pair.
Under such circumstances, the two video cameras can be adjusted, for example, as follows: firstly the optical field of the second camera 2 is reduced so as to move the points P1i2 and P2i2 towards each other along arrows f1 until the distance between them is substantially equal to the distance between the points P1i1 and P2i1 (the points P1i2 and P2i2 in this position being represented by small circles); then the second camera is pivoted about a vertical axis so that the same points P1i2 and P2i2 are moved along arrow f2 until they come into register with the points P1i1 and P2i1 (the points P1i2 and P2i2 in this position being represented by “+” signs); and finally, the same second camera is pivoted about a horizontal axis so that the points P1i2 and P2i2 are moved along arrow f3 until they are superposed or substantially superposed on the points P1i1 and P2i1.
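This three-stage adjustment can also be sketched as a simple control loop; the camera-driver interface (zoom, pan and tilt methods taking a signed step), the measure_points() helper and the pixel tolerance are hypothetical and serve only to make the sequence of FIG. 5 explicit.

    # Illustrative only: the adjustment of camera 2 as a control loop. measure_points()
    # is assumed to return ((P1i1, P2i1), (P1i2, P2i2)) computed from fresh images.
    import math

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def adjust_camera2(cam2, measure_points, tol=1.0):
        (p1a, p2a), (p1b, p2b) = measure_points()
        # Stage 1 (arrows f1): shrink the optical field until the point spacings match.
        while dist(p1b, p2b) > dist(p1a, p2a) + tol:
            cam2.zoom(-1)
            (p1a, p2a), (p1b, p2b) = measure_points()
        # Stage 2 (arrow f2): pivot about the vertical axis to cancel the horizontal offset.
        while abs(p1b[0] - p1a[0]) > tol:
            cam2.pan(-1 if p1b[0] > p1a[0] else 1)
            (p1a, p2a), (p1b, p2b) = measure_points()
        # Stage 3 (arrow f3): pivot about the horizontal axis to cancel the vertical offset.
        while abs(p1b[1] - p1a[1]) > tol:
            cam2.tilt(-1 if p1b[1] > p1a[1] else 1)
            (p1a, p2a), (p1b, p2b) = measure_points()
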
It is then certain that the two images given by the two cameras can be used for monitoring the state of occupation of a roadway, for example for motor vehicles, using stereo techniques known in the prior art.
The adjustment or calibration of the cameras is then terminated. In the example described above, only the parameters of the camera 2 are modified. However the same result could be obtained by modifying only the parameters of the camera 1, or by modifying simultaneously parameters of both cameras 1 and 2.
The above-described method is easily implemented with apparatus, one embodiment of which is shown diagrammatically in FIG. 6, the apparatus being controlled by software means that can be prepared by the person skilled in the art aware of the description of the various steps of the method as given above.
In general manner, the apparatus comprises a plurality of marks situated on the surface 5 of a portion of pathway 4 to be monitored, corresponding respectively to the intersections of two groups of at least two geometrical lines each that meet at first and second points P1 and P2, a support 11 suitable for being positioned in direct view of the portion of pathway 4, at least two video cameras 1, 2 mounted on the support, and each having a respective video signal outlet 12, 13 delivering signals representative of video images given by the corresponding video camera, and a programmable video signal processor and analysis unit 25 having inlet terminals connected to the outlets 12, 13 of the two video cameras.
In the embodiment as shown in FIG. 6, the apparatus has nine marks M11, M12, M13; M21, M22, M23; M31, M32, M33 situated on three geometrical lines intersecting at first and second points P1, P2, a support 11 suitable for being installed in direct view of the portion of pathway 4, at least two video cameras 1, 2 each having a respective outlet 12, 13 for video signals representative of video images given by the corresponding video cameras, each camera having a lens of variable focal length 14, 15 controllable via a control inlet 16, 17, controllable means 18, 19, e.g. of the gimbals type, for mounting each of the two video cameras so as to be capable of being pivoted relative to the support 11 about at least two non-coinciding axes, each coupled for example to a corresponding drive motor, these means 18, 19 being suitable for being controlled from control inlets 20, 21, and a programmable video signal processor and analysis unit 25, e.g. of the microprocessor or analogous type, having inlet terminals connected to the outlets 12, 13 of the two video cameras 1, 2 and outlet terminals connected both to the control inlets 20, 21 of the controllable means 18, 19 for mounting each of the two video cameras on the support 11 so as to be capable of pivoting about at least two non-coinciding axes, and to the control inlets 16, 17 of the variable focal length lens 14, 15 of each of the video cameras, said programmable video signal processor and analysis unit 25 having a programming inlet 27 so as to enable the above-mentioned processing and analysis software to be loaded.

Claims (20)

1. A method of calibrating at least two video cameras (1, 2) relative to each other when said two cameras constitute apparatus for stereoscopically filming (3) a portion of pathway (4) suitable for having any type of body traveling therealong, in order to detect the state of occupation of said portion of pathway, and in particular to detect incidents that might occur on said portion of pathway, the method being characterized in that it consists:
in placing a plurality of marks on the surface (5) of the portion of pathway (4), said marks being distributed substantially:
in ordered manner on a first group of first and second geometrical lines D1, D2 meeting at a first point P1; and
in such a manner that given points belonging respectively to the marks having the same order relative to the first point P1 on said first and second geometrical lines D1, D2 are situated on a second group of fourth and fifth geometrical lines D4, D5 meeting at a second point P2 that does not coincide with the first point P1;
in forming a video image of said portion of pathway (4) including said marks, using each of the two video cameras;
in defining a characteristic point Pc for each image of a mark in each of the two video images;
in determining first and second image lines D1i, D2i and fourth and fifth image lines D4i, D5i from said characteristic points Pc;
in determining a first image meeting point for the first and second image lines D1i, D2i and a second image meeting point for the fourth and fifth image lines D4i, D5i in each of the video images; and
in processing the video signals delivered by each video camera in such a manner that these signals are representative of two images suitable for being processed by stereovision.
2. A method according to claim 1, characterized by the fact that said plurality of marks (M11, M12, M13; M21, M22, M23; M31, M32, M33) is at least nine in number, and that it consists additionally in forming, in the first group of lines, a third geometrical line D3, and in the second group of lines, a sixth geometrical line D6, and in determining by approximation, in each of the video images, a first image meeting point (P1i1, P1i2) constituted as being the point at which the first, second, and third image lines D1i, D2i, D3i meet, and a second image meeting point (P2i1, P2i2) considered as being the point at which the fourth, fifth, and sixth image lines D4i, D5i, D6i meet.
3. A method according to claim 1, characterized by the fact that the processing of the video signals delivered by each of the video cameras so that the signals are representative of two images suitable for forming a stereoscopic video image is performed by computer means.
4. A method according to claim 1, characterized by the fact that the processing of the video signals delivered by each of the video cameras so that the signals are representative of two images suitable for forming a stereoscopic video image consists in adjusting the two video cameras relative to each other until, by substantially superposing the two video images given by said two video cameras, the first and second image meeting points (P1i1, P2i1) of one video image are at a determined distance from the first and second image meeting points (P1i2, P2i2) of the other video image, in order to obtain a stereoscopic effect.
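By way of example only, the convergence test this claim relies on can be expressed as comparing the offsets between corresponding meeting points of the two (notionally superposed) video images against a chosen target distance; the coordinates and target value below are invented.

```python
import numpy as np

def meeting_point_offsets(p1_img1, p2_img1, p1_img2, p2_img2):
    """Pixel offsets between corresponding image meeting points of the
    two superposed video images."""
    return (np.linalg.norm(np.subtract(p1_img1, p1_img2)),
            np.linalg.norm(np.subtract(p2_img1, p2_img2)))

# invented coordinates for (P1i1, P2i1) and (P1i2, P2i2)
d1, d2 = meeting_point_offsets((412.0, 88.5), (120.3, 95.1),
                               (418.7, 90.2), (127.9, 96.8))
target_px = 7.0        # assumed distance giving the desired stereoscopic effect
calibrated = abs(d1 - target_px) < 1.0 and abs(d2 - target_px) < 1.0
```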
5. A method according to claim 2, characterized by the fact that it consists in defining the first, second, and third geometrical lines D1, D2, D3 in such a manner that the first point P1 is situated at infinity.
6. A method according to claim 2, characterized by the fact that it consists in defining the fourth, fifth, and sixth geometrical lines D4, D5, D6 in such a manner that the second point P2 is situated at infinity.
7. A method according to claim 2, characterized by the fact that it consists in repositioning, in the video images, the two groups of three lines each, firstly D1i, D2i, D3i and secondly D4i, D5i, D6i, in such a manner that the lines of each group intersect at a single point, said meeting points determining said image meeting points (P1i1, P1i2) and (P2i1, P2i2).
8. A method according to claim 2, characterized by the fact that it consists in defining said marks in such a manner that they are substantially identical to one another.
9. A method according to claim 7, characterized by the fact that it consists in distributing said marks (M11, M12, M13; M21, M22, M23; M31, M32, M33) in such a manner that they are situated on at least one of the first and second groups of geometrical lines D1, D2, D3 and D4, D5, D6 at equal distances from one another.
10. A method according to claim 4, characterized by the fact that it consists in adjusting each video camera (1, 2) by modifying at least one of the following of its parameters: its elevation, its azimuth, its optical field of view, its resolution.
11. A method according to claim 1, characterized by the fact that it consists in determining the characteristic point Pc of each mark image by using at least one of the following parameters: the intersection of at least two lines interconnecting four non-coincident points of the mark image respectively in pairs, the center of gravity of the tone of the mark image, the center of gravity of the total area of the mark image.
12. A method according to claim 1, characterized by the fact that it consists, when said marks are substantially rectangular in shape, in determining the given point (Pd11, Pd12, Pd13; Pd21, Pd22, Pd23; Pd31, Pd32, Pd33) by at least one of the following points: the point of intersection of the two diagonals of the rectangle of each mark, one of the vertices of the rectangle.
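Claims 11 and 12 describe alternative ways of picking a point in a mark image (the characteristic point Pc, or the given point of a substantially rectangular mark); a minimal illustrative Python sketch of the centre-of-gravity and diagonal-intersection variants follows, with the binary mask and corner coordinates invented for the example.

```python
import numpy as np

def centre_of_gravity(mark_mask):
    """Characteristic point Pc as the centre of gravity of a mark image;
    mark_mask is a binary array whose non-zero pixels belong to the mark."""
    ys, xs = np.nonzero(mark_mask)
    return float(xs.mean()), float(ys.mean())

def diagonal_intersection(corners):
    """Point of a roughly rectangular mark taken as the intersection of its
    two diagonals; corners are given in order around the rectangle."""
    a, b, c, d = [np.append(np.asarray(p, dtype=float), 1.0) for p in corners]
    p = np.cross(np.cross(a, c), np.cross(b, d))    # homogeneous intersection of the diagonals
    return p[:2] / p[2]

# invented binary mark image
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:5, 3:7] = 1
pc = centre_of_gravity(mask)                        # (4.5, 3.0)

# invented corner coordinates of one roughly rectangular mark image
pd = diagonal_intersection([(120.0, 310.0), (168.0, 312.0),
                            (166.0, 341.0), (118.0, 339.0)])
```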
13. A device implementing the method according to claim 3, the device being characterized by the fact that it comprises:
a plurality of marks (M11, M12, M13; M21, M22, M23; M31, M32, M33) situated on the surface (5) of a portion of pathway (4) respectively at the points of intersection between two groups of at least two geometrical lines that meet at a first point P1 and at a second point P2;
a support (11) suitable for being installed in direct view of said portion of pathway;
at least two video cameras (1, 2) mounted on said support, each camera having an outlet (12, 13) for video signals representative of video images given by the corresponding video camera; and
a programmable video signal processor and analysis unit (25) having inlet terminals connected to the outlets (12, 13) of the two video cameras.
14. Apparatus for implementing the method according to claim 4, the apparatus being characterized by the fact that it comprises:
a plurality of marks (M11, M12, M13; M21, M22, M23; M31, M32, M33) situated on the surface (5) of a portion of pathway (4) respectively at the points of intersection between two groups of at least two geometrical lines that meet at a first point P1 and at a second point P2;
a support (11) suitable for being installed in direct view of said portion of pathway (4);
at least two video cameras (1, 2) each having a respective outlet (12, 13) for video signals representative of video images given by the corresponding video camera, each camera having a variable focal length lens (14, 15) controllable from a control inlet (16, 17);
controllable means (18, 19) for mounting each of the two video cameras to pivot relative to said support (11) about at least two non-coincident axes, said means being suitable for being controlled from control inlets (20, 21); and
a programmable video signal processor and analysis unit (25) having inlet terminals connected to the outlets (12, 13) of the two video cameras (1, 2), and outlet terminals connected to the control inlets (20, 21) of the controllable means (18, 19) for mounting each of the two video cameras to pivot relative to said support (11) about at least two non-coincident axes, and to the control inlets (16, 17) of the variable focal length lens (14, 15) of each video camera.
15. A method according to claim 2, characterized by the fact that the processing of the video signals delivered by each of the video cameras so that the signals are representative of two images suitable for forming a stereoscopic video image is performed by computer means.
16. A method according to claim 2, characterized by the fact that the processing of the video signals delivered by each of the video cameras so that the signals are representative of two images suitable for forming a stereoscopic video image consists in adjusting the two video cameras relative to each other until, by substantially superposing the two video images given by said two video cameras, the first and second image meeting points (P1i1, P2i1) of one video image are at a determined distance from the first and second image meeting points (P1i2, P2i2) of the other video image, in order to obtain a stereoscopic effect.
17. A method according to claim 3, characterized by the fact that it consists in defining the first, second, and third geometrical lines D1, D2, D3 in such a manner that the first point P1 is situated at infinity.
18. A method according to claim 4, characterized by the fact that it consists in defining the first, second, and third geometrical lines D1, D2, D3 in such a manner that the first point P1 is situated at infinity.
19. A method according to claim 3, characterized by the fact that it consists in defining the fourth, fifth, and sixth geometrical lines D4, D5, D6 in such a manner that the second point P2 is situated at infinity.
20. A method according to claim 4, characterized by the fact that it consists in defining the fourth, fifth, and sixth geometrical lines D4, D5, D6 in such a manner that the second point P2 is situated at infinity.
US10/565,631 2003-07-28 2004-07-09 Method for calibrating at least two video cameras relatively to each other for stereoscopic filming and device therefor Active 2025-10-29 US7557835B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
FR0309208A FR2858509B1 (en) 2003-07-28 2003-07-28 METHOD FOR CALIBRATING AT LEAST TWO VIDEO CAMERAS RELATIVE TO THE OTHER FOR STEREOSCOPIC VIEWING AND DEVICE FOR PERFORMING THE METHOD
FR0309208 2003-07-28
PCT/FR2004/001824 WO2005022929A1 (en) 2003-07-28 2004-07-09 Method for calibrating at least two video cameras relatively to each other for stereoscopic filming and device therefor

Publications (2)

Publication Number Publication Date
US20070008405A1 US20070008405A1 (en) 2007-01-11
US7557835B2 true US7557835B2 (en) 2009-07-07

Family

ID=34043573

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/565,631 Active 2025-10-29 US7557835B2 (en) 2003-07-28 2004-07-09 Method for calibrating at least two video cameras relatively to each other for stereoscopic filming and device therefor

Country Status (5)

Country Link
US (1) US7557835B2 (en)
EP (1) EP1649699B1 (en)
KR (1) KR20060065657A (en)
FR (1) FR2858509B1 (en)
WO (1) WO2005022929A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9906838B2 (en) 2010-07-12 2018-02-27 Time Warner Cable Enterprises Llc Apparatus and methods for content delivery and message exchange across multiple content delivery networks
US9987743B2 (en) 2014-03-13 2018-06-05 Brain Corporation Trainable modular robotic apparatus and methods
US9533413B2 (en) 2014-03-13 2017-01-03 Brain Corporation Trainable modular robotic apparatus and methods
US9840003B2 (en) 2015-06-24 2017-12-12 Brain Corporation Apparatus and methods for safe navigation of robotic devices
FR3052278B1 (en) * 2016-06-07 2022-11-25 Morpho METHOD FOR SELF-CALIBRATION OF A NETWORK OF CAMERAS AND CORRESPONDING INSTALLATION
CN111416973A (en) * 2019-01-08 2020-07-14 三赢科技(深圳)有限公司 Three-dimensional sensing device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3436085B2 (en) 1997-07-04 2003-08-11 エヌオーケー株式会社 Tetrafluoroethylene resin composition
JP3958115B2 (en) 2002-05-28 2007-08-15 ナイルス株式会社 Rotation detector

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5757674A (en) * 1996-02-26 1998-05-26 Nec Corporation Three-dimensional position detecting apparatus
US5905568A (en) * 1997-12-15 1999-05-18 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Stereo imaging velocimetry
EP1089054A2 (en) 1999-09-22 2001-04-04 Fuji Jukogyo Kabushiki Kaisha Camera mounting and alignment arrangement
WO2002050770A1 (en) 2000-12-21 2002-06-27 Robert Bosch Gmbh Method and device for compensating for the maladjustment of an image producing device
US7103212B2 (en) * 2002-11-22 2006-09-05 Strider Labs, Inc. Acquisition of three-dimensional images by an active stereo technique using locally unique patterns

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110285856A1 (en) * 2010-05-24 2011-11-24 Kia Motors Corporation Image correction method for camera system
US8368761B2 (en) * 2010-05-24 2013-02-05 Hyundai Motor Company Image correction method for camera system
US9519810B2 (en) 2012-07-31 2016-12-13 Datalogic ADC, Inc. Calibration and self-test in automated data reading systems
US9445080B2 (en) 2012-10-30 2016-09-13 Industrial Technology Research Institute Stereo camera apparatus, self-calibration apparatus and calibration method
US9268979B2 (en) 2013-09-09 2016-02-23 Datalogic ADC, Inc. System and method for aiming and calibrating a data reader
EP3128482A1 (en) * 2015-08-07 2017-02-08 Xovis AG Method for calibration of a stereo camera
WO2017025214A1 (en) * 2015-08-07 2017-02-16 Xovis Ag Method for calibration of a stereo camera
US20190019309A1 (en) * 2015-08-07 2019-01-17 Xovis Ag Method for calibration of a stereo camera
US10679380B2 (en) * 2015-08-07 2020-06-09 Xovis Ag Method for calibration of a stereo camera

Also Published As

Publication number Publication date
EP1649699A1 (en) 2006-04-26
WO2005022929A1 (en) 2005-03-10
FR2858509A1 (en) 2005-02-04
FR2858509B1 (en) 2005-10-14
KR20060065657A (en) 2006-06-14
EP1649699B1 (en) 2013-06-05
US20070008405A1 (en) 2007-01-11

Similar Documents

Publication Publication Date Title
US7557835B2 (en) Method for calibrating at least two video cameras relatively to each other for stereoscopic filming and device therefor
US7961216B2 (en) Real-time composite image comparator
EP1383342A2 (en) Method and apparatus for aligning a stereoscopic camera
US8340356B2 (en) Method for producing a known fixed spatial relationship between a laser scanner and a digital camera for traffic monitoring
US20140354828A1 (en) System and method for processing multicamera array images
CA2534978A1 (en) Retinal array compound camera system
US7136059B2 (en) Method and system for improving situational awareness of command and control units
US6195455B1 (en) Imaging device orientation information through analysis of test images
DE102015122842A1 Calibration plate for calibrating a 3D measuring device and method therefor
WO2003021187A2 (en) Digital imaging system for airborne applications
US11776158B2 (en) Detecting target objects in a 3D space
CN113869231B (en) Method and equipment for acquiring real-time image information of target object
WO2021150689A1 (en) System and methods for calibrating cameras with a fixed focal point
KR950001578B1 (en) Method and apparatus for 3-dimensional stereo vision
EP3745718B1 (en) Method of controlling pan-tilt-zoom camera by using fisheye camera and monitoring system
JP2001328600A (en) Landing point searching device, flying object using therewith and landing point evaluating device
CN116027283A (en) Method and device for automatic calibration of a road side sensing unit
TWI645372B (en) Image calibration system and image calibration method
NL2016718B1 (en) A method for improving position information associated with a collection of images.
RU2816541C2 (en) Machine stereo vision method
KR102044639B1 (en) Method and apparatus for aligning stereo cameras
Subedi et al. An extended method of multiple-camera calibration for 3D vehicle tracking at intersections
US20230386084A1 (en) Apparatus for calibrating a three-dimensional position of a centre of an entrance pupil of a camera, calibration method therefor, and system for determining relative positions of centres of entrance pupils of at least two cameras mounted on a common supporting frame to each other, and determination method therefor
AU2013201818A1 (en) Method for verifying the alignment of a traffic monitoring device
JPH0727514A (en) Calibrating method for image measuring device

Legal Events

Date Code Title Description
AS Assignment

Owner name: CITILOG, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BENOSMAN, RYAD;BOUZAR, SALAH;DEVARS, JEAN;AND OTHERS;REEL/FRAME:018270/0506;SIGNING DATES FROM 20060116 TO 20060719

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 8

SULP Surcharge for late payment
MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12