CN103975221A - Coordinate conversion table creation system and coordinate conversion table creation method - Google Patents


Info

Publication number
CN103975221A
CN103975221A (application CN201280060296.5A / CN201280060296A)
Authority
CN
China
Prior art keywords
image
world
vehicle
information
coordinate transform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201280060296.5A
Other languages
Chinese (zh)
Other versions
CN103975221B (en)
Inventor
福本刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp filed Critical NEC Corp
Publication of CN103975221A publication Critical patent/CN103975221A/en
Application granted granted Critical
Publication of CN103975221B publication Critical patent/CN103975221B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/01 - Detecting movement of traffic to be counted or controlled
    • G08G 1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B 11/03 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring coordinates of points
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B 11/04 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness specially adapted for measuring length or width of objects while moving
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformation in the plane of the image

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

Provided is a coordinate conversion table creation system that creates a coordinate conversion table between an image coordinate system set on a photographed image and a world coordinate system set on an unmoving object. The system comprises: an image-based information acquisition unit that photographs a traveling vehicle, obtains the vehicle position in the image coordinates set on the photographed image, and outputs the obtained position as image-based information; a world-based information acquisition unit that obtains the world-based position of the vehicle in the world coordinates and outputs the obtained position as world-based information; and a coordinate conversion information creation unit that creates the coordinate conversion table between the image coordinates and the world coordinates on the basis of the image-based information and the world-based information.

Description

Coordinate conversion table creation system and coordinate conversion table creation method
Technical field
The present invention relates to a coordinate conversion table creation system and a coordinate conversion table creation method.
Background art
In recent years, a safe driving support system has been proposed in which a camera device installed at the roadside photographs vehicles and detects their positions and speeds on the basis of the photographed images, in order to measure the traffic volume and to prevent collisions at curves or crossroads with poor visibility.
In the safe driving support system, the TTC (time to collision) is estimated on the basis of the vehicle position and speed obtained from the photographed images. On the basis of the estimation result, the driver's attention is aroused or the braking of the vehicle is controlled. Accordingly, it is necessary to obtain the position and speed of the vehicle with excellent accuracy.
To obtain the position and speed of a vehicle from a photographed image with excellent accuracy, it is necessary to convert the vehicle position in the coordinates set on the photographed image (hereinafter called image coordinates) into the vehicle position in the coordinates set on the real space such as the road (hereinafter called world coordinates). That is, it is necessary to perform a coordinate conversion between the image coordinates and the world coordinates.
Therefore, when a new camera device such as a surveillance camera for monitoring vehicles is installed, a coordinate conversion table for performing the conversion between the image coordinates and the world coordinates is created.
As a technique for creating a coordinate conversion table, Japanese Patent Application Publication No. 2010-236891 describes the following method. A vehicle carrying a target object is photographed by a roadside camera (camera device), and the position of the target object in image coordinates is found from the photographed image. At the same time, the position of the target object in world coordinates is found by using GPS (Global Positioning System). The conversion table is then created by comparing and associating the position of the target object in world coordinates with its position in image coordinates.
Summary of the invention
(Problem to Be Solved by the Invention)
However, the technique described in Japanese Patent Application Publication No. 2010-236891 has the problem that a coordinate conversion table cannot be created when the roadside camera is installed in a space where GPS is unavailable, such as a tunnel.
A primary object of the present invention is to provide a coordinate conversion table creation system and a coordinate conversion table creation method with which the coordinate conversion table between image coordinates and world coordinates can be obtained with excellent accuracy even in an environment where GPS is unavailable, such as a tunnel.
(Means for Solving the Problem)
To solve this problem, a coordinate conversion table creation system comprises: an image-based information acquisition unit that photographs a traveling vehicle, obtains the image-based vehicle position in the image coordinates set on the photographed image, and outputs the obtained image-based vehicle position as image-based information; a world-based information acquisition unit that obtains the world-based vehicle position of the vehicle in the world coordinates and outputs the obtained world-based vehicle position as world-based information; and a coordinate conversion information creation unit that creates the coordinate conversion table between the image coordinates and the world coordinates on the basis of the image-based information and the world-based information.
In addition, a coordinate conversion table creation method comprises: an image-based information acquisition process of photographing a traveling vehicle, obtaining the image-based vehicle position in the image coordinates set on the photographed image, and outputting the obtained image-based vehicle position as image-based information; a world-based information acquisition process of obtaining the world-based vehicle position of the vehicle in the world coordinates and outputting the obtained world-based vehicle position as world-based information; and a coordinate conversion information creation process of creating the coordinate conversion table between the image coordinates and the world coordinates on the basis of the image-based information and the world-based information.
(Advantageous Effects of the Invention)
According to the present invention, the coordinate conversion table between image coordinates and world coordinates can be obtained with excellent accuracy even in an environment where GPS is unavailable, such as a tunnel section.
Brief description of the drawings
Fig. 1 is a schematic diagram showing a coordinate conversion table creation system according to the present invention;
Fig. 2 is a block diagram showing the roadside camera device and the in-vehicle device;
Fig. 3 is a flowchart showing the process of creating the coordinate conversion table;
Fig. 4A is a roadside photographed image taken when the vehicle is passing the judgement line; and
Fig. 4B is a roadside photographed image taken after the vehicle has passed the judgement line.
Embodiment
Hereinafter, an exemplary embodiment of the present invention will be described. Fig. 1 is a schematic diagram showing a coordinate conversion table creation system 2 according to the present invention. The coordinate conversion table creation system 2 comprises a roadside camera device 10 installed at the roadside and an in-vehicle device 20 mounted on a vehicle 30. Fig. 2 is a block diagram showing the roadside camera device 10 and the in-vehicle device 20.
The roadside camera device 10 comprises a roadside camera 11, a vehicle detection unit 12, an image-based vehicle detection unit 13, and a coordinate conversion table creation unit 14. Of these, the roadside camera 11, the vehicle detection unit 12, and the image-based vehicle detection unit 13 are included in an image-based information acquisition unit 3, and the coordinate conversion table creation unit 14 is included in a coordinate conversion information creation unit 4.
The roadside camera 11 is, for example, a camera such as a road surveillance camera installed at the roadside. The roadside camera 11 photographs the traveling vehicle 30 and outputs the photographed image to the vehicle detection unit 12 and the image-based vehicle detection unit 13 as a roadside photographed image. Here, a lane boundary line K (see Fig. 1) drawn as a broken line on the road is assumed.
The lane boundary line K serves as an unmoving object (hereinafter called a reference object) for the roadside camera device 10 and the in-vehicle device 20. The reference object is not limited to a lane boundary line; it may be, for example, a reflector plate placed on the road. By using the reference object K, reference points for the coordinate conversion between image coordinates and world coordinates are found.
In addition, the roadside camera 11 photographs the tail (rear part) of the vehicle moving away from the roadside camera 11. The arrows in Fig. 1 and Fig. 4 indicate the traveling direction of the vehicle 30.
The vehicle detection unit 12 extracts the vehicle 30 from the roadside photographed image and judges whether the extracted vehicle 30 is present in a predefined measurement region. The judgement result is transmitted to the world-based vehicle detection unit 22 as region judgement information by using a wireless device such as a radio beacon. Here, the measurement region is the range used for detecting the position of the vehicle 30. When the vehicle appears small in the roadside photographed image (when it is photographed at a distant point), the accuracy of the vehicle position decreases; the measurement region is therefore set in advance in consideration of the resolution of the roadside camera 11 and the like.
In addition, the vehicle detection unit 12 judges whether the extracted vehicle 30 is close to the reference object K. When the vehicle 30 is close to the reference object K, the vehicle detection unit 12 transmits vehicle approach information indicating this fact to the world-based vehicle detection unit 22.
As shown in Fig. 1, the lane boundary line K is composed of a plurality of intermittently drawn white lines K1, each having a predetermined length. The judgement of whether the vehicle 30 is approaching therefore depends on which white line K1's position corresponds to the reference point. The end point K2 (K2_i, where i is a positive integer) of a white line K1 is taken as the reference point for judging whether the vehicle 30 is approaching. Since there are a plurality of white lines K1, there are a plurality of end points K2, and the approach judgement is carried out for each end point K2.
The image-based vehicle detection unit 13 obtains the position of a light source unit 23 from the roadside photographed image. Since the light source unit 23 is provided in the in-vehicle device 20, the position of the light source unit 23 corresponds to the position of the vehicle. The vehicle position obtained by the image-based vehicle detection unit 13 is found from the roadside photographed image and is therefore expressed in image coordinates.
The image-based vehicle detection unit 13 defines the obtained vehicle position as the image-based vehicle position and the time at which the position was obtained as the image-based position acquisition time, and outputs these two pieces of information to the coordinate conversion table creation unit 14 as image-based information. The image-based position acquisition time is measured by a timer (not shown) provided in the roadside camera 11, the image-based vehicle detection unit 13, or the like.
The coordinate conversion table creation unit 14 creates the coordinate conversion table between image coordinates and world coordinates on the basis of the image-based information received from the image-based vehicle detection unit 13 and the world-based information received from the world-based vehicle detection unit 22 described later.
Next, the configuration of the in-vehicle device 20 will be described. The in-vehicle device 20 is included in a world-based information acquisition unit 5. The in-vehicle device 20 comprises an in-vehicle camera 21, the world-based vehicle detection unit 22, and the light source unit 23, and is mounted on the vehicle 30.
The in-vehicle camera 21 photographs the reference object K. The world-based vehicle detection unit 22 detects the end point K2 of the reference object K on the basis of the image photographed by the in-vehicle camera 21 (the in-vehicle photographed image) and the vehicle approach information received from the vehicle detection unit 12, and then obtains the position of the vehicle relative to the position of the end point K2 as the world-based vehicle position. The time at which the world-based vehicle position is obtained is taken as the world-based position acquisition time. The world-based vehicle position and the world-based position acquisition time are transmitted to the coordinate conversion table creation unit 14 as world-based information. The world-based position acquisition time is measured by a timer (not shown) provided in the in-vehicle camera 21 or the world-based vehicle detection unit 22.
In addition, when the world-based vehicle detection unit 22 obtains the world-based information, it outputs a trigger signal to the light source unit 23. On receiving the trigger signal from the world-based vehicle detection unit 22, the light source unit 23, which comprises a light source such as an LED, switches its lamp on and off once.
Next, the process of creating a coordinate conversion table by using the coordinate conversion table creation system 2 described above will be described with reference to the flowchart shown in Fig. 3. For convenience of description, the roadside camera is assumed to be installed inside a tunnel; however, the exemplary embodiment is not limited to use in such an environment.
Step SA1: The vehicle 30 travels on the road in the tunnel. The roadside camera 11 of the roadside camera device 10 photographs the road, so that the vehicle 30 entering the photographing area is photographed.
Step SA2: The vehicle detection unit 12 detects the vehicle 30 by applying predetermined image processing to the roadside photographed image taken by the roadside camera 11, and obtains the position of the vehicle 30. The position obtained here is a position in image coordinates.
As a method for detecting the vehicle 30, the following can be given as an example. An image containing no vehicle 30 is obtained in advance as a background image, and the difference between the roadside photographed image taken by the roadside camera 11 and the background image is found. By finding this difference, the vehicle 30 can be extracted. The vehicle position is calculated from the origin of the predefined image coordinates; the origin may be defined as a point set in the photographing area (for example, a corner of the roadside photographed image). As described later, this position is used for judging whether the vehicle 30 is present in the measurement region and whether the vehicle 30 is approaching. The exemplary embodiment is not limited to background difference processing; known methods such as pattern matching are also feasible.
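The background-difference extraction described above can be sketched as follows. This is not part of the patent; it is a minimal illustration assuming grayscale frames stored as nested lists of luminance values, with the vehicle reported as the bounding box of the changed pixels.

```python
def extract_vehicle_region(background, frame, threshold=30):
    """Return the bounding box (min_row, min_col, max_row, max_col) of pixels
    whose absolute difference from the background exceeds the threshold,
    or None when no pixel differs enough (no vehicle present)."""
    rows, cols = [], []
    for r, (bg_row, fr_row) in enumerate(zip(background, frame)):
        for c, (bg, fr) in enumerate(zip(bg_row, fr_row)):
            if abs(fr - bg) > threshold:
                rows.append(r)
                cols.append(c)
    if not rows:
        return None
    return (min(rows), min(cols), max(rows), max(cols))
```

A corner of the bounding box (or its centroid) can then serve as the vehicle position measured from the image-coordinate origin.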
Step SA3: The vehicle detection unit 12 then judges whether the obtained vehicle position is within the predefined measurement region, and transmits the judgement result to the world-based vehicle detection unit 22 as region judgement information.
Step SA4: When the vehicle detection unit 12 judges that the vehicle 30 is present in the measurement region, the vehicle detection unit 12 judges whether the vehicle 30 is approaching by comparing the position of the vehicle 30 in the roadside photographed image with the positions of the end points K2 of the reference object K, according to the process described later.
For example, Fig. 1 shows a state in which the vehicle 30 is moving away from the end point K2_4 but approaching the end point K2_5. Since the roadside camera 11 is fixed at a position that does not move relative to the road, the roadside camera 11 also does not move relative to the end points K2. Therefore, if the image-coordinate positions of the end points K2 present in the measurement region are obtained in advance, it can be judged whether the vehicle 30 is approaching each end point K2.
The vehicle detection unit 12 selects the nearest end point K2 among the plurality of end points K2 and judges whether the vehicle 30 is approaching that nearest end point. For example, Fig. 1 shows a state in which the in-vehicle device 20 is near the end points K2_4 to K2_7, and the nearest of these is the end point K2_4; the vehicle detection unit 12 therefore judges whether the vehicle 30 is approaching K2_4. When the vehicle detection unit 12 judges that the vehicle 30 is approaching, the process proceeds to step SA5; when it judges that the vehicle 30 is not approaching, the process returns to step SA2. The case where the vehicle detection unit 12 judges that the vehicle 30 is not near any end point K2 means that the vehicle 30 has moved out of the measurement region.
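The nearest-end-point selection in step SA4 can be illustrated as follows. The function names, the endpoint list format, and the distance-based approach criterion are assumptions made for illustration; the patent itself judges approach by the vehicle passing a judgement line drawn through the end point.

```python
import math

def nearest_endpoint(vehicle_pos, endpoints):
    """endpoints: list of (name, (x, y)) entries in image coordinates.
    Return the name of the end point closest to the vehicle position."""
    return min(endpoints, key=lambda e: math.dist(vehicle_pos, e[1]))[0]

def is_approaching(vehicle_pos, endpoint_pos, radius):
    """Judge 'approaching' when the vehicle lies within `radius` pixels
    of the end point (a simplified stand-in for the judgement line)."""
    return math.dist(vehicle_pos, endpoint_pos) <= radius
```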
The judgement of whether the vehicle 30 is approaching is carried out as follows. The reference object K appears in the roadside photographed image as white-line shapes. The vehicle detection unit 12 obtains the reference object K, that is, the plurality of white lines K1, by applying to the roadside photographed image a process of extracting edges and a process of extracting high-brightness parts.
Each of Fig. 4A and 4B shows the reference object K obtained as described above and the vehicle 30 obtained in step SA2. The broken line L shown in the figures (the judgement line) is a line passing through an end point K2 of the reference object K and perpendicular to the reference object K. The mark X indicates the position of the in-vehicle device 20 on the vehicle 30. Fig. 4A is the roadside photographed image taken at time t, just as the vehicle 30 passes the judgement line L, and Fig. 4B is the roadside photographed image taken at time t + δ (δ > 0), after the vehicle 30 has passed the judgement line L. As described above, when the vehicle 30 passes the judgement line L, the vehicle 30 is judged to be approaching. Setting the judgement line L corresponds to designating the end point K2_4 among the end points K2_4 to K2_7 that the vehicle 30 is approaching.
Step SA5: When it is judged that the vehicle 30 is approaching, the vehicle detection unit 12 transmits the vehicle approach information to the world-based vehicle detection unit 22.
The information is transmitted by using a wireless LAN or the like, but another transmission method may be used as long as the time required for transmission is short, that is, short enough that the communication delay causes no problem from the viewpoint of the required accuracy.
When the information is transmitted by a packet method, the world-based vehicle detection unit 22 carries out processing so that it can identify the packet containing the vehicle approach information. For example, a specific bit of the packet data is defined as a flag, and the packet is sent with the flag set to indicate that the packet carries vehicle approach information. Obviously, the exemplary embodiment is not limited to this method.
Step SA6: On the other hand, when the vehicle detection unit 12 judges that the vehicle 30 is no longer present in the measurement region, the image-based vehicle detection unit 13 selects, from among the roadside photographed images, those taken at the moments when the light source unit 23 switched. The image-based vehicle detection unit 13 then obtains the position of the vehicle (the image-based vehicle position) from each selected roadside photographed image, and further obtains the photographing time as the image-based position acquisition time. As described above, the image-based vehicle detection unit 13 obtains the image-coordinate position and the image-based position acquisition time, and outputs these two pieces of information to the coordinate conversion table creation unit 14 as image-based information. The acquisition of image-based position information is carried out repeatedly; the number of repetitions corresponds to the number of times the light source unit 23 switched.
Whether a roadside photographed image was taken at a moment when the light source unit 23 switched can be judged on the basis of the brightness of the region of the roadside photographed image containing the light source unit 23. That is, the contour of the vehicle 30 is found by applying a contour extraction process to the roadside photographed image. Since the position of the light source unit 23 relative to the contour of the vehicle 30 is known in advance, the region of the light source unit 23 can be designated by finding the contour of the vehicle 30. Then, when the brightness of the designated region is not weaker than a predetermined brightness, the image-based vehicle detection unit 13 judges that the roadside photographed image was taken at a moment when the light source unit 23 was on.
The predetermined brightness is set appropriately according to the environment. For example, when the coordinate conversion table creation system 2 is installed in an environment such as a tunnel, the vehicle 30 turns on its headlights or taillights. The predetermined brightness is therefore set somewhat high, to prevent these lamps from causing an erroneous judgement when the light source unit 23 has not changed its brightness. In this way, the image-based vehicle detection unit 13 judges whether a roadside photographed image was taken at a moment when the light source unit 23 was on.
Of course, the exemplary embodiment is not limited to this method. For example, when a region whose brightness is not weaker than the predetermined brightness is found in the roadside photographed image, it may be judged that the brightness of that region is due to the light source unit 23 being on. In this case, since the process of finding the contour of the vehicle 30 is unnecessary, the processing speed becomes higher.
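The brightness judgement of step SA6 can be sketched as follows, assuming the light-source region has already been designated from the vehicle contour. Comparing the mean brightness of the region with a configurable threshold is an assumption; the patent only states that the region's brightness is compared with a predetermined brightness.

```python
def lamp_is_on(frame, region, threshold):
    """Judge that the light source was on when the mean brightness of the
    designated region (r0, c0, r1, c1, inclusive) reaches the threshold.
    `frame` is a grayscale image as a nested list of luminance values."""
    r0, c0, r1, c1 = region
    values = [frame[r][c]
              for r in range(r0, r1 + 1)
              for c in range(c0, c1 + 1)]
    return sum(values) / len(values) >= threshold
```

The threshold would be set high enough that headlights or taillights alone do not trigger it, as the description notes.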
Step SA7: The coordinate conversion table creation unit 14 waits to receive the image-based position information from the image-based vehicle detection unit 13 and the world-based information from the world-based vehicle detection unit 22.
Step SA8: The coordinate conversion table creation unit 14 associates the image-based vehicle positions with the world-based vehicle positions. As described above, each image-based vehicle position and the corresponding world-based vehicle position are vehicle positions captured at the same moment, the moment when the light source unit 23 was on. Therefore, the coordinate conversion table between image coordinates and world coordinates can be created. The coordinate conversion table may also be realized by function approximation.
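A minimal sketch of the association in step SA8 follows. The nearest-neighbour lookup is an assumption made for brevity; as the text notes, the table could instead be realized by function approximation (for example, a least-squares fit of image coordinates to world coordinates).

```python
import math

def build_conversion_table(image_positions, world_positions):
    """Pair each image-coordinate observation with the world-coordinate
    observation captured at the same lamp-on instant (the two lists are
    position-synchronized, index for index)."""
    return dict(zip(image_positions, world_positions))

def convert(table, image_pos):
    """Convert an image position to a world position by looking up the
    nearest tabulated image position."""
    key = min(table, key=lambda p: math.dist(p, image_pos))
    return table[key]
```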
Incidentally, the image-based vehicle detection unit 13 outputs the image-based vehicle position and the image-based position acquisition time to the coordinate conversion table creation unit 14 as the image-based information, and the world-based vehicle detection unit 22 outputs the world-based vehicle position and the world-based position acquisition time to the coordinate conversion table creation unit 14 as the world-based information.
However, the image-based position acquisition time and the world-based position acquisition time are not used when creating the coordinate conversion table as described above. The reason is that the image-based vehicle detection unit 13 extracts the vehicle position from the roadside photographed image taken while the light source unit 23 was on and defines the extracted position as the image-based vehicle position; the image-based vehicle position and the world-based vehicle position are therefore position-synchronized.
As an alternative to the above method, the coordinate conversion table creation unit 14 may create the coordinate conversion table by using pairs of image-based and world-based vehicle positions whose image-based and world-based position acquisition times are identical. In this case, the image-based vehicle position and the world-based vehicle position are time-synchronized.
Which of position synchronization and time synchronization to adopt is determined in consideration of processing speed and accuracy, and the determined synchronization is set; both may also be used. Regarding position synchronization, it is best to detect the moment when the light source unit 23 is on. In some cases, however, owing to the photographing conditions (for example, the frame rate), an image-based vehicle position is selected whose moment is not equal to the moment when the light source unit 23 was on. In that case, the image-based vehicle positions taken immediately before and after the moment judged to be when the light source unit 23 was on are treated as candidates and are also output to the coordinate conversion table creation unit 14. The coordinate conversion table creation unit 14 calculates the image-based position acquisition time equal to the world-based position acquisition time, applies interpolation processing such as linear interpolation to the plurality of candidate image-based vehicle positions, and calculates the image-based vehicle position corresponding to the calculated image-based position acquisition time. By this method, the coordinate conversion table can be created with excellent accuracy.
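The linear interpolation described above can be sketched as follows, assuming the candidate image-based positions arrive as (time, position) pairs sorted by time.

```python
def interpolate_image_position(t_target, samples):
    """samples: list of (time, (x, y)) image-based observations sorted by
    time. Linearly interpolate the image position at t_target (the
    world-based position acquisition time)."""
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        if t0 <= t_target <= t1:
            w = (t_target - t0) / (t1 - t0)
            return (p0[0] + w * (p1[0] - p0[0]),
                    p0[1] + w * (p1[1] - p0[1]))
    raise ValueError("t_target outside the sampled interval")
```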
Next, the process carried out by the in-vehicle device 20 will be described. In the in-vehicle device 20, the world-based position information described above is acquired and sent to the coordinate conversion table creation unit 14.
Step SB1: First, the in-vehicle camera 21 photographs the reference object K.
Step SB2: The world-based vehicle detection unit 22 detects the end point K2 of the reference object K by applying predetermined image processing to the in-vehicle photographed image provided by the in-vehicle camera 21. The position of the device itself (the vehicle 30) relative to the position of the reference object K can thereby be obtained. Since this is a position relative to the reference object K placed on the road, the obtained position of the device itself is expressed in world coordinates.
As the predetermined image processing, known methods such as edge extraction and high-brightness region extraction are feasible. For example, when edge extraction is used, the in-vehicle photographed image taken by the in-vehicle camera 21 is assumed to consist of a plurality of pixels, and the luminance difference (differential) between adjacent pixels is calculated. In a region where the brightness changes sharply, such as an edge, the differential value becomes large, so such a region can be extracted. Of course, this is only an example, and other methods are also feasible.
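The luminance-differential edge extraction described above can be sketched in one dimension as follows; pixels whose difference from their neighbour exceeds a threshold are reported as edge positions.

```python
def edge_positions(row, threshold):
    """Return the indices i where the absolute luminance differential
    between pixel i and pixel i + 1 exceeds the threshold (a 1-D version
    of the edge extraction described above)."""
    return [i for i in range(len(row) - 1)
            if abs(row[i + 1] - row[i]) > threshold]
```

Applied to a scan line crossing a white line K1, the two reported positions bracket the painted stripe, from which its end point can be located.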
Step SB3: then, the vehicle detection unit 22 based on the world is on the basis of the area judging information receiving from vehicle detection unit 12, and whether judgment means itself (vehicle 30) is present in measured zone.Then, in the situation that device itself is present in outside measured zone, process proceeds to step SB10.On the other hand, in the situation that device itself is present in measured zone, process proceeds to step SB4.
Step SB4: The world-based vehicle detection unit 22 waits to receive vehicle approach information from the vehicle detection unit 12.
Step SB5: Upon receiving the vehicle approach information, the world-based vehicle detection unit 22 suspends detection of the reference object K and changes the reference-object detection parameters so as to raise the detection accuracy.
For example, suppose that image processing at 10 frames per second is used for ordinary accuracy. Upon receiving the vehicle approach information, the world-based vehicle detection unit 22 changes the reference-object detection parameter (in this case, the frame rate) so that the end point K2 of the reference object K is detected at 15 frames per second. This suppresses the error caused when the vehicle 30 passes the end point K2 of the reference object K during the interval between two adjacent frames; that is, the end point K2 can be detected with better accuracy.
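The benefit of the higher frame rate can be quantified with a short sketch: between two consecutive frames a vehicle travels at most speed divided by frame rate, which bounds how far it can pass the end point K2 before the next frame is captured. The vehicle speed used below is illustrative and does not appear in the patent.

```python
def max_interframe_travel(speed_kmh, fps):
    """Worst-case distance (in meters) a vehicle travels between two
    consecutive frames; this bounds the position error from the
    vehicle passing the end point K2 between frame captures."""
    speed_ms = speed_kmh * 1000.0 / 3600.0  # km/h -> m/s
    return speed_ms / fps

# At 100 km/h, raising the frame rate from 10 to 15 fps tightens
# the worst-case inter-frame travel from about 2.78 m to about 1.85 m.
err_10 = max_interframe_travel(100, 10)
err_15 = max_interframe_travel(100, 15)
```

The 10 fps and 15 fps values match the example in the description; any further increase in frame rate shrinks the bound proportionally, at the cost of processing load.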
Step SB6: The world-based vehicle detection unit 22 resumes detection of the reference object K using the newly set detection parameters.
Step SB7: The world-based vehicle detection unit 22 then judges whether the end point K2 of the reference object K has been detected. If the end point K2 has been detected, the process proceeds to step SB8; if not, the process returns to step SB6.
Step SB8: When the end point K2 of the reference object K is detected, the world-based vehicle detection unit 22 outputs a trigger signal to the light source unit 23. On receiving the trigger signal, the light source unit 23 switches a light source, such as an LED, on and off once.
Step SB9: The world-based vehicle detection unit 22 records the time at which it output the trigger signal as the world-based position acquisition time, and records the position of the vehicle at the time the trigger signal was output as the world-based vehicle position. The process then returns to step SB2.
Here, the world-based vehicle position is obtained by the following method. In the case of a highway or the like, the length of each reference object K is regulated to be 8 meters and the interval between reference objects K to be 12 meters, so the position of each end point K2 can be calculated in world coordinates. In addition, the world-based vehicle detection unit 22 calculates the position of the device itself (the vehicle 30) relative to the position of the end point K2 by using, for example, the Tsai camera model (R. Y. Tsai: "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses", IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, pp. 323-344, 1987). The position of the vehicle can therefore be obtained in world coordinates.
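Given the regulated 8-meter marking length and 12-meter gap, the along-road world coordinate of each end point K2 follows directly, and the vehicle position follows once the relative offset has been computed from the vehicle-mounted image. The sketch below is a simplification under stated assumptions: it treats only the along-road coordinate, takes K2 to be the trailing end of each marking, and stands in for the Tsai-model computation with a precomputed relative offset; the function names are hypothetical.

```python
MARKING_LEN = 8.0   # regulated length of each reference object K (m)
GAP_LEN = 12.0      # regulated interval between reference objects (m)

def endpoint_world_position(index, origin=0.0):
    """Along-road world coordinate (m) of the end point K2 of the
    index-th reference object K, assuming the 8 m marking / 12 m gap
    highway pattern (20 m pitch) and K2 at the trailing end."""
    return origin + index * (MARKING_LEN + GAP_LEN) + MARKING_LEN

def vehicle_world_position(endpoint_index, rel_offset):
    """World position of the vehicle given its offset relative to K2
    (positive = ahead of K2), as would be computed from the
    vehicle-mounted image, e.g. via Tsai camera calibration."""
    return endpoint_world_position(endpoint_index) + rel_offset

# Vehicle measured 3.5 m short of the end point of the third marking
# (index 2): the end point lies at 2 * 20 + 8 = 48 m from the origin.
pos = vehicle_world_position(2, -3.5)
```

Any consistent convention for which end of the marking is K2, and for the origin of the along-road axis, works equally well; what matters is that the regulated dimensions make every end point's world coordinate known in advance.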
Step SB10: When the vehicle 30 is judged in step SB3 to be outside the measurement area (that is, when the vehicle has left the area), the world-based vehicle detection unit 22 sends the stored world-based information to the coordinate transform table creating unit 14.
Through the above process, the coordinate transform table creating unit 14 creates the coordinate transform table on the basis of the relation between the image-based vehicle positions and the world-based vehicle positions.
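The description leaves open how the creating unit 14 turns the matched position pairs into a table. One plausible sketch, assuming the road surface is planar, fits a homography from image-based to world-based coordinates by least squares (the direct linear transform) over the collected pairs and then tabulates or queries it. The point values and function names below are illustrative, not from the patent.

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Least-squares planar homography H mapping image points (u, v)
    to world points (x, y), from four or more matched pairs (DLT)."""
    rows = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        rows.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        rows.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    # The homography is the null vector of the stacked constraints,
    # i.e. the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def to_world(H, u, v):
    """Convert one image-based position to world-based coordinates."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Four matched image-/world-based vehicle positions (synthetic: the
# true mapping here is a plain scale-and-shift, which the fitted
# homography must reproduce exactly).
img_pts = [(0, 0), (100, 0), (100, 50), (0, 50)]
world_pts = [(10.0, 5.0), (30.0, 5.0), (30.0, 15.0), (10.0, 15.0)]
H = fit_homography(img_pts, world_pts)
x, y = to_world(H, 50, 25)  # centre of the image region
```

With more than four pairs collected across many vehicle passes, the same least-squares fit averages out per-measurement noise, which is consistent with the patent's aim of an accurate table from repeated roadside/vehicle-mounted observations.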
As described above, a coordinate transform table that indicates with excellent accuracy the relation between the image-based vehicle position obtained from the roadside captured image and the world-based vehicle position obtained from the vehicle-mounted captured image can be created even in an environment, such as a tunnel, in which GPS is unavailable.
This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2011-272449, filed on December 13, 2011, the entire contents of which are incorporated herein by reference.
Reference Signs List
2 coordinate transform table creation system
3 image-based information acquisition unit
4 coordinate transform information creating unit
5 world-based information acquisition unit
10 roadside imaging device
11 roadside camera
12 vehicle detection unit
13 image-based vehicle detection unit
14 coordinate transform table creating unit
20 in-vehicle device
21 vehicle-mounted camera
22 world-based vehicle detection unit
23 light source unit
30 vehicle

Claims (10)

1. A coordinate transform table creation system that creates a coordinate transform table between image-based coordinates, which are set with respect to a captured image, and world coordinates, which are set with respect to an immovable object, the coordinate transform table creation system comprising:
an image-based information acquisition unit that photographs a traveling vehicle, obtains an image-based vehicle position according to the image-based coordinates set with respect to the captured image, and outputs the obtained image-based vehicle position as image-based information;
a world-based information acquisition unit that obtains a world-based vehicle position of the vehicle according to the world coordinates and outputs the obtained world-based vehicle position as world-based information; and
a coordinate transform information creating unit that creates the coordinate transform table between the image-based coordinates and the world coordinates on the basis of the image-based information and the world-based information.
2. The coordinate transform table creation system according to claim 1, wherein the world-based information acquisition unit comprises:
a vehicle-mounted camera that is mounted on the traveling vehicle and captures, as a vehicle-mounted captured image, an image including a reference object arranged in advance;
a light source unit that switches a light source on and off when a trigger signal is input; and
a world-based vehicle detection unit that calculates, on the basis of the vehicle-mounted captured image, the position of the traveling vehicle relative to the position of the reference object, obtains the world-based vehicle position by using the calculation result, and outputs the trigger signal.
3. The coordinate transform table creation system according to claim 1 or 2, wherein the image-based information acquisition unit comprises:
a roadside camera that is arranged at the roadside and captures, as a roadside captured image, an image including the traveling vehicle; and
an image-based vehicle detection unit that, on the basis of the roadside captured image, judges whether the vehicle is present in a predetermined measurement area and outputs this judgment to the world-based vehicle detection unit as area judgment information, and further judges whether the vehicle is approaching the reference object and outputs this judgment to the world-based vehicle detection unit as vehicle approach information.
4. The coordinate transform table creation system according to any one of claims 1 to 3, wherein
the image-based information acquisition unit extracts, from a plurality of the roadside captured images, a roadside captured image in which the light source unit is switched on, and obtains the image-based vehicle position from the extracted roadside captured image.
5. The coordinate transform table creation system according to any one of claims 1 to 3, wherein
when the image-based information acquisition unit obtains the image-based vehicle position, it records the acquisition time of the image-based vehicle position as an image-based position acquisition time, and outputs the image-based vehicle position and the image-based position acquisition time to the coordinate transform information creating unit as the image-based information;
when the world-based information acquisition unit obtains the world-based vehicle position, it records the acquisition time of the world-based vehicle position as a world-based position acquisition time, and outputs the world-based vehicle position and the world-based position acquisition time to the coordinate transform information creating unit as the world-based information; and
the coordinate transform information creating unit creates the coordinate transform table on the basis of an image-based vehicle position and a world-based vehicle position whose respective image-based position acquisition time and world-based position acquisition time coincide.
6. A coordinate transform table creation method that creates a coordinate transform table between image-based coordinates, which are set with respect to a captured image, and world coordinates, which are set with respect to an immovable object, the coordinate transform table creation method comprising:
an image-based information acquisition process of photographing a traveling vehicle, obtaining an image-based vehicle position according to the image-based coordinates set with respect to the captured image, and outputting the obtained image-based vehicle position as image-based information;
a world-based information acquisition process of obtaining a world-based vehicle position of the vehicle according to the world coordinates and outputting the obtained world-based vehicle position as world-based information; and
a coordinate transform information creating process of creating the coordinate transform table between the image-based coordinates and the world coordinates on the basis of the image-based information and the world-based information.
7. The coordinate transform table creation method according to claim 6, wherein the world-based information acquisition process comprises:
a vehicle-mounted image capture process in which a vehicle-mounted camera mounted on the traveling vehicle captures, as a vehicle-mounted captured image, an image including a reference object arranged in advance;
a light emission process of switching a light source on and off when a trigger signal is input; and
a world-based vehicle detection process of calculating, on the basis of the vehicle-mounted captured image, the position of the traveling vehicle relative to the position of the reference object, obtaining the world-based vehicle position by using the calculation result, and outputting the trigger signal.
8. The coordinate transform table creation method according to claim 6 or 7, wherein the image-based information acquisition process comprises:
a roadside image capture process in which a roadside camera arranged at the roadside captures, as a roadside captured image, an image including the traveling vehicle; and
an image-based vehicle detection process of judging, on the basis of the roadside captured image, whether the vehicle is present in a predetermined measurement area and outputting this judgment to the world-based vehicle detection process as area judgment information, and further judging whether the vehicle is approaching the reference object and outputting this judgment to the world-based vehicle detection process as vehicle approach information.
9. The coordinate transform table creation method according to any one of claims 6 to 8, wherein
the image-based information acquisition process extracts, from a plurality of the roadside captured images, a roadside captured image in which the light emission process has switched the light source on, and obtains the image-based vehicle position from the extracted roadside captured image.
10. The coordinate transform table creation method according to any one of claims 6 to 8, wherein
when the image-based information acquisition process obtains the image-based vehicle position, the acquisition time of the image-based vehicle position is recorded as an image-based position acquisition time, and the image-based vehicle position and the image-based position acquisition time are output to the coordinate transform information creating process as the image-based information;
when the world-based information acquisition process obtains the world-based vehicle position, the acquisition time of the world-based vehicle position is recorded as a world-based position acquisition time, and the world-based vehicle position and the world-based position acquisition time are output to the coordinate transform information creating process as the world-based information; and
the coordinate transform information creating process creates the coordinate transform table on the basis of an image-based vehicle position and a world-based vehicle position whose respective image-based position acquisition time and world-based position acquisition time coincide.
CN201280060296.5A 2011-12-13 2012-10-23 Coordinate transform table creation system and coordinate transform table creation method Active CN103975221B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2011272449 2011-12-13
JP2011-272449 2011-12-13
PCT/JP2012/006774 WO2013088626A1 (en) 2011-12-13 2012-10-23 Coordinate conversion table creation system and coordinate conversion table creation method

Publications (2)

Publication Number Publication Date
CN103975221A true CN103975221A (en) 2014-08-06
CN103975221B CN103975221B (en) 2016-08-17

Family

ID=48612109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201280060296.5A Active CN103975221B (en) 2011-12-13 2012-10-23 Coordinate transform table creates system and coordinate transform table creation method

Country Status (4)

Country Link
JP (1) JP6083385B2 (en)
CN (1) CN103975221B (en)
HK (1) HK1197294A1 (en)
WO (1) WO2013088626A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106571046A (en) * 2016-11-11 2017-04-19 上海市政工程设计研究总院(集团)有限公司 Vehicle-road cooperation auxiliary driving method based on road surface grid system
CN110164135A (en) * 2019-01-14 2019-08-23 腾讯科技(深圳)有限公司 A kind of localization method, positioning device and positioning system
US10489929B2 (en) 2015-01-07 2019-11-26 Sony Corporation Information processing apparatus, information processing method, and information processing system
CN111640301A (en) * 2020-05-25 2020-09-08 北京百度网讯科技有限公司 Method, system and device for detecting fault vehicle, electronic equipment and storage medium
CN113129382A (en) * 2019-12-31 2021-07-16 华为技术有限公司 Method and device for determining coordinate conversion parameters

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969055B (en) 2018-09-29 2023-12-19 阿波罗智能技术(北京)有限公司 Method, apparatus, device and computer readable storage medium for vehicle positioning

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070127778A1 (en) * 2005-12-07 2007-06-07 Nissan Motor Co., Ltd. Object detecting system and object detecting method
CN101016052A (en) * 2007-01-25 2007-08-15 吉林大学 Warning method and system for preventing deviation for vehicle on high standard highway
CN101604448A (en) * 2009-03-16 2009-12-16 北京中星微电子有限公司 A kind of speed-measuring method of moving target and system
JP2010020729A (en) * 2008-07-14 2010-01-28 I Transport Lab Co Ltd Vehicle traveling locus observation system, vehicle traveling locus observation method and program
CN101750049A (en) * 2008-12-05 2010-06-23 南京理工大学 Monocular vision vehicle distance measuring method based on road and vehicle information
JP2010236891A (en) * 2009-03-30 2010-10-21 Nec Corp Position coordinate conversion method between camera coordinate system and world coordinate system, vehicle-mounted apparatus, road side photographing apparatus, and position coordinate conversion system
CN102013099A (en) * 2010-11-26 2011-04-13 中国人民解放军国防科学技术大学 Interactive calibration method for external parameters of vehicle video camera
CN102254318A (en) * 2011-04-08 2011-11-23 上海交通大学 Method for measuring speed through vehicle road traffic videos based on image perspective projection transformation

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10332334A (en) * 1997-06-04 1998-12-18 Hitachi Ltd Position measuring method by image processing and its device
JP2002372417A (en) * 2001-06-15 2002-12-26 Mitsubishi Electric Corp Object position and velocity measuring and processing equipment
JP2003042760A (en) * 2001-07-27 2003-02-13 Sumitomo Electric Ind Ltd Instrument, method, and system for measurement
JP2006017676A (en) * 2004-07-05 2006-01-19 Sumitomo Electric Ind Ltd Measuring system and method
JP5015749B2 (en) * 2007-12-12 2012-08-29 トヨタ自動車株式会社 Vehicle position detection device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG et al.: "A Method for Improving the Accuracy of Video-Based Vehicle Speed Detection", JOURNAL OF SHANGHAI JIAO TONG UNIVERSITY *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10489929B2 (en) 2015-01-07 2019-11-26 Sony Corporation Information processing apparatus, information processing method, and information processing system
CN106571046A (en) * 2016-11-11 2017-04-19 上海市政工程设计研究总院(集团)有限公司 Vehicle-road cooperation auxiliary driving method based on road surface grid system
CN106571046B (en) * 2016-11-11 2021-07-16 上海市政工程设计研究总院(集团)有限公司 Vehicle-road cooperative driving assisting method based on road surface grid system
CN110164135A (en) * 2019-01-14 2019-08-23 腾讯科技(深圳)有限公司 A kind of localization method, positioning device and positioning system
CN113129382A (en) * 2019-12-31 2021-07-16 华为技术有限公司 Method and device for determining coordinate conversion parameters
CN111640301A (en) * 2020-05-25 2020-09-08 北京百度网讯科技有限公司 Method, system and device for detecting fault vehicle, electronic equipment and storage medium
CN111640301B (en) * 2020-05-25 2021-10-08 北京百度网讯科技有限公司 Fault vehicle detection method and fault vehicle detection system comprising road side unit

Also Published As

Publication number Publication date
JP6083385B2 (en) 2017-02-22
JPWO2013088626A1 (en) 2015-04-27
WO2013088626A1 (en) 2013-06-20
CN103975221B (en) 2016-08-17
HK1197294A1 (en) 2015-01-09

Similar Documents

Publication Publication Date Title
CN103975221A (en) Coordinate conversion table creation system and coordinate conversion table creation method
KR101988811B1 (en) Signaling device and signaling device recognition method
US9915539B2 (en) Intelligent video navigation for automobiles
JP4553072B1 (en) Image integration apparatus and image integration method
EP2983153A1 (en) Signal recognition device
US20180012088A1 (en) Traffic Light Detection Device and Traffic Light Detection Method
US20170024622A1 (en) Surrounding environment recognition device
CN106463051B (en) Traffic signal recognition device and traffic signal recognition method
WO2016093028A1 (en) Host vehicle position estimation device
JP2015075889A (en) Driving support device
JP5365792B2 (en) Vehicle position measuring device
US11138451B2 (en) Training image selection system
KR102418051B1 (en) Lane traffic situation judgement apparatus, system, and method thereof
JP4848644B2 (en) Obstacle recognition system
JP2014130429A (en) Photographing device and three-dimensional object area detection program
JP2006344133A (en) Road division line detector
JP4686235B2 (en) Inter-vehicle communication system
JP6435660B2 (en) Image processing apparatus, image processing method, and device control system
JP2006285695A (en) Inter-vehicle communication system
KR101836246B1 (en) Current Lane Detecting Method
JP4585356B2 (en) Inter-vehicle communication system
JP2005259031A (en) Human recognition system, human recognition device, storage device and human recognition method
US11557201B2 (en) Apparatus for assisting driving of a host vehicle based on augmented reality and method thereof
CN105247571A (en) Method and apparatus for creating a recording of an object which lights up in a pulsed manner
JP2006285692A (en) Inter-vehicle communication system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1197294

Country of ref document: HK

C14 Grant of patent or utility model
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1197294

Country of ref document: HK