CN105893931A - Object detection apparatus and method - Google Patents
- Publication number
- CN105893931A (application CN201610008658.6A)
- Authority
- CN
- China
- Prior art keywords
- region
- capture
- radar
- detection device
- radar installations
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
An object detection apparatus of the invention is capable of efficiently superposing the sensing function of a radar device and the sensing function of a camera device, thereby improving the detection accuracy of an object. A capture region calculation unit (32) calculates a capture point having a local maximum of reflection intensity in power profile information, and calculates a capture region surrounding the capture point. An edge calculation unit (34) calculates the edges of one or more objects from image data. A marker calculation unit (35) calculates a marker from the capture region. A component region calculation unit (36) calculates component regions by extending the marker using the edges. A grouping unit (37) groups those of the component regions that belong to the same object. An object identification unit (38) identifies the types of one or more objects (e.g., large vehicle, small vehicle, bicycle, pedestrian, flying object, bird) on the basis of the target object region resulting from the grouping.
Description
Technical field
The present invention relates to an object detection apparatus and an object detection method, and more specifically to an object detection apparatus and an object detection method that are mounted in a vehicle, in a road infrastructure system, or in a monitoring system for a specific facility, and that can individually and correctly detect objects existing in the surroundings.
Background art
In recent years, vehicles such as passenger cars have been equipped with in-vehicle radar devices or in-vehicle cameras that detect other vehicles, pedestrians, two-wheeled vehicles, objects installed on the road, and the like existing around the host vehicle. The in-vehicle radar device or in-vehicle camera detects target objects approaching from the front or the side of the host vehicle, and measures the relative position, relative velocity, and the like with respect to the host vehicle. Then, based on the measurement results, the in-vehicle radar device judges whether there is a danger of collision between the host vehicle and the target object. When it judges that there is a danger, the in-vehicle radar device avoids a collision by warning the driver and, further, by automatically controlling the host vehicle.
For example, Patent Literature 1 discloses a technique of detecting an object by simultaneously using an in-vehicle radar device and an in-vehicle camera. Specifically, in Patent Literature 1, the number of target objects and their azimuth ranges are determined by using measurement information of the camera device, and the measurement information of the radar device is corrected based on the number of target objects and the azimuth ranges.
In addition, Patent Literature 2 discloses a technique of monitoring traffic volume by simultaneously using a camera device and a radar device installed around a road. Specifically, in Patent Literature 2, after the position of a vehicle in the camera image is determined from the position and speed information of the distant vehicle detected by the radar device, traffic monitoring and traffic management are performed by presenting the situation of the vehicle by means of the camera image around the vehicle's two headlights.
In addition, radar devices or camera devices have conventionally been installed to monitor specific facilities such as airports, harbors, railway stations, or buildings. They detect objects intruding from the ground or from the air (the space above the ground), provide the information to an associated security system or display unit, and prevent intrusion of suspicious objects (including suspicious persons).
Prior art literature
Patent literature
Patent Literature 1: Japanese Unexamined Patent Application Publication No. 2010-151621
Patent Literature 2: U.S. Patent Application Publication No. 2013/0300870
Non-patent literature
Non-Patent Literature 1: R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2001.
Summary of the invention
Problems to be solved by the invention
However, in the conventional technique of Patent Literature 1 described above, the number of target objects and the azimuth range of each target object must be determined from the measurement information of the in-vehicle camera. That is, a high level of object detection performance is required of the in-vehicle camera.
In addition, in the conventional technique of Patent Literature 2, when the radar device obtains a plurality of detection results from one vehicle, it may be difficult to determine the position of that vehicle.
That is, in the above conventional techniques, whether mounted on a vehicle, used in a road infrastructure system, or used in a monitoring system for a specific facility, the detection accuracy of an object depends on the performance of only one of the camera device and the radar device. In other words, it is difficult to efficiently superpose the sensing function of the radar device and the sensing function of the camera device so as to improve the detection accuracy of an object.
An object of the present invention is to provide an object detection apparatus and an object detection method capable of efficiently superposing the sensing function of a radar device and the sensing function of a camera device, thereby improving the detection accuracy of an object.
Means for solving the problems
The object detection apparatus of the present invention includes: an information generation unit that, for each transmission direction of a radar signal transmitted by a radar device, calculates, for each of a plurality of cells obtained by dividing the distance from the radar device at prescribed intervals, a reflection intensity that is a representative value of the received power of the reflected signal obtained when the radar signal is reflected by one or more objects and received by the radar device, and that generates power profile information for each of the plurality of cells by using the reflection intensity; a capture region calculation unit that calculates, as a capture point capturing the one or more objects, a cell exhibiting a local maximum of the reflection intensity in the power profile information of the plurality of cells, and that calculates a capture region, which is one or more cells surrounding the capture point; an edge extraction unit that extracts the edges of the one or more objects included in an image obtained by a camera device; a marker calculation unit that, based on the measurement range of the radar device and the imaging range of the camera device, transforms the capture region into a partial region in the image, and calculates the partial region as a marker, which is the region of the image corresponding to the capture region; a component region calculation unit that calculates component regions, each corresponding to a part of the one or more objects, by extending the marker with the edges as boundaries; a grouping unit that groups the component regions into a target object region; and an object identification unit that identifies the one or more objects from the target object region and outputs the identification result.
The object detection method of the present invention includes the following steps: for each transmission direction of a radar signal transmitted by a radar device, calculating, for each of a plurality of cells obtained by dividing the distance from the radar device at prescribed intervals, a reflection intensity that is a representative value of the received power of the reflected signal obtained when the radar signal is reflected by one or more objects and received by the radar device, and generating power profile information for each of the plurality of cells by using the reflection intensity; calculating, as a capture point capturing the one or more objects, a cell exhibiting a local maximum of the reflection intensity in the power profile information of the plurality of cells, and calculating a capture region, which is one or more cells surrounding the capture point; extracting the edges of the one or more objects included in an image obtained by a camera device; based on the measurement range of the radar device and the imaging range of the camera device, transforming the capture region into a partial region in the image, and calculating the partial region as a marker, which is the region of the image corresponding to the capture region; calculating component regions, each corresponding to a part of the one or more objects, by extending the marker with the edges as boundaries; grouping the component regions into a target object region; and identifying the one or more objects from the target object region and outputting the identification result.
Effects of the invention
According to the present invention, the sensing function of the radar device and the sensing function of the camera device can be efficiently superposed, and the detection accuracy of an object can be improved.
Brief description of the drawings
Fig. 1A is a conceptual diagram of the configuration of a sensing device (sensing unit) using the object detection apparatus of the present invention.
Fig. 1B is a conceptual diagram of the configuration of a sensing device using the object detection apparatus of the present invention.
Fig. 2A is a conceptual diagram of an installation place of the object detection apparatus of the present invention.
Fig. 2B is a conceptual diagram of an installation place of the object detection apparatus of the present invention.
Fig. 3 is a block diagram showing the main configuration of the object detection apparatus according to Embodiment 1 of the present invention.
Fig. 4 is a diagram showing an example of the power profile information in Embodiment 1 of the present invention.
Fig. 5 is a diagram showing an example of the calculation result of capture regions in Embodiment 1 of the present invention.
Fig. 6 is a diagram showing an example of the three-dimensional coordinate system of radar measurement.
Fig. 7 is a diagram showing the relationship among distance, maximum possible height, and ground distance.
Fig. 8 is a diagram for explaining the transformation from the three-dimensional coordinate system of the camera to coordinates in the camera image plane.
Fig. 9 is a diagram showing an example of the camera image plane.
Fig. 10 is a diagram showing an example of the calculation result of markers corresponding to the capture regions shown in Fig. 5.
Fig. 11 is a diagram showing an example of a case where the component region calculation unit divides a marker.
Fig. 12 is a diagram showing an example of the result of region extension by the component region calculation unit.
Fig. 13 is a diagram showing an example of the result of transforming component region coordinates into regions in the radar measurement plane.
Fig. 14 is a diagram showing an example of the grouping result of the grouping unit.
Fig. 15 is a block diagram showing the main configuration of the object detection apparatus according to Embodiment 2 of the present invention.
Fig. 16 is a block diagram showing the main configuration of the object detection apparatus according to Embodiment 3 of the present invention.
Fig. 17 is a block diagram showing the main configuration of the object detection apparatus according to Embodiment 4 of the present invention.
Detailed description of the invention
(Process leading to the present invention)
First, the process leading to the present invention will be described. The present invention relates to an object detection apparatus used with in-vehicle radar devices and camera devices, with radar devices and camera devices for road infrastructure systems, and in monitoring systems for specific facilities.
In-vehicle radar devices and camera devices are already mounted on many vehicles, and radar devices and camera devices for road infrastructure systems are also being introduced into road infrastructure systems. In addition, whereas monitoring systems for specific facilities have conventionally used either a radar device or a camera device alone, cases in which a radar device and a camera device are used at the same time are also increasing.
Radar devices and camera devices for road infrastructure systems are installed around roads such as intersections, detect vehicles, pedestrians, two-wheeled vehicles, and the like existing on and around the road, and perform traffic monitoring and traffic management.
For traffic monitoring, radar devices and camera devices for road infrastructure systems detect the traffic volume, speeding vehicles, signal violations, and the like. For traffic management, radar devices and camera devices for road infrastructure systems control traffic signals based on the detected traffic volume. Alternatively, radar devices and camera devices for road infrastructure systems detect objects existing in the blind spot of a vehicle and notify the driver of the vehicle of the information on the detected objects. In this way, radar devices and camera devices for road infrastructure systems can make traffic more efficient and prevent traffic accidents.
Both in-vehicle radar devices and camera devices and road infrastructure radar devices and camera devices are required to correctly detect target objects with different characteristics, such as vehicles, pedestrians, bicycles, and motorcycles. In addition, radar devices and camera devices for monitoring systems need to correctly detect various vehicles and pedestrians when the ground is the monitored area, and to correctly detect various aircraft and birds when the air is the monitored area.
If each target object is correctly detected, the state of objects existing in the space and the state of the traffic volume can be grasped correctly, and the probability of intrusion or collision can be predicted correctly. If each target object is not correctly detected, missed detections and false detections of target objects occur, it becomes difficult to grasp the state of objects existing in the space and the state of the traffic volume, and it becomes difficult to predict the probability of intrusion or collision.
In general, in measurement by a radar device, a plurality of relatively strong reflection points (hereinafter referred to as capture points) are obtained from one target object. Therefore, in order to detect a target object from the measurement results, it is necessary to group the capture points corresponding to the same object.
In Patent Literature 1, the number of target objects and the azimuth range of each target object are determined from the measurement information of the in-vehicle camera, and, based on the number of target objects and the azimuth range of each target object, the capture points are grouped, or the groups are released and regrouped. By such processing, the technique disclosed in Patent Literature 1 avoids false detections and missed detections.
However, in the technique disclosed in Patent Literature 1, the detection accuracy of an object varies with the accuracy of the number of target objects and the azimuth range of each target object, that is, with the accuracy of the sensing function of the camera device.
In addition, in Patent Literature 2, it is difficult to detect the target object, i.e., the vehicle, when a plurality of capture points are obtained; as a result, it is difficult to utilize the technique disclosed in Patent Literature 2.
In view of such circumstances, the inventors focused on the difference between the measurement information of the camera device and the measurement information of the radar device, found that these pieces of measurement information can be superposed efficiently, and thereby completed the present invention.
According to the present invention, with an in-vehicle radar device and camera device, the vehicles, two-wheeled vehicles, and pedestrians existing around the host vehicle can be correctly detected, the danger of collision with the host vehicle can be predicted, and warnings and control for avoiding the danger can be performed. As a result, traffic accidents are prevented.
In addition, according to the present invention, with radar devices and camera devices in monitoring systems for specific facilities such as airports, harbors, railway stations, or buildings, aircraft and birds can be correctly detected in the air, various vehicles and intruders can be correctly detected on the ground, and, in cooperation with an external security system, intrusion of suspicious persons can be prevented and the safety of the facility can be ensured.
In addition, according to the present invention, with radar devices and camera devices for road infrastructure systems, the vehicles, two-wheeled vehicles, and pedestrians existing around roads such as intersections can be correctly detected, the collision probability can be predicted, collisions can be avoided, and the traffic volume can be grasped and managed. As a result, traffic accidents are prevented and, at the same time, traffic management is made more efficient.
(Intended use of the present invention)
Here, the connection method and installation place of the object detection apparatus of the present invention will be described with reference to the drawings.
Figs. 1A and 1B are conceptual diagrams of the configuration of a sensing device using the object detection apparatus of the present invention. In Figs. 1A and 1B, R and C denote a radar device and a camera device, respectively, and W denotes the object detection apparatus of the present invention. Fig. 1A shows the case where the radar device R and the camera device C are installed in the same housing and connected to the object detection apparatus W. Fig. 1B shows the case where the radar device R and the camera device C are installed in different housings and connected to the object detection apparatus W. Further, the object detection apparatus W in Figs. 1A and 1B is also connected to an external security system or display unit.
The present invention does not limit the method or place of installing the radar device R and the camera device C, or their relative positional relationship. Nor does it limit the positional relationship between the detection range of the radar device R and the detection range of the camera device C. However, since the present invention applies to the range where the detection range of the radar device R and the detection range of the camera device C overlap, the radar device R and the camera device C are preferably installed so that the overlapping range is large.
The present invention provides an object detection apparatus W that superposes the measurement information of the radar device R and the measurement information of the camera device C. The object detection apparatus W of the present invention is also not limited by the configuration of the radar device R or the configuration of the camera device C. The radar device R and the camera device C may be existing commercial products or products composed of known technology.
In addition, although the object detection apparatus W is provided separately from the radar device R and the camera device C in the conceptual diagrams shown in Figs. 1A and 1B, it may instead be included in the radar device R or the camera device C.
In addition, in the present invention, the radar device R and the camera device C are connected to the object detection apparatus W and transfer measurement information to the object detection apparatus W, but the transfer method is not limited; it may be wired communication or wireless communication.
Next, the installation place of the object detection apparatus W of the present invention will be described using Figs. 2A and 2B. Figs. 2A and 2B are conceptual diagrams of installation places of the object detection apparatus W of the present invention. Fig. 2A is a conceptual diagram of the object detection apparatus W mounted on a vehicle together with the radar device R and the camera device C, and Fig. 2B is a conceptual diagram of the object detection apparatus W used in a road infrastructure system together with the radar device R and the camera device C.
In Fig. 2A, V denotes the host vehicle, R/C denotes a measurement device mounted on the host vehicle that includes the radar device R and the camera device C, and T1 and T2 denote two different target objects. The object detection apparatus W may be integrated with the measurement device R/C, or its installation position may differ from that of the measurement device R/C, as long as it can easily detect the objects located in front of or around the side of the host vehicle V.
In Fig. 2B, R/C denotes a measurement device installed on road infrastructure that includes the radar device R and the camera device C, P denotes the road surface, L denotes a supporting structure such as a pillar on which the measurement device R/C is installed, and T1 and T2 denote two different target objects. Fig. 2B is a schematic oblique view of the vicinity of the position where the measurement device R/C is installed.
The road surface P may be a straight road or a part of an intersection. In addition, the position where the measurement device R/C is installed may be above the road, at the roadside, above the intersection, or at each corner of the intersection. Further, the present invention does not limit the position or method of installing the measurement device R/C, as long as the measurement device R/C can easily detect the vehicles, pedestrians, two-wheeled vehicles, and the like existing around the crossing of the intersection.
In Figs. 2A and 2B, target object T1 is an object larger than target object T2, for example, an object corresponding to a vehicle or the like, while target object T2 corresponds to, for example, a motorcycle, a bicycle, a pedestrian, or the like. In addition, in the conceptual diagrams shown in Figs. 2A and 2B, target object T2 is located closer to the radar device than target object T1. The object detection apparatus W of the present invention separates target object T1 and target object T2 and detects them individually.
In addition, although not shown, the installation place of the object detection apparatus W of the present invention may also be a place from which specific facilities such as airports, harbors, railway stations, or buildings can be monitored. The measurement area of the object detection apparatus W of the present invention is also not limited to a ground area; it may also be used for aerial monitoring or measurement.
Next, embodiments of the present invention will be described in detail with reference to the drawings. Note that each of the embodiments described below is an example, and the present invention is not limited to these embodiments.
(Embodiment 1)
First, the object detection apparatus of Embodiment 1 of the present invention will be described with reference to the drawings. Fig. 3 is a block diagram showing the main configuration of the object detection apparatus 30 of Embodiment 1 of the present invention.
The object detection apparatus 30 of Embodiment 1 of the present invention is connected to a radar device R and a camera device C. The radar device R has: a transmission unit that transmits a radar signal while changing direction at prescribed angular intervals; a reception unit that receives the reflected signal produced when the radar signal is reflected on a target object; and a signal processing unit that transforms the reflected signal to baseband and obtains a delay profile (propagation delay characteristic) for each transmission direction of the radar signal. The camera device C images a subject (target object) and obtains image data.
The object detection apparatus 30 has: an information generation unit 31, a capture region calculation unit 32, a camera image acquisition unit 33, an edge calculation unit 34, a marker calculation unit 35, a component region calculation unit 36, a grouping unit 37, and an object identification unit 38. Each component of the object detection apparatus 30 may be realized by hardware such as an LSI circuit. Alternatively, each component of the object detection apparatus 30 may be realized as part of an electronic control unit (ECU) that controls a vehicle.
From the delay profile output from the signal processing unit of the radar device, the information generation unit 31 measures a representative value of the received power of the reflected signal (hereinafter referred to as "reflection intensity") for each transmission direction of the radar signal and for each cell obtained by dividing the distance from the radar device at prescribed intervals. Then, the information generation unit 31 generates power profile information representing the reflection intensity of each cell, and outputs it to the capture region calculation unit 32. Further, the reflection intensity is generally a continuous value, but the information generation unit 31 may quantize it to simplify processing. The details of the power profile information generated by the information generation unit 31 will be described later.
The capture region calculation unit 32 first calculates local maximum points of the reflection intensity from the power profile information. A local maximum point calculated by the capture region calculation unit 32 becomes a capture point capturing a target object. Specifically, the capture region calculation unit 32 treats the power profile information as an image and calculates the local maximum points in a known manner. Then, the capture region calculation unit 32 calculates the capture region for each capture point by using a known image processing method. A capture region is a local region surrounding a capture point, composed of those points around the capture point whose reflection intensity is equal to or greater than a prescribed value. The method of calculating the capture region in the capture region calculation unit 32 will be described later.
The camera image acquisition unit 33 accepts image data from the camera device C, performs preprocessing such as image quality improvement, and outputs the result to the edge calculation unit 34.
The edge calculation unit 34 calculates the edges (contours) of target objects from the image data output by the camera image acquisition unit 33, using a known edge extraction method.
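The patent leaves the edge extraction method open ("a known edge extraction method"); one widely known choice is the Sobel gradient operator. The following pure-Python sketch is an illustration of that operator, not the patent's actual implementation, and the threshold value is an assumption.

```python
def sobel_edges(img, thresh):
    """One 'known edge extraction method': Sobel gradient magnitude with
    a fixed threshold. 'img' is a 2-D list of grayscale values; returns
    a same-size map with 1 at edge pixels (border pixels left at 0).
    A simplified stand-in for the edge (contour) calculation."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 >= thresh:
                out[y][x] = 1
    return out
```

In practice a library routine (e.g., an OpenCV Sobel or Canny call) would replace this loop; the sketch only shows the shape of the computation.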
The marker calculation unit 35 calculates a marker from the capture region calculated by the capture region calculation unit 32. A marker is the partial region of the camera image corresponding to the capture region. The method of calculating the marker in the marker calculation unit 35 will be described later.
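The actual radar-plane-to-image transformation is defined later in the description with reference to Figs. 6 to 9. Purely as orientation, a minimal sketch under an ideal pinhole-camera assumption might look like the following; every parameter name and the assumed geometry (camera at the radar position, looking along the radar boresight) are illustrative, not the patent's transform.

```python
import math

def radar_cell_to_pixel(azimuth_deg, distance_m, focal_px, cx, cy, cam_height_m):
    """Map one radar measurement-plane point (azimuth, ground distance)
    to a camera pixel under a pinhole model, assuming the camera sits at
    the radar position at height cam_height_m and looks along the radar
    boresight. (cx, cy) is the principal point in pixels."""
    x = distance_m * math.sin(math.radians(azimuth_deg))  # lateral offset
    z = distance_m * math.cos(math.radians(azimuth_deg))  # forward range
    u = cx + focal_px * x / z
    v = cy + focal_px * cam_height_m / z  # ground point projects below the horizon
    return u, v
```

Applying this mapping to every cell of a capture region yields the image subregion that serves as the marker.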
The component region calculation unit 36 calculates component regions by extending the marker calculated by the marker calculation unit 35, using the edges of the camera image calculated by the edge calculation unit 34. The method of calculating the component regions in the component region calculation unit 36 will be described later.
The grouping unit 37 groups those of the component regions calculated by the component region calculation unit 36 that belong to the same object. As the result of the grouping, the grouping unit 37 obtains a target object region. The method of grouping the component regions in the grouping unit 37 will be described later.
Based on the target object region that results from the grouping processing in the grouping unit 37, the object identification unit 38 identifies the position, size, and shape of the target object, and further the type of the object (for example, large vehicle, small vehicle, two-wheeled vehicle, pedestrian, etc.). The method of identifying the target object in the object identification unit 38 will be described later. The object identification unit 38 outputs the identification result to an external security system or display unit.
Next, the power profile information generated by the information generation unit 31 will be described. Fig. 4 is a diagram showing an example of the power profile information in Embodiment 1 of the present invention. The horizontal axis of Fig. 4 represents the azimuth of the radar device R, and the vertical axis represents the distance from the radar device R. In the following description, the plane specified by the azimuth of the radar device R and the distance from the radar device R is referred to as the radar measurement plane.
In the example of Fig. 4, the horizontal axis is divided every 10° of azimuth and the vertical axis is divided every 10 meters of distance to form the cells. Further, in the present embodiment, the angular range and distance range of a cell are not limited to the above ranges; from the viewpoint of obtaining higher resolution, smaller ranges are preferable.
In addition, in Fig. 4, the shade of each cell of the power profile information represents the reflection intensity: the darker the color, the stronger the reflection intensity. Further, for simplicity of explanation, the color of the cells other than the specific cells is set to the same white.
In addition, in the present embodiment, the reflection intensity (representative value) of each cell is the maximum received power within the range of the cell. However, the present invention is not limited to this; other values, such as the average of the received power within the range of the cell, may also be used as the reflection intensity (representative value) of the cell. In the following, each cell of the power profile information shown in Fig. 4 is treated as a point where appropriate.
It follows that use Fig. 4 and Fig. 5 to illustrate about the capture region in capture region computing unit 32
Computational methods.
First, capture region computing unit 32 calculates capture point from the power distribution information shown in Fig. 4.Catch
Obtaining is some the maximal point of reflex strength in power distribution information.The computational methods of the maximal point of reflex strength,
Known method can also be used.Such as, certain specifically point and reflection of the point adjacent with this point are compared
Intensity, if more than this reflex strength specifically put reflex strength certain value more than adjacent point, then
This specifically can also be put the maximal point as reflex strength.
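The local-maximum test just described can be sketched as follows. The grid values, the 8-cell neighbourhood, and the margin of one intensity unit are illustrative assumptions chosen for the example, not values fixed by the present embodiment.

```python
def find_capture_points(power, margin=1.0):
    """Return (row, col) cells whose intensity exceeds all neighbours by `margin`.

    `power` is a 2D list indexed as power[distance_bin][azimuth_bin].
    """
    rows, cols = len(power), len(power[0])
    points = []
    for r in range(rows):
        for c in range(cols):
            neighbours = [
                power[rr][cc]
                for rr in range(max(r - 1, 0), min(r + 2, rows))
                for cc in range(max(c - 1, 0), min(c + 2, cols))
                if (rr, cc) != (r, c)
            ]
            if all(power[r][c] >= n + margin for n in neighbours):
                points.append((r, c))
    return points

grid = [
    [0, 0, 0, 0],
    [0, 5, 0, 0],
    [0, 0, 0, 7],
    [0, 0, 0, 0],
]
print(find_capture_points(grid))  # → [(1, 1), (2, 3)], the two isolated peaks
```

With the grid above, the two isolated peaks are returned as capture points, while the flat cells fail the margin test against their own neighbours.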
In the case of the power distribution information shown in Fig. 4, the capture region computing unit 32 computes the local maxima of the reflection intensity, i.e., capture points a1, a2, and a3.
Then, the capture region computing unit 32 treats the power distribution information as an image and, using a known image processing method such as region growing, computes the capture regions that respectively surround the capture points a1, a2, and a3. For the details of the region growing method, see Non-Patent Literature 1.
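A minimal sketch of the region growing referred to above (Non-Patent Literature 1 gives the actual method) is a breadth-first expansion from a capture point; the 4-connectivity and the fixed intensity threshold used here are simplifying assumptions.

```python
from collections import deque

def grow_region(power, seed, threshold):
    """Breadth-first expansion from `seed`: a 4-connected cell joins the
    capture region while its reflection intensity is at least `threshold`."""
    rows, cols = len(power), len(power[0])
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < rows and 0 <= cc < cols
                    and (rr, cc) not in region and power[rr][cc] >= threshold):
                region.add((rr, cc))
                queue.append((rr, cc))
    return region

power = [[0, 2, 0],
         [2, 5, 2],
         [0, 2, 0]]
region = grow_region(power, (1, 1), 2)  # cross-shaped region around the peak
```

Starting from the capture point (1, 1), the expansion gathers the peak cell and its four above-threshold neighbours into one capture region.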
Fig. 5 shows an example of the computation result of the capture regions in Embodiment 1 of the present invention. The horizontal direction of Fig. 5 corresponds to the azimuth angle of the radar device R, and the vertical direction corresponds to the distance from the radar device R. Capture regions A1, A2, and A3 shown in Fig. 5 are local regions surrounding capture points a1, a2, and a3, respectively; they are local regions in the radar measurement plane. In general, a capture region is less susceptible to noise than a capture point.
Next, the method of computing the markers in the marker computing unit 35 is described. The marker computing unit 35 computes, from a capture region, which is a local region in the radar measurement plane, a marker, which is a partial region in the plane of the camera image. In the following description, the plane specified by the horizontal and vertical directions of the camera image is referred to as the camera image plane. Note that the coordinates of the radar measurement plane and the coordinates of the camera image plane do not coincide; the marker computing unit 35 therefore performs coordinate transformations to compute a marker from a capture region.
In the following, the case where the marker computing unit 35 computes a marker from a capture region A corresponding to a target object T1 is described.
Specifically, the marker computing unit 35 sequentially performs three coordinate transformations: a transformation from the coordinates of the radar measurement plane to the three-dimensional coordinates of the radar measurement space, a transformation from the three-dimensional coordinates of the radar measurement space to the three-dimensional coordinates of the camera space, and a transformation from the three-dimensional coordinates of the camera space to the coordinates of the camera image plane.
The radar measurement space is the three-dimensional space scanned by the radar device R, and the camera space is the three-dimensional space imaged by the camera device C. If the installation positions of the radar device R and the camera device C differ, the radar measurement space and the camera space may not coincide.
Here, the azimuth range of the capture region A in the radar measurement plane is denoted θ1 to θ2, and its distance range is denoted d1 to d2. The azimuth range is determined by the minimum azimuth angle θ1 and the maximum azimuth angle θ2 of the capture region A, and the distance range is determined by the minimum distance d1 and the maximum distance d2 of the capture region A.
(Transformation from the coordinates of the radar measurement plane to the three-dimensional coordinates of the radar measurement space)
First, the transformation from the coordinates of the radar measurement plane to the three-dimensional coordinates of the radar measurement space is described. This transformation computes, from the azimuth range θ1 to θ2 and the distance range d1 to d2 of the capture region A, the three-dimensional position and size in the radar measurement space corresponding to the capture region A.
Fig. 6 shows an example of the three-dimensional coordinate system of the radar measurement space. The origin O and the axes Xr-Yr-Zr shown in Fig. 6 represent the three-dimensional coordinate system of the radar measurement space. The radar device R is installed on the Zr axis, and the height Hr corresponds to the installation height of the radar device R. The distance d from the radar device R shown in Fig. 7 corresponds to the distance d on the vertical axis of the radar measurement plane. The ground distance L is the distance along the ground (road surface) in the Xr-Yr plane to the target object T1, and the height h is the height of the target object T1. The position and shape of the target object T1 are depicted schematically.
The radar device R scans the radar measurement space shown in Fig. 6 about the Zr axis, taking the Yr-Zr plane at Xr = 0 as the direction of azimuth angle θ = 0°. The azimuth angle θ on the horizontal axis of the radar measurement plane then corresponds to the position of the scanning plane of the radar device R projected onto the Xr-Yr plane of the radar measurement space; for example, the angle that the projected scanning plane forms with the Yr axis corresponds to the azimuth angle θ. Fig. 6 shows the case where the target object T1 is located at the position corresponding to azimuth angle θ = 0°.
In general, the radar device R measures the reflection intensity as a function of the azimuth angle θ and the distance d, but it does not accurately detect the direction along the Zr axis of Fig. 6, i.e., the pitch angle φ in Fig. 6. That is, the radar device R cannot detect the height h of the target object T1 from the reflection intensity, and as a result it is difficult to detect the ground distance L to the target object T1.
The marker computing unit 35 in the present embodiment therefore presets a maximum possible height hp of the target object T1. The maximum possible height hp is the largest plausible value of the height of the target object T1; for example, when the target object T1 is a pedestrian, the maximum possible height hp is set to 2 meters. Note that at this stage it has not yet been determined what kind of object the target object T1 is; the maximum possible height hp is set based on, for example, the size and reflection intensity of the capture region corresponding to the target object T1.
Using the maximum possible height hp, the marker computing unit 35 computes, from the distance d in the radar measurement plane, the range of the ground distance L to the target object T1 in the radar measurement space.
Fig. 7 shows the relation among the distance d, the maximum possible height hp, and the ground distance L. Fig. 7 shows both the case where the signal reflected by the target object T1 is reflected near the ground (Zr = 0 in Fig. 7) and the case where it is reflected near the maximum possible height hp of the target object T1.
As shown in Fig. 7, for a distance d of a given reflection intensity, the ground distance L lies in the range between the ground distance L1 obtained when the reflection occurs near the ground and the ground distance L2 obtained when the reflection occurs near the maximum possible height hp.
For the distance range d1 to d2 of the capture region A, the marker computing unit 35 computes the ground distance L1 for distance d1 (L11) and the ground distance L2 for distance d1 (L12), and likewise computes the ground distance L1 for distance d2 (L21) and the ground distance L2 for distance d2 (L22). The marker computing unit 35 then determines the minimum value Lmin and the maximum value Lmax among L11, L12, L21, and L22. As a result, the marker computing unit 35 computes, from the distance range d1 to d2 of the capture region A, the ground distance range Lmin to Lmax along the Yr axis.
In addition, as described above, the azimuth angle θ on the horizontal axis of the radar measurement plane corresponds to the position of the scanning plane of the radar device R projected onto the Xr-Yr plane, so the marker computing unit 35 obtains, from the azimuth range θ1 to θ2, the azimuth range θ1 to θ2 of the target object T1 in the Xr-Yr plane.
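The ground distance range of this transformation (Figs. 6 and 7) can be sketched as follows, assuming a flat road surface: a reflection at slant distance d and height z lies at ground distance sqrt(d² − (Hr − z)²) from the foot of the radar, with z swept over ground level and the maximum possible height hp. The specific trigonometry is an assumption, as the embodiment leaves the exact computation open.

```python
import math

def ground_distance_range(d1, d2, radar_height, max_possible_height):
    """Return (Lmin, Lmax) on the Yr axis for the slant-distance range [d1, d2].

    For a slant distance d, a reflection at height z lies at ground distance
    sqrt(d^2 - (radar_height - z)^2); z is evaluated at 0 (ground) and at
    the maximum possible height hp, matching L11/L12/L21/L22 above.
    """
    candidates = []
    for d in (d1, d2):
        for z in (0.0, max_possible_height):
            dz = radar_height - z
            if d > abs(dz):  # reflection geometrically possible at this height
                candidates.append(math.sqrt(d * d - dz * dz))
    return min(candidates), max(candidates)

# Radar at Hr = 3 m, pedestrian with hp = 2 m, capture region at 5-13 m slant range.
lmin, lmax = ground_distance_range(5.0, 13.0, 3.0, 2.0)
```

With these illustrative numbers, Lmin is the ground-level reflection at d1 (sqrt(25 − 9) = 4 m) and Lmax the hp-level reflection at d2.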
(Transformation from the three-dimensional coordinates of the radar measurement space to the three-dimensional coordinates of the camera space)
Next, the transformation from the three-dimensional coordinates of the radar measurement space to the three-dimensional coordinates of the camera space is described. The installation position of the radar device R and the installation position of the camera device C are each known, so this transformation is performed using a standard coordinate mapping.
By performing this transformation, even when the installation position of the radar device R and the installation position of the camera device C differ, a marker, which is a partial region in the camera image plane, can be computed from a capture region, which is a local region in the radar measurement plane.
In the following, for simplicity of description, the camera space is assumed to be identical to the radar measurement space with the coordinate system Xr-Yr-Zr. That is, the azimuth range θ1 to θ2 and the Yr-axis ground distance range Lmin to Lmax in the radar measurement space are used as-is in the camera space.
(Transformation from the three-dimensional coordinates of the camera space to the coordinates of the camera image plane)
Next, the transformation from the three-dimensional coordinates of the camera space to the coordinates of the camera image plane is described. This transformation computes, from the azimuth range θ1 to θ2 and the Yr-axis ground distance range Lmin to Lmax in the camera space (identical, in the following description, to the radar measurement space), the corresponding ranges on the camera image plane. The range on the camera image plane obtained by this transformation, i.e., the partial region, is the marker corresponding to the capture region A.
Here, first, the method of computing, from the Yr-axis ground distance range Lmin to Lmax in the camera space, the corresponding range on the camera image plane is described.
Fig. 8 is a diagram for explaining the coordinate transformation from the three-dimensional coordinates of the camera space to the camera image plane. Fig. 9 shows an example of the camera image plane; it schematically shows the image captured by the camera device C in the space shown in Fig. 8. Although Fig. 9 depicts the camera image plane for purposes of explanation, the marker computing unit 35 uses the actually captured image, i.e., the image obtained by the camera image acquisition unit 33.
The origin O and the axes Xr-Yr-Zr shown in Fig. 8 represent the three-dimensional coordinate system of the camera space. The camera device C is installed on the Zr axis, and the height Hc corresponds to the installation height of the camera device C. In the following, the position of the camera device C, more precisely the central point of the image captured by the camera device C, is denoted point C, and point C is located at the height Hc on the Zr axis.
The angle ∠PCQ shown in Fig. 8 is the vertical field-of-view range of the camera device C. The points P and Q shown in Fig. 8 and Fig. 9 correspond to the lower limit and the upper limit of the field-of-view range of the camera device C, respectively, and are computed from the field-of-view range of the camera device C.
In addition, the Yr-Zr plane at Xr = 0 in Fig. 8 corresponds to the line segment PQ in Fig. 9, and Xr = 0 in Fig. 8 corresponds to the center of the horizontal field-of-view range of the camera device C.
The vanishing point F shown in Fig. 8 is the point at infinity, in the camera image plane, of the road surface shown in Fig. 9. The vanishing point F is computed by a known method.
The ground distance range Lmin to Lmax shown in Fig. 8 is the ground distance range obtained in the transformation from the coordinates of the radar measurement plane to the three-dimensional coordinates of the radar measurement space. In the following, the ground distance range Lmin to Lmax is described as the range from point K to point J on the Yr axis.
As shown in Fig. 8 and Fig. 9, the points on the camera image plane corresponding to point J and point K are denoted point V and point U, respectively. Computing the range on the camera image plane corresponding to the Yr-axis ground distance range Lmin to Lmax thus amounts to computing the positions of point U and point V on the camera image plane.
First, the method of computing the position of point U on the camera image plane is described.
For the vanishing point F, point P, and point Q, the relation ∠PCF : ∠PCQ = PF : PQ holds. ∠PCF and ∠PCQ are angles in the camera space shown in Fig. 8, and PF and PQ are lengths in the camera image plane shown in Fig. 9. Here, ∠PCQ is the vertical field-of-view range of the camera device C and PQ is the vertical width of the camera image; both are known values determined by the specifications of the camera device C. In addition, the vanishing point F is computed by a known method, so PF is also known. ∠PCF is therefore computed from the above relation.
Then, as shown in Fig. 8, ∠OKC is computed, using trigonometric functions and the like, from the length of OC, i.e., the height Hc, and the length of OK, i.e., the ground distance Lmin. The straight line connecting point C and point F in Fig. 8 is parallel to the Yr axis, so ∠UCF is identical to the computed ∠OKC.
Then, for the computed ∠PCF and ∠UCF, the relation ∠UCF : ∠PCF = UF : PF holds. PF and UF are lengths in the camera image plane shown in Fig. 9; the length of UF is computed from this relation.
The position of point U on the camera image plane shown in Fig. 9 is computed from the computed UF. The position of point V on the camera image plane shown in Fig. 9 is computed in the same manner as point U.
As described above, the marker computing unit 35 computes the positions of point U and point V on the camera image plane from the Yr-axis ground distance range Lmin to Lmax.
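The chain of relations above (∠PCF : ∠PCQ = PF : PQ, ∠UCF = ∠OKC, ∠UCF : ∠PCF = UF : PF) can be sketched as follows. The numerical field of view, image height, and camera height are illustrative assumptions; note also that the stated proportionality between angle and pixel offset is the approximation used in the description, not an exact pinhole projection.

```python
import math

def uf_from_ground_distance(L, cam_height, fov_vertical_deg, PF, PQ):
    """Length UF (pixels) from the vanishing point F down to the image of a
    ground point at distance L, via ∠UCF : ∠PCF = UF : PF."""
    angle_PCF = fov_vertical_deg * (PF / PQ)             # ∠PCF from ∠PCF:∠PCQ = PF:PQ
    angle_UCF = math.degrees(math.atan2(cam_height, L))  # ∠UCF = ∠OKC, since CF ∥ Yr
    return PF * angle_UCF / angle_PCF                    # UF from ∠UCF:∠PCF = UF:PF

# Illustrative values: camera 1 m high, 60° vertical FOV, 600-pixel-tall image
# with the vanishing point 300 pixels above the bottom edge (PF = 300, PQ = 600).
uf = uf_from_ground_distance(1.0, 1.0, 60.0, 300.0, 600.0)
```

With these values, a ground point 1 m away subtends ∠UCF = 45° against ∠PCF = 30°, placing U at UF = 450 pixels from F; point V follows from the same function evaluated at Lmax.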
Next, the method of computing, from the azimuth range θ1 to θ2 in the camera space, the corresponding range on the camera image plane is described.
The azimuth angle in the camera space corresponds to the horizontal distance from PQ in the camera image plane shown in Fig. 9. In addition, the horizontal field-of-view range of the camera device C is a known range determined by its specifications, and corresponds to the left end and right end of the horizontal direction of the camera image plane. Based on the azimuth range θ1 to θ2 and the horizontal field-of-view range of the camera device C, the marker computing unit 35 computes the corresponding range on the camera image plane, i.e., the horizontal distances from PQ. The vertical lines θ1 and θ2 shown in Fig. 9 correspond to θ1 and θ2 of the azimuth range.
As described above, the marker computing unit 35 computes, from the azimuth range θ1 to θ2 and the Yr-axis ground distance range Lmin to Lmax in the camera space, the corresponding ranges on the camera image plane. The marker computing unit 35 then takes, as the marker, the rectangular frame surrounding the computed ranges. The marker B of Fig. 9 is the marker corresponding to the capture region A: it is the rectangle bounded by the horizontal straight lines through the computed points U and V and by the lines θ1 and θ2.
Fig. 10 shows an example of the computation result of the markers corresponding to the capture regions shown in Fig. 5. Fig. 10 shows markers B1, B2, and B3 on the camera image plane, corresponding respectively to capture regions A1, A2, and A3 shown in Fig. 5. In Fig. 10, markers B1, B2, and B3 are superimposed on the edges of the camera image computed by the edge computing unit 34. As shown in Fig. 10, the marker on the camera image plane is computed as a rectangle from each capture region in the radar measurement plane.
Note that the marker computation by the coordinate transformations described above is an example, and the present invention is not limited to it. The marker computing unit 35 may compute a marker by transforming a capture region based on the range of azimuth angles and the range of distances measurable by the radar device R in real space and on the range that the camera device C can capture. The range of azimuth angles and the range of distances measurable by the radar device R in real space are determined in advance by the installation position and the specifications of the radar device R. Likewise, the range that the camera device C can capture is determined in advance by the installation position and the specifications of the camera device C.
In addition, the markers described above are rectangles, but the present invention is not limited to this; a marker may have a non-rectangular shape.
Next, the method of computing the component regions in the component region computing unit 36 is described.
First, the component region computing unit 36 superimposes the markers on the edges and, when a marker overlaps an edge, divides the marker. In the case of Fig. 10, the marker B2 overlaps an edge, so the component region computing unit 36 divides the marker B2.
Fig. 11 shows an example of the case where the component region computing unit 36 divides a marker. As shown in Fig. 11, the marker B2 of Fig. 10, which overlaps an edge, is divided into markers B21 and B22.
Then, the component region computing unit 36 performs region expansion, using each marker as a seed of the expansion and the edges as boundaries of the expansion, by a known image processing method such as the watershed algorithm, and thereby computes the component regions. A component region is a partial region of the camera image plane corresponding to a part constituting an object.
Fig. 12 shows an example of the result of the region expansion by the component region computing unit 36. As the result of the region expansion, a component region C1 is computed from the markers B1 and B22, a component region C2 is computed from the marker B21, and a component region C3 is computed from the marker B3.
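A minimal, watershed-like sketch of the edge-bounded region expansion described above: each marker's cells flood outward but never cross an edge cell, and first-come ownership approximates the watershed split between competing seeds. The cell-set data representation is an illustrative assumption; a real implementation would use the intensity-ordered flooding of the watershed algorithm proper.

```python
from collections import deque

def expand_markers(shape, markers, edges):
    """Multi-seed flood fill: each marker id expands over the image grid but
    never crosses an edge cell; ties go to whichever seed arrives first."""
    rows, cols = shape
    label = {}
    queue = deque()
    for marker_id, cells in markers.items():
        for cell in cells:
            label[cell] = marker_id
            queue.append(cell)
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < rows and 0 <= cc < cols
                    and (rr, cc) not in label and (rr, cc) not in edges):
                label[(rr, cc)] = label[(r, c)]
                queue.append((rr, cc))
    return label

# Two seeds on a 1x5 strip, separated by one edge cell at (0, 2).
labels = expand_markers((1, 5), {1: [(0, 0)], 2: [(0, 4)]}, {(0, 2)})
```

In this toy strip, seed 1 claims the cells left of the edge and seed 2 the cells right of it, while the edge cell itself stays unlabeled, mirroring how the edges of Fig. 10 bound each component region.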
Next, the method by which the grouping processing unit 37 groups the component regions is described.
The grouping processing unit 37 groups, among the component regions computed by the component region computing unit 36, the component regions belonging to the same object. Whether component regions belong to the same object is judged from one or both of information obtained from the camera image and information obtained from the radar measurement.
The information obtained from the camera image is, for example, the texture of each component region in the camera image. The grouping processing unit 37 compares the textures of neighboring component regions and, when the textures are similar, groups the neighboring component regions. Whether textures are similar may be judged, for example, by a prescribed threshold.
The information obtained from the radar measurement is, for example, Doppler information. The Doppler information is information on the velocity at each point in the radar measurement plane. Here, the Doppler information is information in the radar measurement plane, whereas a component region is a region on the camera image plane. Therefore, when judging by means of the Doppler information whether component regions belong to the same object, the component regions need to be coordinate-transformed into regions in the radar measurement plane.
The transformation of a component region into a region in the radar measurement plane is performed in the reverse order of the above-described method of computing a marker from a capture region.
Fig. 13 shows an example of the result of transforming the component regions into regions in the radar measurement plane. The horizontal direction of Fig. 13 corresponds to the azimuth angle of the radar device R, the vertical direction corresponds to the distance from the radar device R, and each point (each cell) contains Doppler information. The regions D1, D2, and D3 of Fig. 13 correspond respectively to the component regions C1, C2, and C3 shown in Fig. 12.
The grouping processing unit 37 compares the Doppler information contained in the regions D1, D2, and D3 and, when the Doppler information is similar, groups the component regions that neighbor each other on the camera image plane. Whether the Doppler information is similar may be judged by a prescribed threshold.
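The threshold-based Doppler grouping can be sketched as a union of adjacent regions whose mean Doppler velocities differ by less than the prescribed threshold. The data layout (mean velocity plus a neighbour set per region) and the union-find merge are illustrative assumptions; the embodiment only requires that similar Doppler information cause neighbouring regions to be grouped.

```python
def group_by_doppler(regions, threshold=1.0):
    """Union adjacent regions whose mean Doppler velocities differ by < threshold.

    `regions` maps a region name to (mean_doppler, set_of_neighbour_names).
    Returns the grouped name-sets (connected components of similar regions)."""
    parent = {name: name for name in regions}

    def find(name):                      # root of the union-find tree
        while parent[name] != name:
            name = parent[name]
        return name

    for name, (doppler, neighbours) in regions.items():
        for other in neighbours:
            if other in regions and abs(doppler - regions[other][0]) < threshold:
                parent[find(other)] = find(name)   # merge the two groups

    groups = {}
    for name in regions:
        groups.setdefault(find(name), set()).add(name)
    return sorted(groups.values(), key=lambda s: sorted(s))

# C1 and C2 neighbour each other with similar Doppler; C3 stands alone.
result = group_by_doppler({"C1": (10.0, {"C2"}),
                           "C2": (10.3, {"C1"}),
                           "C3": (3.0, set())})
```

With these illustrative velocities, C1 and C2 merge into one group (matching the target object region E1 of Fig. 14) and C3 remains alone (matching E2).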
Fig. 14 shows an example of the result of the grouping by the grouping processing unit 37. As shown in Fig. 14, the component regions C1 and C2 of Fig. 12 are grouped into a target object region E1, while the component region C3 of Fig. 12 is not grouped with any other region and becomes a target object region E2.
In the example shown in Fig. 14, the grouping processing unit 37 obtains, as the result of the grouping, the two target object regions E1 and E2.
Next, the method by which the object determination unit 38 discriminates the target object is described.
Based on the result of the grouping by the grouping processing unit 37, i.e., the target object regions, the object determination unit 38 discriminates the position, size, and shape of the target object, and further the class of the object. In Embodiment 1 of the present invention, the concrete discrimination method in the object determination unit 38 is not limited. For example, the object determination unit 38 may hold in advance template models of the size and shape of the target object region corresponding to each class of object, and perform the discrimination by comparing the template models with the result of the grouping by the grouping processing unit 37, i.e., the target object regions. Alternatively, the object determination unit 38 may perform the discrimination by comparison with template models of the reflection intensity distribution corresponding to each class of object.
For example, the case of discriminating the target object regions E1 and E2 shown in Fig. 14 using template models is described. The object determination unit 38 compares the target object region E1 with the plurality of template models it holds and determines that the target object region E1 matches the template model of a vehicle. Likewise, the object determination unit 38 compares the target object region E2 with the plurality of template models it holds and determines that the target object region E2 matches the template model of a pedestrian.
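A minimal sketch of the template comparison, assuming size-only templates: each class holds a nominal (width, height) and the region is assigned to the class with the smallest relative size difference. The template dimensions, the score, and the class names are illustrative assumptions; the embodiment leaves the matching criterion open.

```python
def classify_region(width, height, templates):
    """Return the class whose (width, height) template is closest to the
    region, using a simple relative-difference score."""
    def score(template):
        tw, th = template
        return abs(width - tw) / tw + abs(height - th) / th
    return min(templates, key=lambda name: score(templates[name]))

# Illustrative template models (width, height) in meters per object class.
templates = {
    "vehicle": (4.5, 1.6),
    "pedestrian": (0.6, 1.7),
    "two-wheeler": (1.8, 1.5),
}
```

A 4.0 m by 1.5 m target object region then matches the vehicle template (as E1 does in Fig. 14), while a 0.5 m by 1.8 m region matches the pedestrian template (as E2 does).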
According to the present embodiment described above, the detection accuracy of the target object can be improved by transforming a capture region in the radar measurement plane into a marker on the camera image plane and superimposing the marker on the camera image. That is, by efficiently superimposing the measurement information of the radar and the measurement information of the camera, the detection accuracy of the target object can be improved.
(Embodiment 2)
Fig. 15 is a block diagram showing the main structure of an object detection device 150 according to Embodiment 2 of the present invention. In Fig. 15, structures common to Fig. 3 are given the same reference labels as in Fig. 3, and their detailed description is omitted. The object detection device 150 shown in Fig. 15 has a structure in which the information generating unit 31 and the capture region computing unit 32 of the object detection device 30 shown in Fig. 3 are replaced with an information generating unit 151 and a capture region computing unit 152, respectively.
Like the information generating unit 31 of Embodiment 1, the information generating unit 151 generates power distribution information. In addition, the information generating unit 151 generates, from the delay profile obtained by the radar device R, Doppler distribution information representing the Doppler velocity of each cell. In the Doppler distribution information, the horizontal axis represents the azimuth angle and the vertical axis represents the distance.
The capture region computing unit 152 computes the capture regions based on the power distribution information and the Doppler distribution information.
Specifically, the capture region computing unit 152 computes a capture region from the power distribution information according to the method described in Embodiment 1. Then, the capture region computing unit 152 compares the Doppler velocities of the points (cells) contained in the capture region and judges whether the Doppler velocities agree. The capture region computing unit 152 removes from the capture region the points (cells) whose Doppler distribution values do not agree.
The capture region computing unit 152 outputs the computed capture regions to the marker computing unit 35. From the marker computing unit 35 onward, the same processing as described in Embodiment 1 is performed.
According to the present embodiment described above, by removing a part (cells) from a capture region using the Doppler velocity, it is possible to avoid the situation where reflection intensities reflected from different objects are included in one capture region.
Note that, in the present embodiment, the capture region computing unit 152 computes the capture regions based on the power distribution information and the Doppler distribution information, but it may instead compute the capture regions based on the Doppler distribution information alone.
(Embodiment 3)
Fig. 16 is a block diagram showing the main structure of an object detection device 160 according to Embodiment 3 of the present invention. In Fig. 16, structures common to Fig. 3 are given the same reference labels as in Fig. 3, and their detailed description is omitted. The object detection device 160 shown in Fig. 16 has a structure in which a specification frame determination unit 161 is inserted between the grouping processing unit 37 and the object determination unit 38 of the object detection device 30 shown in Fig. 3.
The specification frame determination unit 161 obtains a specification frame that covers the result of the grouping in the grouping processing unit 37, i.e., the target object region. The specification frame is a frame reflecting the shape of the target object, for example a rectangular frame.
The specification frame determination unit 161 surrounds the target object region with the obtained specification frame, thereby interpolating the grouping of a target object region that is difficult to group in the grouping processing unit 37.
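For the rectangular case, the specification frame can be sketched as the axis-aligned bounding box of the cells of a target object region, with the frame's interior interpolating the cells the grouping missed. The cell-set representation is an illustrative assumption.

```python
def specification_frame(cells):
    """Axis-aligned rectangle enclosing a target object region given as
    (row, col) cells; returns (top, left, bottom, right)."""
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    return min(rows), min(cols), max(rows), max(cols)

def frame_cells(frame):
    """All cells covered by the frame: the interpolation fills the gaps that
    the grouping step left inside the target object region."""
    top, left, bottom, right = frame
    return {(r, c) for r in range(top, bottom + 1) for c in range(left, right + 1)}

frame = specification_frame({(1, 1), (3, 4)})  # sparse region of two cells
```

Here the two sparse cells yield the frame (1, 1, 3, 4), whose 12 covered cells include interior cells such as (2, 2) that the grouping alone did not assign, bringing the region's shape closer to the object's shape.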
According to the present embodiment described above, by interpolating the grouping using the specification frame, the shape of the target object region can be brought closer to the shape of the object, and the accuracy of the object determination in the object determination unit 38 can be improved.
(Embodiment 4)
Fig. 17 is a block diagram showing the main structure of an object detection device 170 according to Embodiment 4 of the present invention. In Fig. 17, structures common to Fig. 3 are given the same reference labels as in Fig. 3, and their detailed description is omitted. The object detection device 170 shown in Fig. 17 has a structure in which a region tracking unit 171 is inserted between the grouping processing unit 37 and the object determination unit 38 of the object detection device 30 shown in Fig. 3.
The region tracking unit 171 tracks, across different times, the position and shape of the result of the grouping by the grouping processing unit 37, i.e., the target object region.
Specifically, the region tracking unit 171 holds the target object region at a certain detection timing t1. The region tracking unit 171 then receives from the grouping processing unit 37 the target object region at the next detection timing t2, and links the target object region at detection timing t1 with the target object region at detection timing t2. The region tracking unit 171 then tracks the change in shape and the change in position of the linked target object regions, and detects the movement of the target object region.
The region tracking unit 171 outputs the information about the movement of the target object region to the object determination unit 38. When discriminating the target object from the target object region, the object determination unit 38 may refer to the information about the movement of the target object region. In addition, after the target object has been discriminated, the object determination unit 38 may output the information about the movement of the target object, together with the information of the target object, to an external display unit, safety system, or the like.
According to the present embodiment described above, by tracking the position and shape of the target object region at different detection timings and detecting the movement of the target object region, the accuracy of the object discrimination can be improved, and information relating to the movement of the object can be obtained.
Note that the embodiments described above may be combined as appropriate. For example, in the object detection device 170 of Embodiment 4, the specification frame determination unit 161 described in Embodiment 3 may be inserted between the grouping processing unit 37 and the region tracking unit 171. With such a structure, the shape of the target object region can be brought closer to the shape of the object, and the accuracy with which the region tracking unit 171 detects the movement of the target object can be improved.
In each of the above embodiments, the case where the present invention is configured with hardware has been described by way of example, but the present invention can also be realized with software.
The method of circuit integration is not limited to LSI; it may be realized with a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connections or settings of circuit cells inside the LSI can be reconfigured, may also be used.
Furthermore, if integrated circuit technology replacing LSI emerges with the progress of semiconductor technology or other derivative technologies, the functional blocks may naturally be integrated using that technology. Application of biotechnology or the like is also a possibility.
Industrial Applicability
The object detection device and the object detection method of the present invention are suitable for an in-vehicle radar device and camera device, a road infrastructure system radar device and camera device, and a facility monitoring system radar device and camera device. In the case of an in-vehicle radar device and camera device, pedestrians, two-wheelers, and other vehicles around the host vehicle can be detected, an alarm can be issued to the driver of the host vehicle or the driving system can be controlled, and the danger of a collision can be avoided. In the case of an infrastructure system radar device and camera device, pedestrians, two-wheelers, vehicles, and the like on roads and at intersections can be detected; traffic can be monitored while the infrastructure system is controlled and information is transmitted to vehicle drivers; and the traffic volume can be managed and traffic accidents avoided. In the case of a radar device and camera device for a monitoring system of a specific facility, aircraft and birds in the air as well as various vehicles and intruders on the ground can be detected, information can be transmitted to a security system, and intrusion by suspicious persons can be prevented.
Reference Signs List
30, 150, 160, 170 object detection device
31, 151 information generating unit
32, 152 capture region computing unit
33 camera image acquiring unit
34 edge computing unit
35 label computing unit
36 component region computing unit
37 grouping processing unit
38 object determining unit
161 specification frame determining unit
171 area tracking unit
Claims (10)
1. An object detection device comprising:
an information generating unit that, for each transmission direction of a radar signal transmitted by a radar device, and for each of a plurality of cells obtained by dividing the distance from the radar device at prescribed intervals, calculates a reflection intensity, which is a representative value of the power of a received signal obtained when the radar device receives a reflected signal produced by one or more objects reflecting the radar signal, and generates power distribution information for each of the plurality of cells using the reflection intensity;
a capture region computing unit that computes, as a capture point capturing the one or more objects, the cell showing the maximum value of the reflection intensity in the power distribution information of the plurality of cells, and computes a capture region, which is the one or more cells surrounding the capture point;
an edge extraction unit that extracts edges of the one or more objects contained in an image obtained by a camera device;
a label computing unit that transforms the capture region into a sub-region of the image based on the measurement range of the radar device and the imaging range of the camera device, and computes the sub-region as a label, which is the region of the image corresponding to the capture region;
a component region computing unit that expands the label using the extracted edges as boundaries, and computes component regions corresponding to the parts constituting the one or more objects;
a grouping processing unit that groups the component regions into a target object region; and
an object determining unit that determines the one or more objects from the target object region and outputs the determination result.
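The capture-point and capture-region computation recited in claim 1 can be illustrated as a peak search over a direction-by-range power map. The fixed neighborhood size and the peak-suppression step below are assumptions made for the sketch; the claim does not fix either:

```python
import numpy as np

def capture_regions(power, num_targets=1, neighborhood=1):
    """Find capture points (maximum-reflection-intensity cells) in a
    direction x range power map and return the surrounding cells as
    capture regions. Sketch only: neighborhood size and multi-target
    peak suppression are illustrative assumptions."""
    p = power.astype(float).copy()
    regions = []
    for _ in range(num_targets):
        d, r = np.unravel_index(np.argmax(p), p.shape)  # capture point
        d0, d1 = max(0, d - neighborhood), min(p.shape[0], d + neighborhood + 1)
        r0, r1 = max(0, r - neighborhood), min(p.shape[1], r + neighborhood + 1)
        regions.append(((d, r), (slice(d0, d1), slice(r0, r1))))
        p[d0:d1, r0:r1] = -np.inf  # suppress so the next peak is a new object
    return regions

power = np.array([[1, 2, 1],
                  [2, 9, 3],
                  [1, 3, 1]], dtype=float)
print(capture_regions(power)[0][0])  # strongest cell, here at row 1, col 1
```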
2. The object detection device according to claim 1, wherein
the label computing unit holds in advance height information representing a maximum probable value as the height of the one or more objects, and transforms the capture region using the height information as the height of the one or more objects to compute the label region.
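The height-based transformation of claim 2 might be sketched with a simple pinhole model: the azimuth of the capture cell fixes the horizontal image position, and the preset maximum object height fixes the vertical span of the label. The co-located camera, the field-of-view parameters, and the horizon-level ground approximation are all illustrative assumptions, not taken from the patent:

```python
import math

def capture_to_image_label(rng_m, azim_rad, max_height_m,
                           img_w, img_h, hfov_rad, vfov_rad):
    """Map a radar capture cell (range, azimuth) to an image sub-region
    ("label"), using a preset maximum object height as in claim 2.
    Assumes a pinhole camera co-located and aligned with the radar;
    all parameters here are illustrative."""
    # horizontal pixel position from azimuth angle
    u = int((azim_rad / hfov_rad + 0.5) * img_w)
    # vertical span: from the ground (approximated at the horizon line)
    # up to the elevation angle subtended by the assumed maximum height
    elev_top = math.atan2(max_height_m, rng_m)
    v_bottom = img_h // 2
    v_top = int((0.5 - elev_top / vfov_rad) * img_h)
    return u, (max(0, v_top), min(img_h, v_bottom))

# object captured at 20 m range, 0.1 rad azimuth, assumed max height 2 m
print(capture_to_image_label(20.0, 0.1, 2.0, 640, 480,
                             math.radians(60), math.radians(40)))
```

A farther object subtends a smaller elevation angle, so the label shrinks with range, which is why the claim needs a height assumption at all.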
3. The object detection device according to claim 1 or claim 2, wherein
the component region computing unit superimposes the label on the extracted edges and divides the label along the edge overlaps.
4. The object detection device according to any one of claims 1 to 3, wherein
the information generating unit calculates a Doppler velocity for each of the cells based on a delay profile obtained by the radar device from the received signal, and generates Doppler distribution information representing the Doppler velocity of each of the cells, and
the capture region computing unit compares the Doppler velocities of the one or more cells contained in the capture region, and removes from the capture region any cell whose value in the Doppler distribution is inconsistent.
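The Doppler-consistency removal of claim 4 could be sketched as follows. The claim does not define how "inconsistent" is judged, so a median-plus-tolerance rule is assumed here purely for illustration:

```python
import numpy as np

def filter_by_doppler(region_cells, doppler_map, tol=0.5):
    """Remove cells whose Doppler velocity disagrees with the capture
    region's dominant value (claim 4). The median reference and the
    tolerance `tol` are assumptions; the claim leaves the consistency
    test unspecified."""
    vels = np.array([doppler_map[c] for c in region_cells])
    ref = np.median(vels)  # dominant velocity of the region
    return [c for c, v in zip(region_cells, vels) if abs(v - ref) <= tol]

# three cells moving at ~3 m/s plus one near-static cell (e.g. clutter)
doppler = {(0, 0): 3.0, (0, 1): 3.1, (1, 0): 2.9, (1, 1): -0.2}
cells = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(filter_by_doppler(cells, doppler))
```

Removing velocity-inconsistent cells keeps a capture region from spanning two objects that happen to be adjacent in range and direction but move differently.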
5. The object detection device according to any one of claims 1 to 4, further comprising:
a specification frame determining unit that sets a specification frame enclosing the target object region, and interpolates the grouping of the target object region using the specification frame.
6. The object detection device according to any one of claims 1 to 5, further comprising:
an area tracking unit that tracks changes in the shape of the target object region over time, and detects information on the movement of the target object region.
7. An in-vehicle radar device comprising:
the object detection device according to any one of claims 1 to 6; and
a radar device connected to the object detection device.
8. A radar device for a road infrastructure system, comprising:
the object detection device according to any one of claims 1 to 6; and
a radar device connected to the object detection device.
9. A radar device for a monitoring system, comprising:
the object detection device according to any one of claims 1 to 6; and
a radar device connected to the object detection device.
10. An object detection method comprising the steps of:
for each transmission direction of a radar signal transmitted by a radar device, and for each of a plurality of cells obtained by dividing the distance from the radar device at prescribed intervals, calculating a reflection intensity, which is a representative value of the power of a received signal obtained when the radar device receives a reflected signal produced by one or more objects reflecting the radar signal, and generating power distribution information for each of the plurality of cells using the reflection intensity;
computing, as a capture point capturing the one or more objects, the cell showing the maximum value of the reflection intensity in the power distribution information of the plurality of cells, and computing a capture region, which is the one or more cells surrounding the capture point;
extracting edges of the one or more objects contained in an image obtained by a camera device;
transforming the capture region into a sub-region of the image based on the measurement range of the radar device and the imaging range of the camera device, and computing the sub-region as a label, which is the region of the image corresponding to the capture region;
expanding the label using the extracted edges as boundaries, and computing component regions corresponding to the parts constituting the one or more objects;
grouping the component regions into a target object region; and
determining the one or more objects from the target object region and outputting the determination result.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015-027514 | 2015-02-16 | ||
JP2015027514 | 2015-02-16 | ||
JP2015-193173 | 2015-09-30 | ||
JP2015193173A JP6593588B2 (en) | 2015-02-16 | 2015-09-30 | Object detection apparatus and object detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105893931A true CN105893931A (en) | 2016-08-24 |
Family
ID=56761164
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610008658.6A Pending CN105893931A (en) | 2015-02-16 | 2016-01-07 | Object detection apparatus and method |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP6593588B2 (en) |
CN (1) | CN105893931A (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106908783A (en) * | 2017-02-23 | 2017-06-30 | 苏州大学 | Obstacle detection method based on multi-sensor information fusion |
CN108226917A (en) * | 2016-12-09 | 2018-06-29 | 梅塔建筑株式会社 | High-accuracy emergency case detecting system based on radar |
CN108932869A (en) * | 2017-05-18 | 2018-12-04 | 松下电器(美国)知识产权公司 | Vehicular system, information of vehicles processing method, recording medium, traffic system, infrastructure system and information processing method |
WO2018218680A1 (en) * | 2017-06-02 | 2018-12-06 | 华为技术有限公司 | Obstacle detection method and device |
CN109118537A (en) * | 2018-08-21 | 2019-01-01 | 加特兰微电子科技(上海)有限公司 | A kind of picture matching process, device, equipment and storage medium |
CN109143220A (en) * | 2017-06-16 | 2019-01-04 | 电装波动株式会社 | Vehicle identifier, vehicle identification system, storage medium |
CN110660186A (en) * | 2018-06-29 | 2020-01-07 | 杭州海康威视数字技术股份有限公司 | Method and device for identifying target object in video image based on radar signal |
CN110940974A (en) * | 2018-09-21 | 2020-03-31 | 丰田自动车株式会社 | Object detection device |
CN111325088A (en) * | 2018-12-14 | 2020-06-23 | 丰田自动车株式会社 | Information processing system, program, and information processing method |
CN111508272A (en) * | 2019-01-31 | 2020-08-07 | 斯特拉德视觉公司 | Method and apparatus for providing robust camera-based object distance prediction |
CN111971579A (en) * | 2018-03-30 | 2020-11-20 | 三菱电机株式会社 | Object identification device |
CN112119330A (en) * | 2018-05-14 | 2020-12-22 | 三菱电机株式会社 | Object detection device and object detection method |
CN112581771A (en) * | 2019-09-30 | 2021-03-30 | 丰田自动车株式会社 | Driving control device for automatic driving vehicle, target object for parking, and driving control system |
CN112703424A (en) * | 2018-09-21 | 2021-04-23 | 玛尔斯登集团 | Anti-collision and motion control system and method |
CN113711583A (en) * | 2019-04-25 | 2021-11-26 | 日本电信电话株式会社 | Object information processing device, object information processing method, and object information processing program |
CN113740355A (en) * | 2020-05-29 | 2021-12-03 | 清华大学 | Boundary protection method and system for ray detection robot |
CN113811788A (en) * | 2019-05-14 | 2021-12-17 | 三菱电机株式会社 | Vehicle-mounted object detection system |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10989791B2 (en) | 2016-12-05 | 2021-04-27 | Trackman A/S | Device, system, and method for tracking an object using radar data and imager data |
JP6888950B2 (en) * | 2016-12-16 | 2021-06-18 | フォルシアクラリオン・エレクトロニクス株式会社 | Image processing device, external world recognition device |
KR101881458B1 (en) * | 2017-03-09 | 2018-07-25 | 건아정보기술 주식회사 | The region of interest extraction system for moving object |
JP7062878B2 (en) * | 2017-03-27 | 2022-05-09 | 沖電気工業株式会社 | Information processing method and information processing equipment |
KR101972361B1 (en) * | 2018-08-10 | 2019-04-25 | (주)이젠정보통신 | control system for lighting time of cross walk signal lamp |
US11009590B2 (en) * | 2018-08-29 | 2021-05-18 | Aptiv Technologies Limited | Annotation of radar-profiles of objects |
CN110874927A (en) * | 2018-08-31 | 2020-03-10 | 百度在线网络技术(北京)有限公司 | Intelligent road side unit |
JP7461160B2 (en) * | 2020-02-21 | 2024-04-03 | Jrcモビリティ株式会社 | Three-dimensional information estimation system, three-dimensional information estimation method, and computer-executable program |
JP7452333B2 (en) | 2020-08-31 | 2024-03-19 | 株式会社デンソー | LIDAR correction parameter generation method, LIDAR evaluation method, and LIDAR correction device |
EP4099302A1 (en) * | 2021-06-02 | 2022-12-07 | Arlo Technologies, Inc. | Multisensor security system with aircraft monitoring |
CN113888871B (en) * | 2021-10-20 | 2023-05-05 | 上海电科智能系统股份有限公司 | Automatic handling linkage system and method for expressway traffic incidents |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1980322A (en) * | 2005-12-07 | 2007-06-13 | 日产自动车株式会社 | Object detecting system and object detecting method |
EP1837804A1 (en) * | 2006-03-22 | 2007-09-26 | Nissan Motor Co., Ltd. | Object detection |
EP2022026A1 (en) * | 2006-05-05 | 2009-02-11 | Dan Manor | Traffic sensor incorporating a video camera and method of operating same |
CN101837782A (en) * | 2009-01-26 | 2010-09-22 | 通用汽车环球科技运作公司 | Be used to collide the multiple goal Fusion Module of preparation system |
CN101952688A (en) * | 2008-02-04 | 2011-01-19 | 电子地图北美公司 | Method for map matching with sensor detected objects |
EP2275971A1 (en) * | 2009-07-06 | 2011-01-19 | Valeo Vision | Method of obstacle detection for a vehicle |
CN101975951A (en) * | 2010-06-09 | 2011-02-16 | 北京理工大学 | Field environment barrier detection method fusing distance and image information |
CN103837872A (en) * | 2012-11-22 | 2014-06-04 | 株式会社电装 | Object detection apparatus |
CN104054005A (en) * | 2012-01-16 | 2014-09-17 | 丰田自动车株式会社 | Object detection device |
US20140340518A1 (en) * | 2013-05-20 | 2014-11-20 | Nidec Elesys Corporation | External sensing device for vehicle, method of correcting axial deviation and recording medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001134769A (en) * | 1999-11-04 | 2001-05-18 | Honda Motor Co Ltd | Object recognizing device |
JP2005090974A (en) * | 2003-09-12 | 2005-04-07 | Daihatsu Motor Co Ltd | Preceding car recognition device |
JP4680294B2 (en) * | 2008-12-26 | 2011-05-11 | トヨタ自動車株式会社 | Object detection apparatus and object detection method |
JP2011099683A (en) * | 2009-11-04 | 2011-05-19 | Hitachi Automotive Systems Ltd | Body detector |
-
2015
- 2015-09-30 JP JP2015193173A patent/JP6593588B2/en active Active
-
2016
- 2016-01-07 CN CN201610008658.6A patent/CN105893931A/en active Pending
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1980322A (en) * | 2005-12-07 | 2007-06-13 | 日产自动车株式会社 | Object detecting system and object detecting method |
EP1837804A1 (en) * | 2006-03-22 | 2007-09-26 | Nissan Motor Co., Ltd. | Object detection |
EP2022026A1 (en) * | 2006-05-05 | 2009-02-11 | Dan Manor | Traffic sensor incorporating a video camera and method of operating same |
CN101952688A (en) * | 2008-02-04 | 2011-01-19 | 电子地图北美公司 | Method for map matching with sensor detected objects |
CN101837782A (en) * | 2009-01-26 | 2010-09-22 | 通用汽车环球科技运作公司 | Be used to collide the multiple goal Fusion Module of preparation system |
EP2275971A1 (en) * | 2009-07-06 | 2011-01-19 | Valeo Vision | Method of obstacle detection for a vehicle |
CN101975951A (en) * | 2010-06-09 | 2011-02-16 | 北京理工大学 | Field environment barrier detection method fusing distance and image information |
CN104054005A (en) * | 2012-01-16 | 2014-09-17 | 丰田自动车株式会社 | Object detection device |
CN103837872A (en) * | 2012-11-22 | 2014-06-04 | 株式会社电装 | Object detection apparatus |
US20140340518A1 (en) * | 2013-05-20 | 2014-11-20 | Nidec Elesys Corporation | External sensing device for vehicle, method of correcting axial deviation and recording medium |
Non-Patent Citations (4)
Title |
---|
ARUNESH ROY ET AL: ""Automated traffic surveillance using fusion of Doppler radar and video information"", 《MATHEMATICAL AND COMPUTER MODELLING》 * |
JUHANA AHTIAINEN ET AL: ""Radar based detection and tracking of a walking human"", 《IFAC PROCEEDINGS VOLUMES》 * |
QU Zhaowei et al.: "Pedestrian detection method based on fusion of radar and vision information", Journal of Jilin University (Engineering and Technology Edition) * |
YANG Lei et al.: "Vehicle recognition technology based on camera and laser radar", Computer Measurement & Control * |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108226917A (en) * | 2016-12-09 | 2018-06-29 | 梅塔建筑株式会社 | High-accuracy emergency case detecting system based on radar |
CN108226917B (en) * | 2016-12-09 | 2021-09-28 | 梅塔建筑株式会社 | High-precision emergency detection system based on radar |
CN106908783B (en) * | 2017-02-23 | 2019-10-01 | 苏州大学 | Based on obstacle detection method combined of multi-sensor information |
CN106908783A (en) * | 2017-02-23 | 2017-06-30 | 苏州大学 | Obstacle detection method based on multi-sensor information fusion |
CN108932869A (en) * | 2017-05-18 | 2018-12-04 | 松下电器(美国)知识产权公司 | Vehicular system, information of vehicles processing method, recording medium, traffic system, infrastructure system and information processing method |
WO2018218680A1 (en) * | 2017-06-02 | 2018-12-06 | 华为技术有限公司 | Obstacle detection method and device |
CN109143220A (en) * | 2017-06-16 | 2019-01-04 | 电装波动株式会社 | Vehicle identifier, vehicle identification system, storage medium |
CN111971579A (en) * | 2018-03-30 | 2020-11-20 | 三菱电机株式会社 | Object identification device |
CN112119330A (en) * | 2018-05-14 | 2020-12-22 | 三菱电机株式会社 | Object detection device and object detection method |
CN110660186A (en) * | 2018-06-29 | 2020-01-07 | 杭州海康威视数字技术股份有限公司 | Method and device for identifying target object in video image based on radar signal |
CN110660186B (en) * | 2018-06-29 | 2022-03-01 | 杭州海康威视数字技术股份有限公司 | Method and device for identifying target object in video image based on radar signal |
CN109118537A (en) * | 2018-08-21 | 2019-01-01 | 加特兰微电子科技(上海)有限公司 | A kind of picture matching process, device, equipment and storage medium |
CN109118537B (en) * | 2018-08-21 | 2021-11-02 | 加特兰微电子科技(上海)有限公司 | Picture matching method, device, equipment and storage medium |
CN110940974A (en) * | 2018-09-21 | 2020-03-31 | 丰田自动车株式会社 | Object detection device |
CN112703424A (en) * | 2018-09-21 | 2021-04-23 | 玛尔斯登集团 | Anti-collision and motion control system and method |
CN110940974B (en) * | 2018-09-21 | 2023-10-10 | 丰田自动车株式会社 | Object detection device |
CN111325088B (en) * | 2018-12-14 | 2023-06-16 | 丰田自动车株式会社 | Information processing system, recording medium, and information processing method |
CN111325088A (en) * | 2018-12-14 | 2020-06-23 | 丰田自动车株式会社 | Information processing system, program, and information processing method |
CN111508272A (en) * | 2019-01-31 | 2020-08-07 | 斯特拉德视觉公司 | Method and apparatus for providing robust camera-based object distance prediction |
CN113711583A (en) * | 2019-04-25 | 2021-11-26 | 日本电信电话株式会社 | Object information processing device, object information processing method, and object information processing program |
CN113811788A (en) * | 2019-05-14 | 2021-12-17 | 三菱电机株式会社 | Vehicle-mounted object detection system |
CN112581771A (en) * | 2019-09-30 | 2021-03-30 | 丰田自动车株式会社 | Driving control device for automatic driving vehicle, target object for parking, and driving control system |
US11648963B2 (en) | 2019-09-30 | 2023-05-16 | Toyota Jidosha Kabushiki Kaisha | Driving control apparatus for automated driving vehicle, stop target, and driving control system |
CN113740355B (en) * | 2020-05-29 | 2023-06-20 | 清华大学 | Boundary protection method and system for ray detection robot |
CN113740355A (en) * | 2020-05-29 | 2021-12-03 | 清华大学 | Boundary protection method and system for ray detection robot |
Also Published As
Publication number | Publication date |
---|---|
JP6593588B2 (en) | 2019-10-23 |
JP2016153775A (en) | 2016-08-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105893931A (en) | Object detection apparatus and method | |
US10061023B2 (en) | Object detection apparatus and method | |
US11593950B2 (en) | System and method for movement detection | |
AU2014202300B2 (en) | Traffic monitoring system for speed measurement and assignment of moving vehicles in a multi-target recording module | |
CN105157608B (en) | A kind of detection method of overrun vehicle, apparatus and system | |
CN110211388A (en) | Multilane free-flow vehicle matching process and system based on 3D laser radar | |
CN103927904B (en) | Early warning method of pedestrian anti-collision early warning system using smartphone | |
CN104237881B (en) | FMCW anti-collision radar multi-target detecting and tracking system and method | |
CN106019281B (en) | Object detection device and object detection method | |
CN109085570A (en) | Automobile detecting following algorithm based on data fusion | |
US10699567B2 (en) | Method of controlling a traffic surveillance system | |
CN109871728B (en) | Vehicle type recognition method and device | |
CN104808216B (en) | A kind of vehicle collision avoidance early warning system based on laser radar range | |
CN103559791A (en) | Vehicle detection method fusing radar and CCD camera signals | |
CN108877269A (en) | A kind of detection of intersection vehicle-state and V2X broadcasting method | |
US10222466B2 (en) | Method and a device for detecting an all-round view | |
CN105222752B (en) | A kind of portable road face detection means and method using structure light | |
CN205862589U (en) | A kind of automatic Vehicle Recognition System | |
KR102093237B1 (en) | Vehicle classification system using non-contact automatic vehicle detectior | |
CN108974007A (en) | Determine the interest object of cruise active control | |
US20200349743A1 (en) | Image synthesizing system and image synthesizing method | |
CN112784679A (en) | Vehicle obstacle avoidance method and device | |
CN114875877A (en) | Ship lockage safety detection method | |
CN109801503B (en) | Vehicle speed measuring method and system based on laser | |
JP2004021496A (en) | Parked vehicle detection method, detection system and parked vehicle detection apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20160824 |
WD01 | Invention patent application deemed withdrawn after publication |