CN109188932A - Multi-camera in-the-loop simulation test method and system for intelligent driving - Google Patents

Multi-camera in-the-loop simulation test method and system for intelligent driving

Info

Publication number
CN109188932A
CN109188932A
Authority
CN
China
Prior art keywords
camera
data
module
image
intelligent driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810957179.8A
Other languages
Chinese (zh)
Inventor
张栋
吴坚
布莱恩·阿里·万德尔
孙博华
何睿
惠政
慕文龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN201810957179.8A priority Critical patent/CN109188932A/en
Publication of CN109188932A publication Critical patent/CN109188932A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B17/00 Systems involving the use of models or simulators of said systems
    • G05B17/02 Systems involving the use of models or simulators of said systems electric

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multi-camera in-the-loop simulation test method and system for intelligent driving. A computer running the built-in Carsim software simulates a test vehicle and a traffic environment, and cameras acquire video image data of the simulated scene; the video image data acquired by the different cameras are corrected separately; the corrected image data of each camera are perceived separately; the perception results of the different cameras are fused; a driving behavior decision module parses the fused perception result, completes the driving decision, converts the decision signal into a control signal for the test vehicle, and commands the test vehicle to drive in the simulated traffic environment according to the received control signal. In the multi-camera in-the-loop simulation test method for intelligent driving of the invention, when data are acquired with multiple cameras, the data processing results of the cameras are fused through algorithm-model optimization to generate a unified, effective object detection result, giving a complete demonstration of the intelligent driving technology.

Description

Multi-camera in-the-loop simulation test method and system for intelligent driving
Technical field
The present invention relates to the technical field of data acquisition, and in particular to a multi-camera in-the-loop simulation test method and system for intelligent driving.
Background technique
With the development of computers and the rapid progress of microelectronics, intelligent technology is advancing ever faster, its degree of intelligence keeps rising, and its range of application has expanded greatly. Intelligent driving systems build on rapidly developing automotive electronics and span multiple disciplines such as electronics, computing, mechanics and sensing. Automated intelligent driving is the development direction of the future automobile and a revolutionary influence on road traffic. With breakthroughs and refinements in core technologies such as artificial intelligence and sensing, and with improvements in overall reliability, autonomous vehicles will gradually be accepted by the public and become tools for travel and logistics. However, moving from the current preliminary application stage to mature popularization, let alone comprehensive popularization, may take a very long time, and after the technology matures there remain the lengthy stages of legislation and social-psychological adjustment. The autonomous vehicle is a commanding height of the future automobile and information industries, and the ability to develop it directly reflects national industrial competitiveness. Judging from the development activities of governments and enterprises worldwide, the next five to ten years will be a crucial period for automated driving.
The intelligent driving field is still at an early stage, and many difficulties remain in its development. In practice, the most critical problem intelligent driving technology faces is road-condition recognition and perception: when data are acquired with multiple cameras, the acquired data of the multiple cameras cannot be processed in an effectively optimized way to generate a unified object detection output and driving commands.
Summary of the invention
In view of the above problems in the prior art, the present invention provides a multi-camera in-the-loop simulation test method and system for intelligent driving, in which the data processing results of multiple cameras are fused through algorithm-model optimization to generate a unified, effective object detection result and demonstrate the intelligent driving technology.
To achieve the above goals, the present invention provides the following technical solution: a multi-camera in-the-loop simulation test method for intelligent driving, comprising the following steps:
(11) a computer running the built-in Carsim software simulates a test vehicle and a traffic environment; the test vehicle is initially stationary, and a display screen shows the current simulated traffic environment;
(12) at least two cameras acquire video image data of the scene in step (11);
(13) an image rectification module performs data correction on the video image data acquired by each camera separately;
(14) an object detection module perceives the corrected image data of each camera separately;
(15) a data fusion module fuses the perceived image data of the different cameras;
(16) a driving behavior decision module parses the fused perception result, completes the driving decision, and outputs a decision signal;
(17) a data conversion module converts the decision signal into a control signal for the test vehicle and sends it to a control module;
(18) the control module commands the test vehicle to drive in the simulated traffic environment according to the received control signal.
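The closed loop of steps (11) to (18) can be sketched as follows; the function names and the stub perception outputs are hypothetical stand-ins for illustration, not the patent's implementation.

```python
# Minimal sketch of one cycle of the in-the-loop test loop, steps (12)-(18).
def capture(camera_id):
    # step (12): stand-in for grabbing a frame of the simulator display
    return {"camera": camera_id, "frame": [[0] * 4 for _ in range(3)]}

def rectify(sample):
    # step (13): per-camera data correction (identity placeholder)
    return sample

def detect(sample):
    # step (14): per-camera perception; camera 1 detects lanes/vehicles,
    # camera 2 detects lights/signs, as in the two-camera refinement
    if sample["camera"] == 1:
        return {"lane_offset_cm": 12.0}
    return {"traffic_light": "green"}

def fuse(results):
    # step (15): merge per-camera perception into one unified result
    fused = {}
    for r in results:
        fused.update(r)
    return fused

def decide(fused):
    # step (16): driving-behavior decision from the fused perception
    return "keep_lane" if fused.get("traffic_light") == "green" else "stop"

def convert(decision):
    # step (17): decision signal -> vehicle control signal
    return {"keep_lane": {"throttle": 0.3, "steer": 0.0},
            "stop": {"throttle": 0.0, "steer": 0.0}}[decision]

def run_cycle(camera_ids=(1, 2)):
    samples = [rectify(capture(c)) for c in camera_ids]   # (12)-(13)
    fused = fuse([detect(s) for s in samples])            # (14)-(15)
    return convert(decide(fused))                         # (16)-(17)
```

In an actual test the control signal returned by `run_cycle` would be handed to the Carsim vehicle model, closing the loop of step (18).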
As a further refinement of the above solution, the number of cameras installed is determined by the functional requirements, and each camera works independently to acquire the video image data it needs for analysis.
As a further refinement of the above solution, if two cameras are provided, the first camera detects lane lines and vehicle information, while the second camera detects traffic signal lights and traffic signs.
As a further refinement of the above solution, the processing method of the image rectification module comprises the following steps:
(21) determine the geometric relationship between the camera data acquisition and the lane, and establish a world coordinate system and an image coordinate system. In the world coordinate system XO1Y, the origin O1 is the center of the lower edge of the camera's image acquisition field of view in the in-the-loop test system; the Y axis is the longitudinal axis of the vehicle coordinate system, pointing in the vehicle's forward direction, and the X axis is lateral, pointing to the vehicle's right; an object point is written as (x, y), in cm. In the image coordinate system UO2V, the origin O2 is the upper-right corner of the camera image plane in the in-the-loop test system; the U axis runs along the top row of the image, pointing left, and the V axis runs along the right column of the image, pointing down; a pixel is written as (u, v), giving the column and row of the pixel in the image array, in pixels;
(22) take the Y axis of the world coordinate system XO1Y, i.e. the longitudinal section of the vehicle. For a point I on the ground plane with coordinates (0, y) in the world coordinate system and image coordinates (u, v) at its mapped point I', the V axis of the image coordinate system and the Y axis of the world coordinate system correspond through a relation determined by h, the mounting height of the camera; θ, the camera pitch angle; α, the camera field angle; and f, the camera focal length;
(23) likewise, for the same ground point I with world coordinates (0, y) and image coordinates (u, v) at its mapped point I', the U axis of the image coordinate system and the X axis of the world coordinate system correspond through a relation determined by l, the width of the lower edge of the camera field of view, and N, the total number of pixel columns.
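The patent reproduces its rectification formulas only as images, so the exact mapping is not recoverable here; under the stated geometry (camera height h, downward pitch θ, focal length f), a standard pinhole ground-plane projection can illustrate the kind of pixel-to-world mapping steps (22) and (23) describe. The function name and the small-angle lateral approximation are assumptions.

```python
import math

def image_to_ground(u, v, h, theta, f, cx, cy):
    """Map pixel (u, v) to ground-plane coordinates (x, y) for a camera at
    height h, pitched down by theta, with focal length f in pixels and
    principal point (cx, cy) -- a generic pinhole model, not the patent's
    exact formula."""
    # vertical angle of the pixel ray below the optical axis
    beta = math.atan2(v - cy, f)
    # total depression angle of the ray; it must look below the horizon
    depression = theta + beta
    if depression <= 0:
        raise ValueError("ray does not intersect the ground plane")
    y = h / math.tan(depression)   # longitudinal distance ahead of the camera
    r = math.hypot(h, y)           # slant range to the ground point
    x = (u - cx) / f * r           # lateral offset (small-angle approximation)
    return x, y
```

For example, the pixel at the principal point of a camera mounted 100 cm high and pitched down 45 degrees maps to a ground point 100 cm ahead on the vehicle centerline.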
As a further refinement of the above solution, the processing method of the data fusion module comprises the following steps:
(31) for the continually changing traffic scene in the camera's field of view during image acquisition, an interpolation calibration model is used: although the traffic scene changes constantly, over a short time the motion of a vehicle is treated as uniform linear motion, so a target position at time T, with T1 < T < T2, is interpolated from its positions at times T1 and T2 as x_T = x_{T1} + ((T − T1)/(T2 − T1)) · (x_{T2} − x_{T1}), and likewise for the y coordinate;
(32) spatial alignment is completed by moving-target track association together with a direct linear transformation: while the test vehicle advances, the cameras select a moving target, and by associating target tracks the location of the same target is determined, completing the spatial alignment; the track-association mathematical model is based on the fuzzy double-threshold association theory;
(33) after the time and spatial alignments are completed, the data fusion module obtains the mutually related image information acquired by the cameras under the same reference frame and constructs a Kalman fusion formula in which S is the variance of the sample error, n is the number of sensors, X is the actual measured value, and A is the fusion result.
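The Kalman fusion formula itself appears only as an image in the patent; the inverse-variance weighted average below is the standard form consistent with the listed symbols (error variances S, n readings X, fused result A), offered as an assumption rather than the patent's exact expression.

```python
def fuse_measurements(values, variances):
    """Variance-weighted (Kalman-style) fusion of n sensor readings:
    A = sum(X_i / S_i) / sum(1 / S_i). Readings with smaller error
    variance receive larger weight in the fused result."""
    inv = [1.0 / s for s in variances]
    return sum(x * w for x, w in zip(values, inv)) / sum(inv)
```

With equal variances the result is the plain mean; with unequal variances the fused value is pulled toward the more reliable sensor.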
As a further refinement of the above solution, during data fusion the data fusion module synchronizes the perception results of the cameras with different functional requirements through the temporal and spatial information of the data.
As a further refinement of the above solution, the track-association mathematical model based on the fuzzy double-threshold association theory proceeds as follows:
(321) the fuzzy factors are described by relative position, and the fuzzy factors of the cameras are chosen as the fuzzy set A = {a1, a2}, where a1 is the Euclidean distance between the positions at time t and a2 is the target moving direction at time t; Rp and Aq are the sequences of an arbitrary p-th camera and q-th camera; d and θ are the Euclidean distance and azimuth angle respectively;
(322) the association membership degree in the track alignment is solved. With s = 1 for the Euclidean distance and s = 2 for the target moving direction, r_{s1pq}(t) is the association membership degree at time t, taken as normally distributed with τ_s = {0.01, 0.01}; the corresponding non-association degree is r_{s2pq}(t) = 1 − r_{s1pq}(t). A comprehensive evaluation matrix W_pq(t) is then formed using the weight matrix [x1 x2], whose values are {0.45, 0.1};
(323) for the back-end fusion of the cameras' environment perception data, a fuzzy double-threshold judgment is made first. After the moving-target track association is achieved, the spatial three-dimensional coordinates of the images obtained by the cameras are converted into the same plane rectangular coordinate system for processing, and the spatial alignment is realized through the direct linear transformation. The fuzzy double-threshold evaluation proceeds as follows:
(3231) when the comprehensive association degree exceeds the first threshold F1, taken as 0.75, the accumulation parameter is incremented: H(t) = H(t−1) + 1;
(3232) when H(t) ≥ F2, with F2 taken as 10, the selected information sequences are the mutually matched corresponding sequences, i.e. the purpose of moving-target track association is achieved;
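Steps (321) to (3232) can be sketched as follows. The Gaussian membership functions and the track-point format (x, y, heading) are assumptions, since the patent's membership formulas are reproduced only as images; the weights {0.45, 0.1} and thresholds F1 = 0.75, F2 = 10 are taken from the text.

```python
import math

def membership(track_a, track_b, tau=(0.01, 0.01), weights=(0.45, 0.1)):
    """Comprehensive association degree of two track points (x, y, heading),
    combining an assumed Gaussian membership over Euclidean distance with one
    over heading difference, weighted by the patent's weight matrix."""
    d = math.hypot(track_a[0] - track_b[0], track_a[1] - track_b[1])
    dtheta = abs(track_a[2] - track_b[2])
    r1 = math.exp(-tau[0] * d * d)          # distance membership, step (321)
    r2 = math.exp(-tau[1] * dtheta * dtheta)  # heading membership
    return weights[0] * r1 + weights[1] * r2  # evaluation, step (322)

def associate(pairs, f1=0.75, f2=10):
    """Double-threshold test of steps (3231)-(3232): count the instants whose
    association degree exceeds F1 and declare the two tracks associated once
    the accumulated count H reaches F2."""
    h = 0
    for a, b in pairs:
        if membership(a, b) >= f1:
            h += 1                           # H(t) = H(t-1) + 1
        if h >= f2:
            return True
    return False
```

Note that with the weight values {0.45, 0.1} as printed, the comprehensive degree cannot exceed 0.55, so F1 = 0.75 would never trigger; the thresholds are therefore passed explicitly in use below, and the printed values may assume a different normalization.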
The invention also discloses a multi-camera in-the-loop simulation test system for intelligent driving, using the multi-camera in-the-loop simulation test method for intelligent driving of any one of claims 1 to 7, comprising:
an environment construction module, in which a computer running the built-in Carsim software simulates a test vehicle and a traffic environment; the test vehicle is initially stationary, and a display screen shows the current simulated traffic environment;
a data acquisition module comprising at least two cameras for acquiring the video image data of step (11);
an image rectification module for performing data correction on the video image data acquired by each camera separately;
an object detection module for perceiving the corrected image data of each camera separately;
a data fusion module for fusing the perceived image data of the different cameras;
a driving behavior decision module for parsing the fused perception result, completing the driving decision, and outputting a decision signal;
a data conversion module for converting the decision signal into a control signal for the test vehicle and sending it to a control module; and
a control module for commanding the test vehicle to drive in the simulated traffic environment according to the received control signal.
The invention also discloses a device comprising:
one or more processors; and
a memory for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to perform the multi-camera in-the-loop simulation test method for intelligent driving of claim 1.
The invention also discloses a computer-readable storage medium storing a computer program which, when executed by a processor, implements the multi-camera in-the-loop simulation test method for intelligent driving of claim 1.
With the above technical scheme, compared with the prior art, the multi-camera in-the-loop simulation test method and system for intelligent driving of the present invention have the following beneficial effects:
1. In the multi-camera in-the-loop simulation test method for intelligent driving of the invention, a computer running the built-in Carsim software simulates a test vehicle and a traffic environment; the test vehicle is initially stationary and a display screen shows the current simulated traffic environment; the cameras acquire video image data; the image rectification module corrects the video image data acquired by each camera separately; the object detection module perceives the corrected image data of each camera separately; the data fusion module fuses the perceived image data of the different cameras; the driving behavior decision module parses the fused perception result, completes the driving decision, and outputs a decision signal; the data conversion module converts the decision signal into a control signal for the test vehicle and sends it to the control module; and the control module commands the test vehicle to drive in the simulated traffic environment according to the received control signal.
2. In the multi-camera in-the-loop simulation test method for intelligent driving of the invention, when data are acquired with multiple cameras, the data processing results of the cameras are fused through algorithm-model optimization to generate a unified, effective object detection result, giving a complete demonstration of the intelligent driving technology.
3. In the multi-camera in-the-loop simulation test method for intelligent driving of the invention, the data fusion model takes the corrected image data, fuses the object detection results of the individual cameras, completes the object detection work, and outputs the environment perception result. From perception to decision, back-end fusion of the perception results is required to guarantee that the camera data are synchronized in time and space and fused into a single unified perception result, eliminating driving-decision model errors caused by unsynchronized multi-sensor data.
4. In the multi-camera in-the-loop simulation test method for intelligent driving of the invention, cameras with different functional requirements exhibit offsets in time and space during operation, which directly prevents the perception results of the differently tasked cameras from being integrated and passed to the driving decision module. During data fusion the data fusion module therefore synchronizes the perception results of the cameras with different functional requirements through the temporal and spatial information of the data, integrating them into one unified perception result.
5. In the multi-camera in-the-loop simulation test method for intelligent driving of the invention, the driving behavior decision model receives the traffic-scene video images output by the display screen and acquired by the cameras; the object detection module and the back-end data fusion module output the perception result, and driving decision information is generated from that result.
Detailed description of the invention
Fig. 1 is a flow chart of the multi-camera in-the-loop simulation test method for intelligent driving.
Fig. 2 is a structural block diagram of the multi-camera in-the-loop simulation test system for intelligent driving.
Fig. 3 shows the transformation relationship between the world coordinate system and the image coordinate system in the multi-camera in-the-loop simulation test system and method for intelligent driving.
Fig. 4 is a schematic diagram of the vehicle longitudinal section in the multi-camera in-the-loop simulation test system and method for intelligent driving.
Specific embodiment
In order to make the objectives, technical solutions and advantages of the present invention clearer, the invention is further elaborated below with reference to the drawings and embodiments. It should be understood, however, that the specific embodiments described here only explain the invention and are not intended to limit its scope.
Referring to Fig. 1, the multi-camera in-the-loop simulation test method for intelligent driving comprises the following steps:
(11) a computer running the built-in Carsim software simulates a test vehicle and a traffic environment; the test vehicle is initially stationary, and a display screen shows the current simulated traffic environment;
(12) at least two cameras acquire video image data of the scene in step (11);
(13) an image rectification module performs data correction on the video image data acquired by each camera separately;
(14) an object detection module perceives the corrected image data of each camera separately;
(15) a data fusion module fuses the perceived image data of the different cameras;
(16) a driving behavior decision module parses the fused perception result, completes the driving decision, and outputs a decision signal;
(17) a data conversion module converts the decision signal into a control signal for the test vehicle and sends it to a control module;
(18) the control module commands the test vehicle to drive in the simulated traffic environment according to the received control signal.
As a further refinement of the above solution, the number of cameras installed is determined by the functional requirements, and each camera works independently to acquire the video image data it needs for analysis.
As a further refinement of the above solution, if two cameras are provided, the first camera detects lane lines and vehicle information, while the second camera detects traffic signal lights and traffic signs.
The processing method of the image rectification module comprises the following steps:
(21) determine the geometric relationship between the camera data acquisition and the lane, and establish a world coordinate system and an image coordinate system. In the world coordinate system XO1Y, the origin O1 is the center of the lower edge of the camera's image acquisition field of view in the in-the-loop test system; the Y axis is the longitudinal axis of the vehicle coordinate system, pointing in the vehicle's forward direction, and the X axis is lateral, pointing to the vehicle's right; an object point is written as (x, y), in cm. In the image coordinate system UO2V, the origin O2 is the upper-right corner of the camera image plane in the in-the-loop test system; the U axis runs along the top row of the image, pointing left, and the V axis runs along the right column of the image, pointing down; a pixel is written as (u, v), giving the column and row of the pixel in the image array, in pixels;
(22) take the Y axis of the world coordinate system XO1Y, i.e. the longitudinal section of the vehicle. For a point I on the ground plane with coordinates (0, y) in the world coordinate system and image coordinates (u, v) at its mapped point I', the V axis of the image coordinate system and the Y axis of the world coordinate system correspond through a relation determined by h, the mounting height of the camera; θ, the camera pitch angle; α, the camera field angle; and f, the camera focal length;
(23) likewise, for the same ground point I with world coordinates (0, y) and image coordinates (u, v) at its mapped point I', the U axis of the image coordinate system and the X axis of the world coordinate system correspond through a relation determined by l, the width of the lower edge of the camera field of view, and N, the total number of pixel columns.
The processing method of the data fusion module comprises the following steps:
(31) for the continually changing traffic scene in the camera's field of view during image acquisition, an interpolation calibration model is used: although the traffic scene changes constantly, over a short time the motion of a vehicle is treated as uniform linear motion, so a target position at time T, with T1 < T < T2, is interpolated from its positions at times T1 and T2 as x_T = x_{T1} + ((T − T1)/(T2 − T1)) · (x_{T2} − x_{T1}), and likewise for the y coordinate;
(32) spatial alignment is completed by moving-target track association together with a direct linear transformation: while the test vehicle advances, the cameras select a moving target, and by associating target tracks the location of the same target is determined, completing the spatial alignment; the track-association mathematical model is based on the fuzzy double-threshold association theory;
(33) after the time and spatial alignments are completed, the data fusion module obtains the mutually related image information acquired by the cameras under the same reference frame and constructs a Kalman fusion formula in which S is the variance of the sample error, n is the number of sensors, X is the actual measured value, and A is the fusion result.
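The uniform-motion interpolation of step (31) amounts to standard linear interpolation between two sampled positions of a target, bringing all cameras' reports to a common time stamp before fusion; a minimal sketch (the function name is an assumption):

```python
def align_position(t, t1, p1, t2, p2):
    """Interpolate a target's position to time t, t1 < t < t2, assuming
    uniform linear motion between the sampled positions p1 at t1 and
    p2 at t2 (the interpolation calibration model of step (31))."""
    w = (t - t1) / (t2 - t1)
    return tuple(a + w * (b - a) for a, b in zip(p1, p2))
```

For example, a target observed at (0, 0) at t = 1.0 and at (2, 4) at t = 2.0 is placed at (1, 2) at t = 1.5, halfway along the assumed straight-line path.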
As a further refinement of the above solution, during data fusion the data fusion module synchronizes the perception results of the cameras with different functional requirements through the temporal and spatial information of the data.
As a further refinement of the above solution, the track-association mathematical model based on the fuzzy double-threshold association theory proceeds as follows:
(321) the fuzzy factors are described by relative position, and the fuzzy factors of the cameras are chosen as the fuzzy set A = {a1, a2}, where a1 is the Euclidean distance between the positions at time t and a2 is the target moving direction at time t; Rp and Aq are the sequences of an arbitrary p-th camera and q-th camera; d and θ are the Euclidean distance and azimuth angle respectively;
(322) the association membership degree in the track alignment is solved. With s = 1 for the Euclidean distance and s = 2 for the target moving direction, r_{s1pq}(t) is the association membership degree at time t, taken as normally distributed with τ_s = {0.01, 0.01}; the corresponding non-association degree is r_{s2pq}(t) = 1 − r_{s1pq}(t). A comprehensive evaluation matrix W_pq(t) is then formed using the weight matrix [x1 x2], whose values are {0.45, 0.1};
(323) for the back-end fusion of the cameras' environment perception data, a fuzzy double-threshold judgment is made first. After the moving-target track association is achieved, the spatial three-dimensional coordinates of the images obtained by the cameras are converted into the same plane rectangular coordinate system for processing, and the spatial alignment is realized through the direct linear transformation. The fuzzy double-threshold evaluation proceeds as follows:
(3231) when the comprehensive association degree exceeds the first threshold F1, taken as 0.75, the accumulation parameter is incremented: H(t) = H(t−1) + 1;
(3232) when H(t) ≥ F2, with F2 taken as 10, the selected information sequences are the mutually matched corresponding sequences, i.e. the purpose of moving-target track association is achieved.
The data fusion module takes the corrected image data, fuses the object detection results of the individual cameras, completes the object detection work, and outputs the environment perception result. From perception to decision, back-end fusion of the perception results is required to guarantee that the camera data are synchronized in time and space and fused into a single unified perception result, eliminating driving-decision model errors caused by unsynchronized multi-sensor data. For example, cameras 1 and 2 respectively acquire and detect lane lines/vehicles/obstacles and traffic lights/traffic signs and output their environment perception results; after synchronization by the data fusion model, the results are handed to the driving decision model for the vehicle behavior decision.
Referring to Fig. 2, the invention also discloses a multi-camera in-the-loop simulation test system for intelligent driving, using the multi-camera in-the-loop simulation test method for intelligent driving of any one of claims 1 to 7, comprising:
an environment construction module, in which a computer running the built-in Carsim software simulates a test vehicle and a traffic environment; the test vehicle is initially stationary, and a display screen shows the current simulated traffic environment;
a data acquisition module comprising at least two cameras for acquiring the video image data of step (11);
an image rectification module for performing data correction on the video image data acquired by each camera separately;
an object detection module for perceiving the corrected image data of each camera separately;
a data fusion module for fusing the perceived image data of the different cameras;
a driving behavior decision module for parsing the fused perception result, completing the driving decision, and outputting a decision signal;
a data conversion module for converting the decision signal into a control signal for the test vehicle and sending it to a control module; and
a control module for commanding the test vehicle to drive in the simulated traffic environment according to the received control signal.
The control module is a Simulink-based control module that links the driving decision information to the Carsim vehicle control model and converts the driving decision information into the driving-behavior control of the Carsim ego vehicle.
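The Simulink control model itself is not detailed in the text; a hypothetical table-driven conversion from decision signal to the kind of control channels a Carsim vehicle model typically exposes (throttle, brake, steering) can illustrate the role of the data conversion and control modules. The decision names and channel values below are assumptions for illustration.

```python
def decision_to_control(decision):
    """Hypothetical stand-in for the Simulink control model: map a driving
    decision signal to throttle/brake/steering channels for the simulated
    vehicle. The table entries are illustrative, not calibrated values."""
    table = {
        "accelerate": {"throttle": 0.5, "brake": 0.0, "steer_deg": 0.0},
        "brake":      {"throttle": 0.0, "brake": 0.8, "steer_deg": 0.0},
        "turn_right": {"throttle": 0.2, "brake": 0.0, "steer_deg": 15.0},
    }
    return table[decision]
```

In the real system this mapping would be realized as a Simulink model driving the Carsim vehicle's control inputs rather than a lookup table.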
The invention also discloses a device comprising:
one or more processors; and
a memory for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to perform the multi-camera in-the-loop simulation test method for intelligent driving of claim 1.
The invention also discloses a computer-readable storage medium storing a computer program which, when executed by a processor, implements the multi-camera in-the-loop simulation test method for intelligent driving of claim 1.
In addition, the present embodiment provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the multi-camera in-the-loop simulation test method for intelligent driving of the present embodiment. The computer-readable storage medium may be one included in the system or device of the above embodiment, or it may exist on its own without being fitted into a device, such as a hard disk, an optical disc or an SD card.
In the multi-camera in-the-loop simulation test system and method for intelligent driving provided by the invention, a computer runs the Carsim software to simulate a test vehicle and a traffic environment; the camera-data-acquisition image rectification model corrects the video image data acquired by the cameras; the object detection model perceives the corrected image data of each camera; the camera environment-perception back-end data fusion module integrates the image perception results output by the cameras; the driving behavior decision module parses the fused perception result to complete the driving decision and outputs a decision signal; and the Carsim/Simulink vehicle control model converts the decision signal into a control signal so that the vehicle produces the driving behavior. By organically combining the camera-data-acquisition image rectification model, the camera environment-perception back-end data fusion model and the driving behavior decision model, the invention effectively solves the problem that, when intelligent driving acquires data with multiple cameras, the data processing results of the cameras must be fused; it generates a unified, effective object detection result and correctly outputs driving commands.
In addition, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution; the specification is written this way merely for clarity. Those skilled in the art should treat the specification as a whole; the technical solutions in the various embodiments may also be combined appropriately to form other embodiments that those skilled in the art can understand.

Claims (10)

1. A multi-camera hardware-in-the-loop test method for intelligent driving, characterized by comprising the following steps:
(11) a computer runs the built-in CarSim software to simulate a test vehicle and a traffic environment; the test vehicle is initially stationary, and a display screen shows the traffic environment in its current simulated state;
(12) at least two cameras capture the video image data displayed in step (11);
(13) an image rectification module corrects the video image data captured by each camera;
(14) a target detection module perceives the corrected image data of each camera;
(15) a data fusion module fuses the perceived image data of the different cameras;
(16) a driving behavior decision module parses the fused perception results, completes the driving decision, and outputs a decision signal;
(17) a data conversion module converts the decision signal into a control signal for the test vehicle and sends it to a control module;
(18) the control module, according to the received control signal, commands the test vehicle to drive in the simulated traffic environment.
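The closed loop of steps (11)–(18) can be sketched as a minimal processing pipeline. All function names below are hypothetical placeholders standing in for the patent's modules, not actual code from the publication:

```python
# Minimal sketch of the closed-loop test flow in steps (11)-(18).
# Every function name here is a hypothetical placeholder for one module.

def rectify(frame):            # step (13): image rectification module
    return {"rectified": frame}

def detect(image):             # step (14): target detection module
    return {"targets": [image["rectified"]]}

def fuse(perceptions):         # step (15): data fusion module
    merged = []
    for p in perceptions:
        merged.extend(p["targets"])
    return {"fused_targets": merged}

def decide(fused):             # step (16): driving behavior decision module
    return {"decision": "keep_lane" if fused["fused_targets"] else "stop"}

def to_control_signal(decision):   # step (17): data conversion module
    return {"steer": 0.0,
            "throttle": 0.2 if decision["decision"] == "keep_lane" else 0.0}

def run_cycle(camera_frames):
    # steps (13)-(14) run per camera; (15)-(17) run on the merged result
    perceptions = [detect(rectify(f)) for f in camera_frames]
    fused = fuse(perceptions)
    decision = decide(fused)
    return to_control_signal(decision)   # handed to the control module, step (18)

print(run_cycle(["cam1_frame", "cam2_frame"]))
```

The key structural point the claim makes is that rectification and detection happen per camera, while fusion, decision, and control happen once on the combined result.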
2. The multi-camera hardware-in-the-loop test method for intelligent driving according to claim 1, wherein the number of cameras installed is determined by the functional requirements, and each camera works independently to capture the video image data required for analysis.
3. The multi-camera hardware-in-the-loop test method for intelligent driving according to claim 1, wherein two cameras are provided: the first camera detects lane lines and vehicle information, and the second camera detects traffic lights and traffic sign information.
4. The multi-camera hardware-in-the-loop test method for intelligent driving according to claim 1, wherein the processing method of the image rectification module comprises the following steps:
(21) determine the geometric relationship between the camera data acquisition and the lane, and establish a world coordinate system and an image coordinate system. In the world coordinate system XO1Y, the origin O1 is the center of the lower edge of the field of view in which the camera of the hardware-in-the-loop test system acquires image data; the Y axis is the longitudinal direction of the vehicle coordinate system, pointing in the vehicle's forward direction; the X axis is the lateral direction, pointing to the vehicle's right; a point in this coordinate system is denoted (x, y), in cm. In the image coordinate system UO2V, the origin O2 is the upper-right corner of the camera image plane of the hardware-in-the-loop test system; the U axis is the top row of pixels of the image, pointing left; the V axis is the rightmost column of pixels of the image, pointing down; a pixel in this coordinate system is denoted (u, v), giving the column and row of that pixel in the image array, in pixels;
(22) take the Y axis of the world coordinate system XO1Y, i.e. the longitudinal section of the vehicle. For a point I on the ground plane with coordinates (0, y) in the world coordinate system and its image point I' with coordinates (u, v) in the image coordinate system, I and I' are in a mutual mapping relationship, and the correspondence between the longitudinal V axis and the Y axis of the two coordinate systems is as follows:
where h is the mounting height of the camera, θ is the camera pitch angle, α is the camera subtended angle, and f is the camera focal length;
(23) likewise, for a point I on the ground plane with coordinates (x, y) in the world coordinate system and its image point I' with coordinates (u, v) in the image coordinate system, I and I' are in a mutual mapping relationship, and the correspondence between the lateral U axis and the X axis of the two coordinate systems is as follows:
where l is the width of the lower edge of the camera field of view and N is the total number of pixel columns.
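The mapping formulas of steps (22) and (23) are reproduced as images in the original publication and are not recoverable here. As a hedged illustration only, the following sketch implements the standard pinhole ground-plane (inverse perspective) mapping that this kind of row/column-to-ground correspondence is normally built on, using the symbols h, θ, and f defined in step (22); the patent's exact expressions may differ:

```python
import math

# Hedged sketch: standard inverse-perspective mapping from image rows/columns
# to ground coordinates. This is NOT the patent's exact formula (which is an
# image in the original); it is the common pinhole-model form, assuming an
# image-center row v0 and column u0.

def ground_y(v, h, pitch, f, v0):
    """Longitudinal ground distance (Y axis) of image row v.

    h: camera mounting height, pitch: downward pitch angle theta,
    f: focal length in pixels, v0: image-center row.
    """
    angle = pitch + math.atan((v - v0) / f)   # ray angle below the horizon
    return h / math.tan(angle)

def ground_x(u, y, f, u0):
    """Lateral ground distance (X axis) of image column u at depth y."""
    return y * (u0 - u) / f   # sign chosen to follow a leftward U axis

# The image-center row maps to the distance h / tan(pitch):
print(ground_y(240, h=1.5, pitch=0.3, f=800, v0=240))
```

Rows nearer the bottom of the image (larger v) map to shorter ground distances, which is the behaviour the per-row correction of step (22) relies on.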
5. The multi-camera hardware-in-the-loop test method for intelligent driving according to claim 1, wherein the processing method of the data fusion module comprises the following steps:
(31) because the traffic scene within the camera's field of view changes continuously during image data acquisition, an interpolation calibration model is used; with the traffic scene changing constantly, the motion of the vehicle over a short time is treated as uniform linear motion:
where the position coordinates at times T1 and T2 and the position coordinate of the target at time T are as given above, and the three times satisfy T1 < T < T2;
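Under the uniform-linear-motion assumption of step (31), the target position at the intermediate time T can be obtained by linear interpolation between the two observed positions, which can be sketched as:

```python
# Sketch of the interpolation ("time registration") step (31): assuming
# uniform linear motion over a short interval, the target position at time T
# (with T1 < T < T2) is interpolated from its positions at T1 and T2.

def interpolate_position(t1, p1, t2, p2, t):
    """Linear interpolation of a 2-D position between two camera timestamps."""
    assert t1 < t < t2, "T must lie strictly between T1 and T2"
    w = (t - t1) / (t2 - t1)
    return (p1[0] + w * (p2[0] - p1[0]),
            p1[1] + w * (p2[1] - p1[1]))

# Halfway between the two observations, the target sits at the midpoint:
print(interpolate_position(0.0, (0.0, 0.0), 1.0, (2.0, 4.0), 0.5))
```

This is how two cameras with different frame timestamps can be brought to a common time base before their detections are compared.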
(32) spatial alignment is completed by moving-target trajectory association and the direct linear transformation method: the cameras select a moving target while the test vehicle advances, determine the position information of the same target through target trajectory association, and thereby complete the spatial alignment; the moving-target trajectory association mathematical model based on fuzzy double-threshold association theory is used;
(33) after the time alignment and spatial alignment are completed, the data fusion module obtains the mutually associated captured image information of the cameras under the same reference coordinate system and constructs the Kalman fusion formula:
where S is the variance of the sample error, n is the number of sensors, X is the actual measured value, and A is the fusion result.
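The Kalman fusion formula of step (33) is an image in the original publication. As a hedged sketch, the standard minimum-variance combination consistent with the symbols defined there (S: sample-error variance, X: measurement, A: fusion result) weights each camera's measurement by the inverse of its variance:

```python
# Hedged sketch of the fusion of step (33): a minimum-variance weighted
# average. This is the standard Kalman-style form matching the listed
# symbols; the patent's exact formula may differ.

def fuse_measurements(xs, variances):
    """Fuse n sensor measurements xs, each with sample-error variance S_i."""
    weights = [1.0 / s for s in variances]          # inverse-variance weights
    return sum(w * x for w, x in zip(weights, xs)) / sum(weights)

# Two cameras: the lower-variance (more trustworthy) measurement dominates.
print(fuse_measurements([10.0, 12.0], [1.0, 4.0]))
```

With variances 1.0 and 4.0, the fused estimate lands much closer to the first camera's value, which is the intended behaviour of variance weighting.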
6. The multi-camera hardware-in-the-loop test method for intelligent driving according to claim 1, wherein, during data fusion, the data fusion module synchronizes the perception results of the different cameras as required by the function, using the temporal information and spatial information in the data.
7. The multi-camera hardware-in-the-loop test method for intelligent driving according to claim 1, wherein the moving-target trajectory association mathematical model based on fuzzy double-threshold association theory is used, and the association process comprises the following:
(321) the fuzzy factors are described using relative position, and the fuzzy factors of the cameras are chosen:
where the fuzzy set A = {a1, a2}, in which a1 is the Euclidean distance between the positions at time t and a2 is the target moving direction at time t; Rp and Aq are the sequences of an arbitrary p-th camera and q-th camera respectively; d and θ are the Euclidean distance and the azimuth respectively;
(322) the association membership degree in the track alignment is solved; the mathematical model is as follows:
where s = 1 corresponds to the Euclidean distance and s = 2 to the target moving direction; rs1pq(t) is the association membership degree at time t, taken as normally distributed; τs = {0.01, 0.01}; the corresponding non-association degree is rs2pq(t) = 1 − rs1pq(t); the comprehensive evaluation matrix mathematical model is as follows:
where Wpq(t) is the comprehensive evaluation matrix and [x1 x2] is the weight matrix, with value {0.45, 0.1};
(323) the cameras carry out the environment-perception back-end data fusion process: first a fuzzy double-threshold judgement is made; after moving-target trajectory association is achieved, the three-dimensional coordinates of the image space obtained by the cameras are converted into the same planar rectangular coordinate system by direct linear transformation to realize spatial alignment; the fuzzy double-threshold judgement process comprises the following:
(3231) when the association evaluation satisfies the first threshold condition, H(t) = H(t−1) + 1, where F1 is the first threshold value, taken as 0.75, and H(t) is an accumulation counter;
(3232) when H(t) ≥ F2, with F2 valued at 10, the selected information sequences are the corresponding mutually matched sequences, i.e. the purpose of moving-target trajectory association is achieved.
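The fuzzy double-threshold judgement of steps (3231)–(3232) can be sketched as a per-pair counter. The exact comparison expression of step (3231) is an image in the original; treating the compared score as the comprehensive evaluation Wpq(t) of step (322) is an assumption:

```python
# Sketch of the fuzzy double-threshold judgement of steps (3231)-(3232):
# a counter H(t) accumulates while the evaluation score clears the first
# threshold F1 = 0.75; once H(t) reaches the second threshold F2 = 10, the
# two camera track sequences are declared associated. Interpreting the score
# as W_pq(t) is an assumption; the patent's exact condition is not shown.

F1, F2 = 0.75, 10

def double_threshold_associate(scores):
    """Return True once enough per-frame scores exceed F1 (F2 of them)."""
    h = 0                        # accumulation counter H(t)
    for w in scores:             # w plays the role of W_pq(t)
        if w >= F1:
            h += 1               # step (3231): H(t) = H(t-1) + 1
        if h >= F2:
            return True          # step (3232): sequences mutually matched
    return False

print(double_threshold_associate([0.9] * 12))   # enough high scores: matched
print(double_threshold_associate([0.5] * 12))   # scores never clear F1
```

The double threshold makes the association robust: a single high-scoring frame is not enough, and a single low-scoring frame does not break an otherwise consistent match.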
8. A system based on the multi-camera hardware-in-the-loop test method for intelligent driving according to any one of claims 1 to 7, characterized by comprising:
an environment setup module, in which a computer runs the built-in CarSim software to simulate a test vehicle and a traffic environment; the test vehicle is initially stationary, and a display screen shows the traffic environment in its current simulated state;
a data acquisition module comprising at least two cameras, the cameras being configured to capture the video image data in step (11);
an image rectification module configured to correct the video image data captured by each camera;
a target detection module configured to perceive the corrected image data of each camera;
a data fusion module configured to fuse the perceived image data of the different cameras;
a driving behavior decision module configured to parse the fused perception results, complete the driving decision, and output a decision signal;
a data conversion module configured to convert the decision signal into a control signal for the test vehicle and send it to a control module; and
a control module configured to command, according to the received control signal, the test vehicle to drive in the simulated traffic environment.
9. A device, characterized in that the device comprises:
one or more processors; and
a memory for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors carry out the multi-camera hardware-in-the-loop test method for intelligent driving according to claim 1.
10. A computer-readable storage medium storing a computer program, characterized in that, when executed by a processor, the program implements the multi-camera hardware-in-the-loop test method for intelligent driving according to claim 1.
CN201810957179.8A 2018-08-22 2018-08-22 A kind of multi-cam assemblage on-orbit test method and system towards intelligent driving Pending CN109188932A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810957179.8A CN109188932A (en) 2018-08-22 2018-08-22 A kind of multi-cam assemblage on-orbit test method and system towards intelligent driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810957179.8A CN109188932A (en) 2018-08-22 2018-08-22 A kind of multi-cam assemblage on-orbit test method and system towards intelligent driving

Publications (1)

Publication Number Publication Date
CN109188932A true CN109188932A (en) 2019-01-11

Family

ID=64918832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810957179.8A Pending CN109188932A (en) 2018-08-22 2018-08-22 A kind of multi-cam assemblage on-orbit test method and system towards intelligent driving

Country Status (1)

Country Link
CN (1) CN109188932A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105807630A (en) * 2015-01-21 2016-07-27 福特全球技术公司 Virtual sensor testbed
GB2536770A (en) * 2015-01-21 2016-09-28 Ford Global Tech Llc Virtual sensor testbed
CN106080590A (en) * 2016-06-12 2016-11-09 百度在线网络技术(北京)有限公司 Control method for vehicle and device and the acquisition methods of decision model and device
CN106128115A (en) * 2016-08-01 2016-11-16 青岛理工大学 Fusion method for detecting road traffic information based on double cameras
CN106926800A (en) * 2017-03-28 2017-07-07 重庆大学 The vehicle-mounted visually-perceptible system of multi-cam adaptation
CN206627782U (en) * 2017-03-31 2017-11-10 北京经纬恒润科技有限公司 A kind of hardware-in-the-loop test system of automobile controller
CN107807542A (en) * 2017-11-16 2018-03-16 北京北汽德奔汽车技术中心有限公司 Automatic Pilot analogue system
CN108332716A (en) * 2018-02-07 2018-07-27 徐州艾特卡电子科技有限公司 A kind of autonomous driving vehicle context aware systems

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
吴坚 等: "基于dSPACE的汽车驱动力控制系统硬件在环研究", 《汽车技术》 *
李泽 等: "车路协同环境下行人目标信息融合算法研究", 《交通信息与安全》 *
李静 等: "硬件在环试验台整车状态跟随控制系统设计", 《吉林大学学报(工学版)》 *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109947110A (en) * 2019-04-02 2019-06-28 吉林大学 Lane self-checking algorithm assemblage on-orbit control method and system towards automatic Pilot
CN110070139B (en) * 2019-04-28 2021-10-19 吉林大学 Small sample in-loop learning system and method facing automatic driving environment perception
CN110070139A (en) * 2019-04-28 2019-07-30 吉林大学 Small sample towards automatic Pilot environment sensing is in ring learning system and method
CN111010414A (en) * 2019-04-29 2020-04-14 当家移动绿色互联网技术集团有限公司 Simulation data synchronization method and device, storage medium and electronic equipment
CN110262283A (en) * 2019-06-11 2019-09-20 远形时空科技(北京)有限公司 A kind of the vision robot's emulation platform and method of more scenes
CN110688943A (en) * 2019-09-25 2020-01-14 武汉光庭信息技术股份有限公司 Method and device for automatically acquiring image sample based on actual driving data
CN111209956A (en) * 2020-01-02 2020-05-29 北京汽车集团有限公司 Sensor data fusion method, and vehicle environment map generation method and system
CN111272195A (en) * 2020-02-24 2020-06-12 天津清智科技有限公司 Vehicle camera detection method
CN111583714A (en) * 2020-04-27 2020-08-25 深圳市国脉科技有限公司 Vehicle driving early warning method and device, computer readable medium and electronic equipment
CN112965503B (en) * 2020-05-15 2022-09-16 东风柳州汽车有限公司 Multi-path camera fusion splicing method, device, equipment and storage medium
CN112965503A (en) * 2020-05-15 2021-06-15 东风柳州汽车有限公司 Multi-path camera fusion splicing method, device, equipment and storage medium
CN111532260A (en) * 2020-05-20 2020-08-14 湖北亿咖通科技有限公司 Parking space detection performance evaluation method and electronic equipment
WO2022033179A1 (en) * 2020-08-12 2022-02-17 广州小鹏自动驾驶科技有限公司 Traffic light recognition method and device
CN112362069A (en) * 2020-11-16 2021-02-12 浙江大学 Modular automatic driving algorithm development verification system and method
CN113219507A (en) * 2021-01-29 2021-08-06 重庆长安汽车股份有限公司 RT 3000-based precision measurement method for perception fusion data of automatic driving vehicle
CN113219507B (en) * 2021-01-29 2024-02-23 重庆长安汽车股份有限公司 Precision measurement method for sensing fusion data of automatic driving vehicle based on RT3000
CN112987593A (en) * 2021-02-19 2021-06-18 中国第一汽车股份有限公司 Visual positioning hardware-in-the-loop simulation platform and simulation method
CN112987593B (en) * 2021-02-19 2022-10-28 中国第一汽车股份有限公司 Visual positioning hardware-in-the-loop simulation platform and simulation method
CN113110392A (en) * 2021-04-28 2021-07-13 吉林大学 In-loop testing method for camera hardware of automatic driving automobile based on map import
CN113160454A (en) * 2021-05-31 2021-07-23 重庆长安汽车股份有限公司 Method and system for recharging historical sensor data of automatic driving vehicle
CN113508391A (en) * 2021-06-11 2021-10-15 商汤国际私人有限公司 Data processing method, device and system, medium and computer equipment
WO2022259031A1 (en) * 2021-06-11 2022-12-15 Sensetime International Pte. Ltd. Methods, apparatuses, systems, media, and computer devices for processing data
CN113508391B (en) * 2021-06-11 2024-08-09 商汤国际私人有限公司 Data processing method, device and system, medium and computer equipment
CN117079468A (en) * 2023-10-16 2023-11-17 深圳市城市交通规划设计研究中心股份有限公司 Traffic flow track position method for realizing traffic digital twin
CN117097430A (en) * 2023-10-16 2023-11-21 深圳市城市交通规划设计研究中心股份有限公司 Method for synchronizing simulation time of vehicle flow track position
CN117097430B (en) * 2023-10-16 2024-02-27 深圳市城市交通规划设计研究中心股份有限公司 Method for synchronizing simulation time of vehicle flow track position
CN117079468B (en) * 2023-10-16 2024-02-27 深圳市城市交通规划设计研究中心股份有限公司 Traffic flow track position method for realizing traffic digital twin

Similar Documents

Publication Publication Date Title
CN109188932A (en) A kind of multi-cam assemblage on-orbit test method and system towards intelligent driving
US11276185B2 (en) Target tracking method and apparatus
CN102737236B (en) Method for automatically acquiring vehicle training sample based on multi-modal sensor data
CN105931263B (en) A kind of method for tracking target and electronic equipment
CN110070139A (en) Small sample towards automatic Pilot environment sensing is in ring learning system and method
CN110264495B (en) Target tracking method and device
CN112396650A (en) Target ranging system and method based on fusion of image and laser radar
CN107665507B (en) Method and device for realizing augmented reality based on plane detection
CN109919144B (en) Drivable region detection method, device, computer storage medium and drive test visual apparatus
CN108748184B (en) Robot patrol method based on regional map identification and robot equipment
US11703596B2 (en) Method and system for automatically processing point cloud based on reinforcement learning
CN108106617A (en) A kind of unmanned plane automatic obstacle-avoiding method
CN114252884A (en) Method and device for positioning and monitoring roadside radar, computer equipment and storage medium
CN113537046A (en) Map lane marking method and system based on vehicle track big data detection
CN116030130A (en) Hybrid semantic SLAM method in dynamic environment
CN115439621A (en) Three-dimensional map reconstruction and target detection method for coal mine underground inspection robot
CN106611147A (en) Vehicle tracking method and device
CN116778262B (en) Three-dimensional target detection method and system based on virtual point cloud
CN117496515A (en) Point cloud data labeling method, storage medium and electronic equipment
CN114252859A (en) Target area determination method and device, computer equipment and storage medium
CN112395956A (en) Method and system for detecting passable area facing complex environment
CN114252883A (en) Target detection method, apparatus, computer device and medium
CN208937705U (en) A kind of device of multi-source heterogeneous sensor characteristics depth integration
CN116386003A (en) Three-dimensional target detection method based on knowledge distillation
CN115755072A (en) Special scene positioning method and system based on binocular structured light camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190111