CN114723820A - Road data multiplexing method, driving assistance system and device, and computer equipment - Google Patents

Road data multiplexing method, driving assistance system and device, and computer equipment

Info

Publication number
CN114723820A
CN114723820A (application CN202210231208.9A)
Authority
CN
China
Prior art keywords
vehicle type
camera
image data
external parameter
matrix
Prior art date
Legal status
Pending
Application number
CN202210231208.9A
Other languages
Chinese (zh)
Inventor
赖苗杰
缪盛
刘信凡
Current Assignee
Foss Hangzhou Intelligent Technology Co Ltd
Original Assignee
Foss Hangzhou Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Foss Hangzhou Intelligent Technology Co Ltd
Priority to CN202210231208.9A
Publication of CN114723820A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 5/00 - Registering or indicating the working of vehicles
    • G07C 5/08 - Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C 5/0841 - Registering performance data
    • G07C 5/085 - Registering performance data using electronic data carriers
    • G07C 5/0866 - Registering performance data using electronic data carriers, the electronic data carrier being a digital video recorder in combination with a video camera
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/90 - Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30244 - Camera pose
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30252 - Vehicle exterior; Vicinity of vehicle
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to a road data multiplexing method, a driving assistance system and device, and computer equipment. The method comprises the following steps: determining the deviation between the installation position of a camera on a first vehicle type and the installation position of a camera on a second vehicle type, the first vehicle type being different from the second vehicle type; acquiring a first external parameter matrix, namely the external parameter matrix of the camera on the first vehicle type; correcting the first external parameter matrix according to this installation-position deviation to obtain a second external parameter matrix; acquiring first image data, namely image data captured by the camera on the first vehicle type; and performing a coordinate transformation on the pixel positions in the first image data according to the first and second external parameter matrices to obtain second image data. With this method, image data acquired in a previous project can be reused.

Description

Road data multiplexing method, driving assistance system and device, and computer equipment
Technical Field
The application relates to the technical field of automotive electronic information, and in particular to a road data multiplexing method, a driving assistance system and device, and computer equipment.
Background
In driving assistance software development, the reliability and performance of the software must be extensively verified against a large amount of real or simulated scene data, in order to optimize software performance, increase the true-trigger rate of functions, and reduce the false-trigger rate of the product.
The common practice in current project development is to install the required sensors, data-acquisition host computers, storage devices, and the like on a vehicle body at the early stage of functional software development to collect real scene data. The data to be collected across different scenes typically amounts to roughly 50,000 to 100,000 kilometers; at about 500 kilometers per day, the collection period is about 3 to 6 months, and several road-collection vehicles are usually run in parallel to shorten it. Because sensor installation positions and the like differ between projects, this valuable data, collected at great cost in manpower and material resources, can serve only one project; once that project ends, the data it collected cannot be reused.
Disclosure of Invention
In view of the above, it is necessary to provide a road data multiplexing method, a driving assistance system, a driving assistance device, and computer equipment.
In a first aspect, the present application provides a method for multiplexing road data. The method is applied to a driving assistance system, the system comprises a camera, and the method comprises the following steps:
determining the deviation between the installation position of a camera on a first vehicle type and the installation position of a camera on a second vehicle type, wherein the first vehicle type is different from the second vehicle type;
acquiring a first external parameter matrix; the first external parameter matrix is an external parameter matrix of a camera on the first vehicle type;
correcting the first external parameter matrix again according to the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type to obtain a second external parameter matrix;
acquiring first image data; the first image data is image data shot by a camera on a first vehicle type;
and performing coordinate transformation on the pixel position in the first image data according to the first external parameter matrix and the second external parameter matrix to obtain second image data.
In one embodiment, after the second image data is obtained, the method further includes:
acquiring body data information corresponding to the first vehicle type; the vehicle body data information comprises vehicle speed and direction;
determining whether the communication matrix of the first vehicle type is the same as the communication matrix of the second vehicle type;
and if the communication matrix of the first vehicle type is the same as that of the second vehicle type, the second image data and the vehicle body data information are fed back to a software system corresponding to the second vehicle type.
In one embodiment, the method further comprises:
if the communication matrix of the first vehicle type is different from the communication matrix of the second vehicle type, determining a first mapping relation; the first mapping relation represents a conversion relation between the message signals in the communication matrix of the first vehicle type and the message signals in the communication matrix of the second vehicle type;
and converting the communication matrix of the first vehicle type into the communication matrix of the second vehicle type according to the first mapping relation.
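The conversion via the first mapping relation can be sketched as a signal-renaming table. Everything below (message and signal names, the `signal_map` table, the `convert_frame` helper) is hypothetical and only illustrates translating message signals from one communication matrix to another; real communication matrices are defined in DBC files and also differ in scaling, byte order, and so on:

```python
# Hypothetical first mapping relation:
# (source message, source signal) -> (target message, target signal).
signal_map = {
    ("ESP_01", "VehicleSpeed"): ("ESP_State", "VehSpd"),
    ("SAS_01", "SteerAngle"): ("SAS_State", "StrWhlAngle"),
}

def convert_frame(message, signals):
    """Rename one CAN frame's signals from the first vehicle type's
    communication matrix into the second vehicle type's."""
    target_message, converted = None, {}
    for name, value in signals.items():
        target_message, target_signal = signal_map[(message, name)]
        converted[target_signal] = value
    return target_message, converted
```

For example, `convert_frame("ESP_01", {"VehicleSpeed": 88.5})` yields `("ESP_State", {"VehSpd": 88.5})`.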
In one embodiment, the performing coordinate transformation on the pixel position in the first image data according to the first extrinsic parameter matrix and the second extrinsic parameter matrix to obtain second image data includes:
determining a first position coordinate, in the image, of the imaging point of each corner point of the calibration plate according to the first external parameter matrix;
determining a second position coordinate, in the image, of the imaging point of each corner point of the calibration plate according to the second external parameter matrix;
determining a coordinate conversion relation according to the first position coordinate and the second position coordinate;
and according to the coordinate conversion relation, carrying out coordinate transformation on the pixel position in the first image data to obtain second image data.
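For a planar calibration plate, the coordinate conversion relation determined from the two sets of corner-point coordinates is a 3x3 homography. A minimal direct-linear-transform sketch in pure NumPy (no coordinate normalisation; the function name is hypothetical and this is an illustration, not the patent's own implementation):

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from at least
    four corresponding point pairs (direct linear transform)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography vector spans the null space of A.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

Each corner pair contributes two rows to the linear system; with four non-degenerate corners the eight-degree-of-freedom homography is determined exactly.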
In one embodiment, the re-correcting the first external parameter matrix according to a deviation between an installation position of a camera on the first vehicle type and an installation position of a camera on the second vehicle type to obtain a second external parameter matrix includes:
acquiring the installation position of a camera on a first vehicle type;
determining a second mapping relation between the position of the camera on the first vehicle type and the first external parameter matrix;
and correcting the first external parameter matrix again according to the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type and the second mapping relation to obtain a second external parameter matrix.
In a second aspect, the application also provides a driving assistance system. The system comprises: the system comprises an original data acquisition module, an image conversion module and a video analysis module; the image conversion module is respectively connected with the original data acquisition module and the video analysis module;
the original data acquisition module is used for acquiring first image data; the first image data is image data shot by a camera on a historical vehicle type;
the image conversion module is used for correcting the first external parameter matrix again according to the deviation between the installation position of the camera on the historical vehicle model and the installation position of the camera on the actual vehicle model to obtain a second external parameter matrix, and performing coordinate transformation on the pixel position in the first image data according to the first external parameter matrix and the second external parameter matrix to obtain second image data; the historical vehicle type is different from the actual vehicle type; the first external parameter matrix is an external parameter matrix of a camera on the historical vehicle type;
the video analysis module is used for identifying a target object in the second image data to obtain first identification information of the target object; the target objects comprise lane lines, license plates, traffic signs and pedestrians.
In one embodiment, the system further comprises: the system comprises a target object identification information fusion module, a communication conversion module and a planning control module; the target object identification information fusion module is respectively connected with the video analysis module and the planning control module; the communication conversion module is connected with the planning control module;
the target object identification information fusion module is used for fusing the first identification information with second identification information of the target object acquired by a radar to obtain third identification information of the target object;
the communication conversion module is used for determining whether the communication matrix of the historical vehicle type is the same as that of the actual vehicle type; if they are different, determining a first mapping relation and converting the communication matrix of the historical vehicle type into the communication matrix of the actual vehicle type according to the first mapping relation; the first mapping relation represents a conversion relation between message signals in the communication matrix of the historical vehicle type and message signals in the communication matrix of the actual vehicle type;
the planning control module is used for acquiring, through the communication conversion module, vehicle body data information corresponding to the actual vehicle type and planning a path for a vehicle of the actual vehicle type according to the vehicle body data information and the third identification information; the vehicle body data information includes vehicle speed and direction.
In a third aspect, the present application further provides a device for multiplexing road data, where the device includes:
the first acquisition module is used for determining the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type; the first vehicle type is different from the second vehicle type;
the second acquisition module is used for acquiring the first external parameter matrix; the first external parameter matrix is an external parameter matrix of a camera on the first vehicle type;
the external parameter matrix correction module is used for correcting the first external parameter matrix again according to the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type to obtain a second external parameter matrix;
the third acquisition module is used for acquiring the first image data; the first image data is image data shot by a camera on a first vehicle type;
and the coordinate transformation module is used for carrying out coordinate transformation on the pixel position in the first image data according to the first external parameter matrix and the second external parameter matrix to obtain second image data.
In a fourth aspect, the application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the following steps when executing the computer program:
determining the deviation between the installation position of a camera on a first vehicle type and the installation position of a camera on a second vehicle type, wherein the first vehicle type is different from the second vehicle type;
acquiring a first external parameter matrix; the first external parameter matrix is an external parameter matrix of a camera on the first vehicle type;
correcting the first external parameter matrix again according to the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type to obtain a second external parameter matrix;
acquiring first image data; the first image data is image data shot by a camera on a first vehicle type;
and performing coordinate transformation on the pixel position in the first image data according to the first external parameter matrix and the second external parameter matrix to obtain second image data.
In a fifth aspect, the present application further provides a vehicle, including a road end sensor, a communication module, a memory, a processor, and a computer program stored in the memory and operable on the processor, where the road end sensor includes a laser radar, a millimeter wave radar, and a camera, and the processor is connected to the memory, the road end sensor, and the communication module; the processor implements the following steps when executing the computer program:
determining the deviation between the installation position of a camera on a first vehicle type and the installation position of a camera on a second vehicle type, wherein the first vehicle type is different from the second vehicle type;
acquiring a first external parameter matrix; the first external parameter matrix is an external parameter matrix of a camera on the first vehicle type;
correcting the first external parameter matrix again according to the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type to obtain a second external parameter matrix;
acquiring first image data; the first image data is image data shot by a camera on a first vehicle type;
and performing coordinate transformation on the pixel position in the first image data according to the first external parameter matrix and the second external parameter matrix to obtain second image data.
According to the road data multiplexing method, driving assistance system, driving assistance device, and computer equipment described above, the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type is determined, the first vehicle type being different from the second vehicle type; a first external parameter matrix, namely the external parameter matrix of the camera on the first vehicle type, is acquired; the first external parameter matrix is corrected according to the installation-position deviation to obtain a second external parameter matrix; first image data, namely image data captured by the camera on the first vehicle type, is acquired; and a coordinate transformation is performed on the pixel positions in the first image data according to the first and second external parameter matrices to obtain second image data. The second image data can then be used to test the software system corresponding to the second vehicle type, which solves the problem that data collected in a previous project could not be reused.
Drawings
Fig. 1 is a block diagram of a hardware configuration of a terminal of a multiplexing method of road data in one embodiment;
FIG. 2 is a schematic flow chart illustrating a method for multiplexing road data according to an embodiment;
FIG. 3 is a schematic flow chart illustrating another method for multiplexing road data according to one embodiment;
FIG. 4 is a schematic flow chart diagram illustrating a method for transforming image data according to one embodiment;
FIG. 5 is a schematic diagram of coordinate system conversion in one embodiment;
FIG. 6 is a schematic illustration of distortion correction in one embodiment;
FIG. 7 is a diagram illustrating image data collected under a first external parameter matrix, according to one embodiment;
FIG. 8 is a diagram illustrating image data collected under a second external parameter matrix, according to one embodiment;
FIG. 9 is a schematic illustration of a driver assistance system in one embodiment;
FIG. 10 is a schematic illustration of another driver assistance system in one embodiment;
FIG. 11 is a schematic illustration of yet another driver assistance system in one embodiment;
fig. 12 is a block diagram showing a structure of a road data multiplexing apparatus according to an embodiment.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more clearly understood, the application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the application and are not intended to limit it.
Unless defined otherwise, technical or scientific terms used herein shall have the same general meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The use of the terms "a" and "an" and "the" and similar referents in the context of this application do not denote a limitation of quantity, either in the singular or the plural. The terms "comprises," "comprising," "has," "having," and any variations thereof, as referred to in this application, are intended to cover non-exclusive inclusions; for example, a process, method, and system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or modules, but may include other steps or modules (elements) not listed or inherent to such process, method, article, or apparatus. Reference throughout this application to "connected," "coupled," and the like is not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference to "a plurality" in this application means two or more. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. In general, the character "/" indicates a relationship in which the objects associated before and after are an "or". The terms "first," "second," "third," and the like in this application are used for distinguishing between similar items and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided herein may be executed on a terminal, a computer, or a similar computing device. Taking execution on a terminal as an example, fig. 1 is a block diagram of the hardware structure of a terminal implementing the road data multiplexing method according to an embodiment of the present invention. As shown in fig. 1, the terminal may include one or more processors 101 (only one is shown in fig. 1) and a memory 102. The processor 101 may include, but is not limited to, a processing device such as a central processing unit (CPU), a microcontroller unit (MCU), or a programmable logic device (FPGA); the memory 102 may include read-only memory (ROM) and/or random-access memory (RAM). The processor 101 may perform various suitable actions and processes according to computer program instructions stored in the ROM or loaded into the RAM from the storage unit 107; the RAM may also store the various programs and data required for the operation of the terminal. The processor 101 and the memory 102 are connected to each other by a bus 103, to which an input/output interface 104 is also connected.
A number of components in the terminal are connected to the input/output interface 104, including: an input unit 105 such as a keyboard, a mouse, and the like; an output unit 106 such as various types of displays, speakers, and the like; a storage unit 107 such as a magnetic disk, an optical disk, or the like; and a communication unit 108, such as a network card, modem, wireless communication transceiver, etc. The communication unit 108 allows the terminal to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The various processes of the method embodiments provided herein may be performed by the processor 101. For example, in some embodiments, the methods provided herein may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 107. In some embodiments, part or all of the computer program may be loaded and/or installed onto the terminal via the ROM and/or the communication unit 108. When the computer program is loaded into the RAM and executed by the CPU, the steps of the methods provided herein may be performed.
The driving assistance system provided by the embodiments of the application may be arranged in a vehicle and used to control the vehicle for intelligent driving assistance. The driving assistance system may include a detection device for acquiring, in real time, detection information of the traffic participants and environment around the vehicle, such as pedestrians, lane lines, and traffic signs.
It should be noted that the detection device in the embodiments of the present application may provide 360-degree sensor coverage, and may include at least one of a vision camera, a millimeter-wave sensor, and a laser radar sensor, which is not limited herein.
In one embodiment, as shown in fig. 2, there is provided a method for multiplexing road data, which can be applied to a driving assistance system, the system including a camera, the method including the steps of:
step 201, acquiring and determining a deviation between an installation position of a camera on a first vehicle type and an installation position of a camera on a second vehicle type; the first vehicle type is different from the second vehicle type.
Specifically, the driving assistance system in this embodiment may be disposed in the vehicle and configured to control the vehicle to perform intelligent driving assistance, and the driving assistance system may include a camera and configured to obtain, in real time, detection information of traffic participants around the vehicle and an environment, for example, detection information of pedestrians, lane lines, traffic signs, and the like.
The first vehicle type and the second vehicle type are used in different projects. The cameras mounted on the two vehicle types are identical, but their installation positions differ, so the deviation between the installation position of the camera on the first vehicle type and that on the second vehicle type can be obtained from the two installation positions.
Step 202, acquiring a first external parameter matrix; the first external parameter matrix is an external parameter matrix of a camera on the first vehicle type.
It should be noted that the external parameter matrix converts the position of a target object in the world coordinate system into a position in the camera coordinate system; when the same camera is installed at different positions, the corresponding external parameter matrices differ.
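As an illustration of this conversion, the world-to-camera mapping is a single matrix product of a 3x4 external parameter matrix [R|t] with a homogeneous world point. This is a generic pinhole-model sketch, not the patent's own code; the function name is hypothetical:

```python
import numpy as np

def world_to_camera(extrinsic, point_world):
    """Convert a 3D point from world coordinates to camera coordinates
    using a 3x4 external parameter matrix [R|t]: Xc = R @ Xw + t."""
    homogeneous = np.append(np.asarray(point_world, dtype=float), 1.0)
    return extrinsic @ homogeneous
```

For example, a camera whose extrinsic is the identity rotation with a translation of 5 along z maps the world point (1, 2, 3) to (1, 2, 8) in camera coordinates.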
And 203, correcting the first external parameter matrix again according to the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type to obtain a second external parameter matrix.
Specifically, because the installation position of the camera on the first vehicle type differs from that on the second vehicle type, the external parameter matrices of the two cameras differ, and consequently the image data collected by the camera on the first vehicle type differs from the image data the camera would collect on the second vehicle type; the image data collected on the first vehicle type therefore cannot be used directly in the software system corresponding to the second vehicle type. In this embodiment, the first external parameter matrix is corrected according to the deviation between the two installation positions to obtain the second external parameter matrix, so that the deviation between the image data captured on the first vehicle type and the image data that would be captured on the second vehicle type can be derived from the deviation between the two external parameter matrices.
Step 204, acquiring first image data; the first image data is image data shot by a camera on the first vehicle type.
Step 205, performing a coordinate transformation on the pixel positions in the first image data according to the first external parameter matrix and the second external parameter matrix to obtain second image data.
Specifically, the internal parameters of a given camera model are fixed; installing the same camera at different positions mainly changes its external parameters, which in turn changes the collected image data.
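For points lying on a common plane (for example the road surface, z = 0 in vehicle coordinates), the pixel-level transformation between the two external parameter matrices reduces to a 3x3 homography. The sketch below is one plausible reading of the coordinate transformation in step 205, assuming identical internal parameters K for both mountings; the function name and variable names are illustrative only:

```python
import numpy as np

def ground_plane_warp(K, E1, E2):
    """Homography mapping pixels of the camera on the first vehicle type
    to the camera pose on the second vehicle type, valid for points on the
    z = 0 plane. E1, E2 are 3x4 external parameter matrices [R|t]."""
    H1 = K @ E1[:, [0, 1, 3]]  # columns r1, r2, t of the plane projection
    H2 = K @ E2[:, [0, 1, 3]]
    return H2 @ np.linalg.inv(H1)
```

In practice the resulting 3x3 matrix could be applied to the whole first image with, for example, OpenCV's `cv2.warpPerspective` to produce the second image data.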
According to the road data multiplexing method above, the first external parameter matrix is corrected according to the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type to obtain the second external parameter matrix, and the pixel positions in the first image data are coordinate-transformed according to the first and second external parameter matrices to obtain the second image data. The second image data can then be used to test the software system corresponding to the second vehicle type, which improves the reuse rate of image data collected in previous projects.
In one embodiment, referring to fig. 3, after step 205 obtains the second image data, the method further includes the following steps:
step 302, obtaining body data information corresponding to a first vehicle type; the vehicle body data information includes vehicle speed and direction.
Step 304, determining whether the communication matrix of the first vehicle type is the same as the communication matrix of the second vehicle type.
It should be noted that the communication matrix mainly includes the CAN node network topology, the CAN node message signal definition list, the message signal receiving list, and the like, and different product projects often involve several vehicle models with different configurations.
And step 306, if the communication matrix of the first vehicle type is the same as that of the second vehicle type, the second image data and the vehicle body data information are fed back to the software system corresponding to the second vehicle type.
Specifically, if the communication matrix of the first vehicle type is the same as the communication matrix of the second vehicle type, the vehicle body data information corresponding to the first vehicle type may be fed back into the software system corresponding to the second vehicle type and used to test that software system. Here, the software system is an advanced driving assistance system, which includes at least one of: navigation and real-time traffic systems, car networking, adaptive cruise, lane departure warning systems, lane keeping systems, collision avoidance or pre-collision systems, night vision systems, adaptive light control, pedestrian protection systems, automatic parking systems, traffic sign recognition, blind spot detection, driver fatigue detection, downhill control systems and electric vehicle warning systems.
It can be understood that, in this embodiment, if the communication matrix of the first vehicle type is the same as the communication matrix of the second vehicle type, the second image data and the vehicle body data information corresponding to the first vehicle type may be fed back directly into the software system corresponding to the second vehicle type. That software system can then be tested using these data: the vehicle body data information corresponding to the first vehicle type is reused as-is, and the image data corresponding to the first vehicle type is converted before use. This realizes the reusability of the image data corresponding to the first vehicle type and solves the problem that data collected in a previous project cannot be reused.
In one embodiment, after determining whether the communication matrix of the first vehicle type is the same as the communication matrix of the second vehicle type, the method includes:
if the communication matrix of the first vehicle type is different from the communication matrix of the second vehicle type, determining a first mapping relation; the first mapping relation represents a conversion relation between the message signals in the communication matrix of the first vehicle type and the message signals in the communication matrix of the second vehicle type;
and converting the communication matrix of the first vehicle type into the communication matrix of the second vehicle type according to the first mapping relation.
Specifically, if the communication matrix of the first vehicle type is different from the communication matrix of the second vehicle type, the vehicle body data information corresponding to the first vehicle type cannot be directly fed back to the software system corresponding to the second vehicle type, and therefore, the communication matrix corresponding to the first vehicle type needs to be converted into the communication matrix corresponding to the second vehicle type.
It can be understood that, in this embodiment, the first mapping relationship is obtained in advance according to the conversion relationship between the message signal in the communication matrix of the first vehicle type and the message signal in the communication matrix of the second vehicle type, and the communication matrix of the first vehicle type can be quickly and accurately converted into the communication matrix of the second vehicle type according to the first mapping relationship.
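As an illustrative sketch of the first mapping relation described above — all signal names, scale factors and the table format below are hypothetical assumptions, not taken from the embodiment — converting decoded message signals from the first vehicle type's communication matrix into the second's might look like this:

```python
# Minimal sketch of the first mapping relation: renaming and rescaling
# message signals from the first vehicle type's communication matrix into
# the second vehicle type's. Signal names and factors are illustrative.
FIRST_MAPPING = {
    # first-type signal -> (second-type signal, scale, offset)
    "VehSpd_kmh":   ("VehicleSpeed",  1.0, 0.0),
    "SteerAng_deg": ("SteeringAngle", 0.1, 0.0),
}

def convert_frame(signals: dict) -> dict:
    """Convert one decoded CAN frame from the first matrix to the second."""
    out = {}
    for name, value in signals.items():
        if name in FIRST_MAPPING:
            target, scale, offset = FIRST_MAPPING[name]
            out[target] = value * scale + offset
    return out

print(convert_frame({"VehSpd_kmh": 60.0, "SteerAng_deg": 45.0}))
```

In practice the table would cover every entry of the message signal definition lists; unmapped signals are simply dropped here, which is one possible policy among several.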
In another embodiment, if the communication matrix of the first vehicle type is different from the communication matrix of the second vehicle type, when the communication matrix corresponding to the second vehicle type is compiled, a CAN node network topological graph of each vehicle type is drawn according to a product configuration definition table, and then the communication matrix of the second vehicle type is modified based on a message signal sending definition list and a project comprehensive signal receiving relation list of a project comprehensive CAN node corresponding to the first vehicle type.
By means of the method, the communication matrix of the first vehicle type is converted into the communication matrix of the second vehicle type, so that the vehicle body data information corresponding to the first vehicle type can be applied to the software system of the second vehicle type, and the repeated utilization rate of the vehicle body data information corresponding to the first vehicle type is improved.
In an embodiment, step 205 performs coordinate transformation on a pixel position in the first image data according to the first external reference matrix and the second external reference matrix to obtain the second image data, as shown in fig. 4, specifically includes the following steps:
step 402, determining a first position coordinate of an imaging point of each corner point in the calibration plate in the image according to the first external reference matrix.
And step 404, determining a second position coordinate of an imaging point of each corner point in the calibration plate in the image according to the second external reference matrix.
And 406, determining a coordinate conversion relation according to the first position coordinate and the second position coordinate.
And 408, performing coordinate transformation on the pixel position in the first image data according to the coordinate transformation relation to obtain second image data.
Specifically, referring to fig. 5 and fig. 6, the external parameter matrix describes how real-world coordinate points are rotated and translated into the camera coordinate system; it is mainly given by a rotation matrix and a translation matrix. The internal parameter matrix describes how a point passes through the lens and, via pinhole imaging and electronization, becomes a pixel point; the distortion matrix describes why a pixel point does not fall exactly where theory predicts, for example due to barrel distortion. For the same type of camera, the internal parameter matrix and the distortion matrix describe inherent properties fixed once the camera is produced, while different installation positions of the same camera mainly lead to different external parameters and therefore to different acquired image data. As shown in fig. 7 and fig. 8, fig. 7 is a schematic diagram of the image data acquired under the first external parameter matrix in one embodiment, and fig. 8 is a schematic diagram of the image data acquired under the second external parameter matrix in one embodiment. Therefore, in this embodiment, the position coordinates of the imaging point of each corner point in the calibration plate are first determined under the first external parameter matrix and under the second external parameter matrix respectively, and a coordinate conversion relation is determined from these position coordinates, so that the pixel coordinates in the first image data can be coordinate-converted according to that relation and the converted image data can be used to test the software system corresponding to the second vehicle type.
In an example, the corner point coordinates of a calibration plate in the world coordinate system and the internal parameter matrix of the camera on the first vehicle type are obtained. The corner point coordinates of the calibration plate in the camera coordinate system of the first vehicle type are obtained from the corner point coordinates in the world coordinate system and the first external parameter matrix. The first position coordinates of the imaging point of each corner point in the image are then obtained from the corner point coordinates in the camera coordinate system and the internal parameter matrix of the camera on the first vehicle type. The second position coordinates of the imaging point of each corner point in the image are obtained in the same way using the second external parameter matrix.
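The projection chain in the example above (world coordinates → camera coordinates via the external parameter matrix → image coordinates via the internal parameter matrix) can be sketched as follows; the intrinsic matrix, the two extrinsic values and the corner coordinate are illustrative assumptions, not values from the embodiment:

```python
import numpy as np

# Hypothetical pinhole intrinsic matrix K (focal length 800 px,
# principal point at (320, 240)).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(corner_w, R, t):
    """World corner -> camera coordinates -> pixel coordinates."""
    p_cam = R @ corner_w + t          # apply the external parameters
    uvw = K @ p_cam                   # apply the internal parameters
    return uvw[:2] / uvw[2]           # perspective divide

# First and (shifted-mount) second extrinsics, orientation unchanged.
R1, t1 = np.eye(3), np.array([0.0, 0.0, 5.0])
R2, t2 = np.eye(3), np.array([0.2, 0.0, 5.0])

corner = np.array([0.1, 0.0, 0.0])    # one calibration-plate corner (m)
print(project(corner, R1, t1))        # first position coordinate
print(project(corner, R2, t2))        # second position coordinate
```

The same corner lands on different image positions under the two extrinsic matrices, which is exactly the pair of coordinates steps 402 and 404 collect.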
It can be understood that, in this embodiment, the first external parameter matrix is the external parameter matrix corresponding to the installation position of the camera on the first vehicle type, and the second external parameter matrix is the external parameter matrix corresponding to the installation position of the camera on the second vehicle type. Therefore, according to the difference between the first and second external parameter matrices, this embodiment can perform coordinate transformation on the pixel positions in the first image data to obtain the second image data, so that the second image data can be used in testing the software system corresponding to the second vehicle type.
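Steps 406 and 408 — determining the coordinate conversion relation from the first and second position coordinates, then applying it to pixel positions — can be sketched under the common assumption that the relation is a planar homography fitted by direct linear transformation; the corner correspondences below are illustrative:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via DLT/SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null-space vector = best-fit H
    return H / H[2, 2]

def transform(H, pt):
    """Apply the coordinate conversion relation to one pixel position."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Illustrative corner imaging points under the first and second extrinsics.
first_pts  = [(0, 0), (100, 0), (100, 100), (0, 100)]
second_pts = [(10, 5), (110, 5), (110, 105), (10, 105)]  # shifted mount

H = fit_homography(first_pts, second_pts)
print(transform(H, (50, 50)))  # -> approximately [60. 55.]
```

Warping every pixel of the first image through `transform` then yields the second image data of step 408; a homography is exact only for (near-)planar scenes, which is one reason a calibration plate is used.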
In one embodiment, step 203, re-correcting the first external parameter matrix according to a deviation between an installation position of a camera on the first vehicle type and an installation position of a camera on the second vehicle type, to obtain a second external parameter matrix, specifically includes:
acquiring the installation position of a camera on a first vehicle type;
determining a second mapping relation between the position of the camera on the first vehicle type and the first external parameter matrix;
and correcting the first external parameter matrix again according to the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type and the second mapping relation to obtain a second external parameter matrix.
Specifically, the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type are coordinates of the camera in the world coordinate system. The second mapping relation gives the mapping between the camera's coordinates in the world coordinate system and the first external parameter matrix, and the deviation between the two installation positions is a coordinate deviation in the world coordinate system. The first external parameter matrix can therefore be corrected again according to this coordinate deviation and the second mapping relation to obtain the second external parameter matrix.
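A minimal sketch of this correction, under the simplifying assumption that the mounting deviation is a pure world-frame translation and the camera orientation is unchanged (the actual second mapping relation may be more general):

```python
import numpy as np

def correct_extrinsics(R1, t1, mount_deviation_world):
    """Re-correct the first extrinsic matrix for a shifted mounting position.

    Illustrative assumption: orientation is unchanged, and a world-frame
    shift d of the mount changes the translation part by -R @ d (the scene
    moves the opposite way in camera coordinates).
    """
    R2 = R1.copy()                       # rotation part unchanged
    t2 = t1 - R1 @ mount_deviation_world # translation part re-corrected
    return R2, t2

R1 = np.eye(3)
t1 = np.array([0.0, 0.0, 5.0])
d = np.array([0.2, 0.0, 0.0])            # mount shifted 0.2 m along x
R2, t2 = correct_extrinsics(R1, t1, d)
print(t2)  # -> [-0.2  0.   5. ]
```

If the two mounts also differ in orientation, the rotation part would need a corresponding update; the translation-only case is shown because it matches the coordinate-deviation description above.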
It can be understood that, in this embodiment, the second external parameter matrix can be obtained quickly and accurately according to the predetermined second mapping relation, so that the pixel positions in the first image data can be coordinate-transformed according to the first and second external parameter matrices, and the software system corresponding to the second vehicle type can be tested with the coordinate-transformed image data.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in the sequence indicated by the arrows, they are not necessarily executed in that sequence. Unless explicitly stated herein, the steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In an embodiment, please refer to fig. 9, which is a schematic diagram of a driving assistance system. As shown in fig. 9, the system includes a raw data acquisition module 91, an image conversion module 92 and a video parsing module 93; the image conversion module 92 is connected to the raw data acquisition module 91 and to the video parsing module 93;
the raw data acquiring module 91 is used for acquiring first image data; the first image data is image data shot by a camera on a historical vehicle type;
the image conversion module 92 is used for correcting the first external parameter matrix again according to the deviation between the installation position of the camera on the historical vehicle model and the installation position of the camera on the actual vehicle model to obtain a second external parameter matrix, and performing coordinate transformation on the pixel position in the first image data according to the first external parameter matrix and the second external parameter matrix to obtain second image data; the historical vehicle type is different from the actual vehicle type; the first external parameter matrix is an external parameter matrix of a camera on a historical vehicle type;
the video parsing module 93 is configured to identify a target object in the second image data to obtain first identification information of the target object; the target objects include lane lines, license plates, traffic signs and pedestrians.
Specifically, the historical vehicle type is a vehicle type in a previous project, and the actual vehicle type is a vehicle type in a current project.
It can be understood that, in this embodiment, the image conversion module 92 converts the image data captured by the camera in the historical vehicle type into the image data corresponding to the camera in the actual vehicle type, so that the converted image data can be applied to the actual project, and the image data collected by the camera in the historical vehicle type is reused.
In an embodiment, please refer to fig. 10, fig. 10 is a schematic diagram of another driving assistance system in an embodiment, as shown in fig. 10, the system further includes a target object identification information fusion module 101, a communication conversion module 102, and a planning control module 103; the target object identification information fusion module 101 is respectively connected with the video analysis module 93 and the planning control module 103; the communication conversion module 102 is connected with the planning control module 103;
the target object identification information fusion module 101 is configured to fuse the first identification information with second identification information of the target object obtained by the radar to obtain third identification information of the target object;
the communication conversion module 102 is used for determining whether the communication matrix of the historical vehicle type is the same as that of the actual vehicle type, determining a first mapping relation if they are different, and converting the communication matrix of the historical vehicle type into the communication matrix of the actual vehicle type according to the first mapping relation; the first mapping relation represents a conversion relation between message signals in the communication matrix of the historical vehicle type and message signals in the communication matrix of the actual vehicle type;
the planning control module 103 is configured to obtain, through the communication conversion module, vehicle body data information corresponding to the actual vehicle type, and to plan a path for a vehicle of the actual vehicle type according to the vehicle body data information and the third identification information; the vehicle body data information includes vehicle speed and direction.
In an embodiment, please refer to fig. 11, which is a schematic diagram of another driving assistance system. The system includes an image conversion module 111, a video parsing module 112, a fusion module 113, a gateway module 114, and a planning control module 115; the image conversion module 111 is connected to the video parsing module 112, the fusion module 113 is connected to the video parsing module 112 and to the planning control module 115, and the planning control module 115 is connected to the gateway module 114;
The image conversion module 111 is an algorithm module that performs coordinate transformation on the video data sensed by the camera based on the installation position of the sensor.
Specifically, the video data is the original scene data identified by the camera and recorded over the Ethernet during scene data acquisition; the image conversion module 111 uses an FPGA to convert the video stream data acquired by the same camera installed on different vehicle models, thereby unifying the data.
The image conversion module 111 equivalently converts the existing video scene data into image data for different installation positions, using a linear method of translational and rotational coordinate transformation according to the differences in installation position. The equivalent dimensions of the scene data information elements remain unchanged, and after passing through the image conversion module the information value is equivalent to data from the actual installation position. The procedure is as follows:
Step1:
the image conversion module 111: the difference of the installation of the carding platform camera and the installation of the customer model camera, wherein the platform camera is a camera adopted in the previous project, and the customer model camera is a camera adopted in the current project.
Step2:
The image conversion module 111: re-correct the external parameter matrix of the platform camera based on the difference between the installation of the platform camera and the installation of the customer vehicle model camera.
Step3:
The image conversion module 111: based on the FPGA, translate and convert the image pixel coordinates in the video data shot by the platform camera to obtain the image data corresponding to the customer vehicle model camera.
Step4:
The image conversion module 111: load the program written in Step 3 into the FPGA project.
Step5:
Run the program, and feed the video data and CAN vehicle body data information acquired by the platform camera back to the customer vehicle model.
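Step 3's move-and-convert of image pixel coordinates can be sketched in software as a per-pixel remap; the pure-translation model and the small example image are illustrative assumptions (the embodiment performs this on an FPGA over a video stream):

```python
def remap_translate(image, dx, dy, fill=0):
    """Shift every pixel of a 2D image by (dx, dy) — a software stand-in
    for the FPGA move-and-convert step. The pure-translation model is an
    illustrative assumption; a full remap would use the fitted conversion
    relation instead."""
    h, w = len(image), len(image[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = x - dx, y - dy        # source pixel for target (x, y)
            if 0 <= sx < w and 0 <= sy < h:
                out[y][x] = image[sy][sx]
    return out

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
print(remap_translate(img, 1, 0))  # shift right by one column
```

Iterating over target pixels and sampling the source (rather than the reverse) avoids holes in the output, which is also how hardware remap pipelines are usually organized.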
The video parsing module 112 is configured to perform data processing on the video data, including extracting element information such as lane lines, targets, and traffic signs and attribute values thereof.
The fusion module 113 is an algorithm processing module that performs weight distribution and reconfirmation on element information appearing at the same time in both the output of the video parsing module 112 and the radar data.
The radar data is the radar-perceived target identification data recorded over the CAN network during scene data acquisition.
The gateway module 114 maps the signal to a corresponding module based on the CAN communication matrix.
The gateway module 114 provided in this embodiment converts the customer project communication matrix into the platform communication matrix and ensures that the vehicle input signals and the function output signals are passed through. The procedure is as follows:
Step1:
the gateway module 114: sort out the Mapping relation table between the platform communication matrix and the client communication matrix.
Step2:
Generate a .h file from the communication matrix DBC file using script conversion; the attribute information in the Mapping relation table is defined in the .h file.
Step3:
Generate a .c file from the communication matrix Mapping table through a script file.
Step4:
The gateway module 114: load the .h and .c files generated in Step 2 and Step 3 into a Keil5 project for compiling.
Step5:
Run the program; the network data of the platform vehicle model is converted into the protocol of the client vehicle model.
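Steps 2 and 3 — generating the .h/.c artifacts from the Mapping table by script — might be sketched as follows; the table contents, signal names and header layout are illustrative assumptions, not the project's actual DBC-derived format:

```python
import csv
import io

# Hypothetical Mapping relation table (platform signal -> client signal),
# as it might be exported from the Excel sheet described above.
MAPPING_CSV = """platform_signal,client_signal
PLT_VehSpd,CLI_VehicleSpeed
PLT_SteerAng,CLI_SteeringAngle
"""

def generate_header(csv_text: str) -> str:
    """Emit a .h file defining the Mapping relation table as macro pairs."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    lines = ["#ifndef GATEWAY_MAPPING_H", "#define GATEWAY_MAPPING_H", ""]
    for row in rows:
        lines.append(
            f"#define MAP_{row['platform_signal']} {row['client_signal']}")
    lines += ["", "#endif /* GATEWAY_MAPPING_H */", ""]
    return "\n".join(lines)

print(generate_header(MAPPING_CSV))
```

A companion script would emit the .c translation table the same way; keeping the mapping in one spreadsheet and generating both files from it is what makes the gateway development "fill in a table, no programming skill needed".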
The planning control module 115 performs path planning based on the scene data output by the fusion module 113, requests execution data from the lateral and longitudinal actuators, and sends signals such as alarms and instrument displays.
The vehicle data is vehicle state data information recorded through a CAN network in a scene data acquisition process, and the vehicle state data information comprises at least one of the following information:
acceleration, deceleration, backing up and turning.
The vehicle data is sent to the planning control module 115 through the gateway module 114.
The output control is sent to the relevant actuators via the CAN network according to the signals calculated by the planning control module 115.
Through the above means, compared with rendering different driving scenes through scene rendering tools such as VTD (Virtual Test Drive), performing coordinate transformation on real road-collected scene data through the image conversion module 111 yields scene data that are rich in detail, wide in scene coverage and large in volume; such data are closest to real driving scenes, so the whole software development process can be verified more comprehensively. Compared with using actual scene data collected at the sensor installation position of a similar project or an approximate vehicle model, converting images through the image conversion module 111 gives higher video precision and a better recharge effect. Compared with collecting new video data for the project, processing the data collected in the earlier project through the image conversion module 111 removes the road-collection step, avoids wasting manpower and material costs, accelerates the project development schedule and improves the reuse rate of data collected in earlier projects; the same scene can be applied to different projects, and identified special scenes can be accumulated as resources for later projects. In addition, the development process of the gateway module 114 is simple: no programming skill is needed, and in combination with the existing script tools, only the corresponding Mapping relation needs to be filled into an Excel table to generate the .c and .h files, so the development process is efficient and accurate.
In one embodiment, as shown in fig. 12, there is provided a road data multiplexing apparatus including: the device comprises a first acquisition module, a second acquisition module, a third acquisition module, an external parameter matrix correction module and a coordinate transformation module, wherein:
a first obtaining module 120, configured to obtain and determine a deviation between an installation position of a camera on a first vehicle type and an installation position of a camera on a second vehicle type; the first vehicle type is different from the second vehicle type;
a second obtaining module 121, configured to obtain a first extrinsic parameter matrix; the first external parameter matrix is an external parameter matrix of a camera on the first vehicle type;
the external parameter matrix correction module 122 is configured to correct the first external parameter matrix again according to a deviation between an installation position of a camera on the first vehicle type and an installation position of a camera on the second vehicle type, so as to obtain a second external parameter matrix;
a third obtaining module 123, configured to obtain the first image data; the first image data is image data shot by a camera on a first vehicle type;
and the coordinate transformation module 124 is configured to perform coordinate transformation on the pixel position in the first image data according to the first extrinsic parameter matrix and the second extrinsic parameter matrix to obtain second image data.
In one embodiment, the apparatus for multiplexing road data further includes:
acquiring body data information corresponding to the first vehicle type; the vehicle body data information comprises vehicle speed and direction;
determining whether the communication matrix of the first vehicle type is the same as the communication matrix of the second vehicle type;
and if the communication matrix of the first vehicle type is the same as that of the second vehicle type, the second image data and the vehicle body data information are fed back to a software system corresponding to the second vehicle type.
In one embodiment, the apparatus for multiplexing road data further includes:
if the communication matrix of the first vehicle type is different from the communication matrix of the second vehicle type, determining a first mapping relation; the first mapping relation represents a conversion relation between the message signals in the communication matrix of the first vehicle type and the message signals in the communication matrix of the second vehicle type;
and converting the communication matrix of the first vehicle type into the communication matrix of the second vehicle type according to the first mapping relation.
In one embodiment, the coordinate transformation module 124 includes:
determining a first position coordinate of an imaging point of each corner point in the calibration plate in the image according to the first external reference matrix;
determining a second position coordinate of an imaging point of each corner point in the calibration plate in the image according to the second external reference matrix;
determining a coordinate conversion relation according to the first position coordinate and the second position coordinate;
and according to the coordinate conversion relation, carrying out coordinate transformation on the pixel position in the first image data to obtain second image data.
In one embodiment, the external parameter matrix rectification module 122 includes:
acquiring the installation position of a camera on a first vehicle type;
determining a second mapping relation between the position of the camera on the first vehicle type and the first external parameter matrix;
and correcting the first external parameter matrix again according to the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type and the second mapping relation to obtain a second external parameter matrix.
The respective modules in the road data multiplexing apparatus may be wholly or partially implemented by software, hardware, or a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a vehicle is provided, comprising a road-end sensor, a communication module, a memory, a processor, and a computer program stored on the memory and executable on the processor, the road-end sensor comprising a laser radar, a millimeter-wave radar and a camera, the processor being respectively connected to the memory, the road-end sensor and the communication module, the processor when executing the computer program implementing the steps of:
acquiring and determining the deviation between the installation position of a camera on a first vehicle type and the installation position of a camera on a second vehicle type; the first vehicle type is different from the second vehicle type;
acquiring a first external parameter matrix; the first external parameter matrix is an external parameter matrix of a camera on the first vehicle type;
correcting the first external parameter matrix again according to the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type to obtain a second external parameter matrix;
acquiring first image data; the first image data is image data shot by a camera on a first vehicle type;
and performing coordinate transformation on the pixel position in the first image data according to the first external reference matrix and the second external reference matrix to obtain second image data.
In one embodiment, a vehicle is further provided, which includes a road-end sensor, a communication module, a memory, a processor and a computer program stored on the memory and executable on the processor, where the road-end sensor includes a laser radar, a millimeter-wave radar and a camera, the processor is respectively connected to the memory, the road-end sensor and the communication module, and the processor implements the steps of the above method embodiments when executing the computer program.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring and determining the deviation between the installation position of a camera on a first vehicle type and the installation position of a camera on a second vehicle type; the first vehicle type is different from the second vehicle type;
acquiring a first external parameter matrix; the first external parameter matrix is an external parameter matrix of a camera on the first vehicle type;
correcting the first external parameter matrix again according to the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type to obtain a second external parameter matrix;
acquiring first image data; the first image data is image data shot by a camera on a first vehicle type;
and performing coordinate transformation on the pixel position in the first image data according to the first external reference matrix and the second external reference matrix to obtain second image data.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by instructing relevant hardware through a computer program; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, etc., without limitation.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any combination of them that contains no contradiction should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and while their description is specific and detailed, it should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A road data multiplexing method, applied to a driving assistance system, wherein the system comprises a camera, and the method comprises the following steps:
determining the deviation between the installation position of a camera on a first vehicle type and the installation position of a camera on a second vehicle type; the first vehicle type is different from the second vehicle type;
acquiring a first external parameter matrix; the first external parameter matrix is an external parameter matrix of a camera on the first vehicle type;
correcting the first external parameter matrix again according to the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type to obtain a second external parameter matrix;
acquiring first image data; the first image data is image data shot by a camera on a first vehicle type;
and performing coordinate transformation on the pixel position in the first image data according to the first external parameter matrix and the second external parameter matrix to obtain second image data.
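As a hedged illustration only (not the patent's implementation), the claimed correction-and-remap pipeline can be sketched with a pinhole camera model and a ground-plane homography. All numbers below (intrinsics, mounting positions, the deviation, the sample pixel) are invented placeholders:

```python
import numpy as np

# All values are hypothetical placeholders; the claim fixes none of them.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])            # camera intrinsic matrix

R = np.eye(3)                              # rotation part of the first extrinsic matrix
t1 = np.array([0.1, 1.4, 1.5])             # translation on the first vehicle type (m)
deviation = np.array([0.0, 0.15, 0.05])    # mounting-position deviation between vehicle types (m)

# Correct the first extrinsic matrix by the deviation to obtain the second one.
t2 = t1 + deviation

# Homographies induced by each pose for points on the ground plane (Z = 0).
H1 = K @ np.column_stack((R[:, 0], R[:, 1], t1))
H2 = K @ np.column_stack((R[:, 0], R[:, 1], t2))

# Remap a pixel of the first image data into the second image data.
warp = H2 @ np.linalg.inv(H1)              # pixel-to-pixel mapping on the ground plane
p1 = np.array([700.0, 400.0, 1.0])         # a pixel of the first image (homogeneous)
p2 = warp @ p1
p2 /= p2[2]                                # normalize back to pixel coordinates
```

With a zero deviation, `warp` reduces to the identity and `p2` equals `p1`; a real system would additionally account for rotation changes and lens distortion.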
2. The method of claim 1, wherein, after the obtaining of the second image data, the method further comprises:
acquiring body data information corresponding to the first vehicle type; the vehicle body data information comprises vehicle speed and direction;
determining whether the communication matrix of the first vehicle type is the same as the communication matrix of the second vehicle type;
and if the communication matrix of the first vehicle type is the same as the communication matrix of the second vehicle type, feeding back the second image data and the vehicle body data information to a software system corresponding to the second vehicle type.
3. The method of claim 2, further comprising:
if the communication matrix of the first vehicle type is different from the communication matrix of the second vehicle type, determining a first mapping relation; the first mapping relation represents a conversion relation between the message signals in the communication matrix of the first vehicle type and the message signals in the communication matrix of the second vehicle type;
and converting the communication matrix of the first vehicle type into the communication matrix of the second vehicle type according to the first mapping relation.
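Claim 3's conversion can be sketched as a per-signal lookup. The signal names and scale factors here are invented for illustration; in practice the first mapping relation would be derived from the two vehicle types' communication-matrix (e.g. DBC) definitions:

```python
# Hypothetical first mapping relation: source signal -> (target signal, conversion).
first_mapping = {
    "VehSpd_kmh":   ("vehicle_speed",  lambda v: v),        # same unit, pass through
    "SteerAng_raw": ("steering_angle", lambda v: v * 0.1),  # raw counts -> degrees
}

def convert_message(signals):
    """Translate one decoded message of the first vehicle type's matrix
    into the second vehicle type's signal names and units."""
    out = {}
    for name, value in signals.items():
        if name in first_mapping:
            target, conv = first_mapping[name]
            out[target] = conv(value)
    return out

converted = convert_message({"VehSpd_kmh": 62.0, "SteerAng_raw": 150})
```

Signals absent from the mapping are dropped here; a production converter would instead log or default them so the second vehicle type's software never sees a partial frame.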
4. The method of claim 1, wherein the performing coordinate transformation on the pixel positions in the first image data according to the first external parameter matrix and the second external parameter matrix to obtain the second image data comprises:
determining a first position coordinate, in the image, of the imaging point of each corner point in the calibration plate according to the first external parameter matrix;
determining a second position coordinate, in the image, of the imaging point of each corner point in the calibration plate according to the second external parameter matrix;
determining a coordinate conversion relation according to the first position coordinate and the second position coordinate;
and according to the coordinate conversion relation, carrying out coordinate transformation on the pixel position in the first image data to obtain second image data.
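The four steps of claim 4 can be sketched by projecting hypothetical calibration-plate corners through both extrinsic matrices and fitting the coordinate conversion relation. The fit below is a simple least-squares affine map, which is exact only for this toy translation-only setup (a projective fit would be needed in general), and all geometry is invented:

```python
import numpy as np

# Hypothetical calibration-plate corners on the plate plane (Z = 0, metres, homogeneous).
corners = np.array([[x, y, 0.0, 1.0] for x in (0.0, 0.2, 0.4) for y in (0.0, 0.2)])

K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])

def project(extrinsic, pts):
    """Pixel coordinates of world points under a 3x4 extrinsic matrix."""
    cam = extrinsic @ pts.T              # 3xN points in the camera frame
    img = K @ cam
    return (img[:2] / img[2]).T          # Nx2 pixel coordinates

T1 = np.hstack((np.eye(3), [[0.1], [1.4], [1.5]]))    # first extrinsic matrix
T2 = np.hstack((np.eye(3), [[0.1], [1.55], [1.55]]))  # corrected second extrinsic matrix

p1 = project(T1, corners)                # first position coordinates
p2 = project(T2, corners)                # second position coordinates

# Fit the coordinate conversion relation as an affine map p2 ~= [u, v, 1] @ conv.
A_in = np.hstack((p1, np.ones((len(p1), 1))))
conv, *_ = np.linalg.lstsq(A_in, p2, rcond=None)      # 3x2 conversion matrix

# Apply the conversion relation to any pixel of the first image data.
pixel = np.array([700.0, 400.0, 1.0]) @ conv
```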
5. The method of claim 1, wherein the correcting the first external parameter matrix again according to the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type to obtain a second external parameter matrix comprises:
acquiring the installation position of a camera on a first vehicle type;
determining a second mapping relation between the position of the camera on the first vehicle type and the first external parameter matrix;
and correcting the first external parameter matrix again according to the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type and the second mapping relation to obtain a second external parameter matrix.
6. A driving assistance system, characterized in that the system comprises: the system comprises an original data acquisition module, an image conversion module and a video analysis module; the image conversion module is respectively connected with the original data acquisition module and the video analysis module;
the original data acquisition module is used for acquiring first image data; the first image data is image data shot by a camera on a historical vehicle type;
the image conversion module is used for correcting the first external parameter matrix again according to the deviation between the installation position of the camera on the historical vehicle model and the installation position of the camera on the actual vehicle model to obtain a second external parameter matrix, and performing coordinate transformation on the pixel position in the first image data according to the first external parameter matrix and the second external parameter matrix to obtain second image data; the historical vehicle type is different from the actual vehicle type; the first external parameter matrix is an external parameter matrix of a camera on the historical vehicle model;
the video analysis module is used for identifying a target object in the second image data to obtain first identification information of the target object; the target objects comprise lane lines, license plates, traffic signs and pedestrians.
7. The system of claim 6, further comprising: the system comprises a target object identification information fusion module, a communication conversion module and a planning control module; the target object identification information fusion module is respectively connected with the video analysis module and the planning control module; the communication conversion module is connected with the planning control module;
the target object identification information fusion module is used for fusing the first identification information with second identification information of the target object acquired by a radar to obtain third identification information of the target object;
the communication conversion module is used for determining whether a communication matrix of the historical vehicle type is the same as that of the actual vehicle type, determining a first mapping relation if the communication matrix of the historical vehicle type is different from that of the actual vehicle type, and converting the communication matrix of the historical vehicle type into the communication matrix of the actual vehicle type according to the first mapping relation; the first mapping relation represents a conversion relation between the message signals in the communication matrix of the historical vehicle type and the message signals in the communication matrix of the actual vehicle type;
the planning control module is used for acquiring vehicle body data information corresponding to the actual vehicle type through the communication conversion module and planning a path for a vehicle of the actual vehicle type according to the vehicle body data information and the third identification information; the vehicle body data information includes vehicle speed and direction.
8. A road data multiplexing device, applied to a driving assistance system, wherein the system comprises a camera, and the device comprises:
the first acquisition module is used for acquiring and determining the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type; the first vehicle type is different from the second vehicle type;
the second acquisition module is used for acquiring the first external parameter matrix; the first external parameter matrix is an external parameter matrix of a camera on the first vehicle type;
the external parameter matrix correction module is used for correcting the first external parameter matrix again according to the deviation between the installation position of the camera on the first vehicle type and the installation position of the camera on the second vehicle type to obtain a second external parameter matrix;
the third acquisition module is used for acquiring the first image data; the first image data is image data shot by a camera on a first vehicle type;
and the coordinate transformation module is used for carrying out coordinate transformation on the pixel position in the first image data according to the first external parameter matrix and the second external parameter matrix to obtain second image data.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 5.
10. A vehicle comprising an end-of-road sensor, a communication module, a memory, a processor and a computer program stored on the memory and executable on the processor, the end-of-road sensor comprising a lidar, a millimeter wave radar and a camera, the processor being connected to the memory, the end-of-road sensor and the communication module, respectively, characterized in that the processor implements the method for multiplexing road data according to any one of claims 1 to 5 when executing the computer program.
CN202210231208.9A 2022-03-09 2022-03-09 Road data multiplexing method, driving assisting system, driving assisting device and computer equipment Pending CN114723820A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210231208.9A CN114723820A (en) 2022-03-09 2022-03-09 Road data multiplexing method, driving assisting system, driving assisting device and computer equipment


Publications (1)

Publication Number Publication Date
CN114723820A (en) 2022-07-08

Family

ID=82236968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210231208.9A Pending CN114723820A (en) 2022-03-09 2022-03-09 Road data multiplexing method, driving assisting system, driving assisting device and computer equipment

Country Status (1)

Country Link
CN (1) CN114723820A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797442A (en) * 2022-12-01 2023-03-14 昆易电子科技(上海)有限公司 Simulation image reinjection method of target position and related equipment thereof


Similar Documents

Publication Publication Date Title
Wu et al. Tracking vehicle trajectories and fuel rates in phantom traffic jams: Methodology and data
CN106462996B (en) Method and device for displaying vehicle surrounding environment without distortion
DE102020106204A1 (en) Technologies for managing a world model of a monitored area
CN112753038B (en) Method and device for identifying lane change trend of vehicle
WO2023221566A1 (en) 3d target detection method and apparatus based on multi-view fusion
CN112348848A (en) Information generation method and system for traffic participants
CN114764778A (en) Target detection method, target detection model training method and related equipment
CN114140592A (en) High-precision map generation method, device, equipment, medium and automatic driving vehicle
CN114723820A (en) Road data multiplexing method, driving assisting system, driving assisting device and computer equipment
CN111401190A (en) Vehicle detection method, device, computer equipment and storage medium
CN115344503A (en) Traffic flow simulation system and simulation method for automatic driving planning control test
CN117576652B (en) Road object identification method and device, storage medium and electronic equipment
CN114550116A (en) Object identification method and device
Bai et al. Cyber mobility mirror for enabling cooperative driving automation in mixed traffic: A co-simulation platform
CN112507891B (en) Method and device for automatically identifying high-speed intersection and constructing intersection vector
CN114550117A (en) Image detection method and device
CN117308972A (en) Vehicle positioning method, device, storage medium and electronic equipment
CN116524718A (en) Remote visual processing method and system for intersection data
CN113902047B (en) Image element matching method, device, equipment and storage medium
CN115792867A (en) Laser radar simulation method and device
CN113808186B (en) Training data generation method and device and electronic equipment
CN115240168A (en) Perception result obtaining method and device, computer equipment and storage medium
US20210329219A1 (en) Transfer of additional information among camera systems
US20230274590A1 (en) Scalable sensor analysis for vehicular driving assistance system
CN116295469B (en) High-precision map generation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination