CN116743937A - Domain controller and vehicle running control method - Google Patents

Domain controller and vehicle running control method

Info

Publication number
CN116743937A
Authority
CN
China
Prior art keywords
obstacle
deserializer
information
obstacle image
processing chip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311014651.1A
Other languages
Chinese (zh)
Other versions
CN116743937B (en)
Inventor
施嘉婷
姚根
于英俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hekun Technology Beijing Co ltd
Original Assignee
Hekun Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hekun Technology Beijing Co ltd filed Critical Hekun Technology Beijing Co ltd
Priority to CN202311014651.1A (granted as CN116743937B)
Publication of CN116743937A
Application granted
Publication of CN116743937B
Active legal status
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/04: Synchronising
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/0098: Details of control systems ensuring comfort, safety or stability not otherwise provided for
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001: Details of the control system
    • B60W2050/0043: Signal treatments, identification of variables or parameters, parameter estimation or state estimation
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2552/00: Input parameters relating to infrastructure
    • B60W2552/50: Barriers

Abstract

Embodiments of the present disclosure disclose a domain controller and a vehicle travel control method. One embodiment of the domain controller includes a first deserializer, a second deserializer, a first image pickup processing chip, a second image pickup processing chip, and a main control chip, wherein: the main control chip is communicatively connected to a positioning navigation terminal and is configured to acquire time information and a time stamp signal from it; the main control chip is communicatively connected to the first deserializer and the second deserializer; and the main control chip is communicatively connected to the first image pickup processing chip and the second image pickup processing chip. This embodiment improves the safety of vehicle travel.

Description

Domain controller and vehicle running control method
Technical Field
Embodiments of the present disclosure relate to the field of vehicle domain controllers, and in particular, to a domain controller and a vehicle travel control method.
Background
During vehicle travel, the domain controller must control the on-board cameras to capture images of the vehicle's surroundings. Currently, when a domain controller receives camera data, the following approach is generally adopted: a synchronization signal generated inside each deserializer controls the cameras connected to that deserializer to capture images synchronously; the processing chips synchronize the images using an external synchronization time-compensation scheme and send them to the main control chip; and the main control chip obtains obstacle information by extracting semantic information from the images.
However, the inventors found that when the above domain controller is used to receive camera data, the following technical problems often arise:
First, the image acquisition times of different deserializers are inconsistent, so when the main control chip gathers the images of all cameras at a given moment, some image data is missing; this reduces the accuracy of image data analysis and hence the safety of vehicle travel.
Second, because each deserializer generates its synchronization signal internally, the exposure times of the camera modules differ, so the image information collected by the camera modules is not synchronized; after the processing chips receive the image data, the timestamps attached to it do not refer to the same point in time, which reduces the accuracy of the obstacle information obtained by the different processing chips and hence the safety of vehicle travel.
Third, under the external synchronization time-compensation scheme, different deserializers acquire images at different rates, making it difficult for the main control chip to obtain the image data captured by every camera in real time; this reduces the timeliness of the obstacle information and hence of vehicle travel control.
Fourth, extracting semantic information from an image easily loses low-level features, which reduces the accuracy of the extracted obstacle information and hence the accuracy of vehicle travel control.
The information disclosed above in this background section is only for enhancement of understanding of the background of the inventive concept and therefore may contain information that does not constitute prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
This summary is provided to introduce concepts in a simplified form that are further described below in the detailed description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a domain controller and a vehicle travel control method to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a domain controller including a first deserializer, a second deserializer, a first image pickup processing chip, a second image pickup processing chip, and a main control chip, wherein: the main control chip is communicatively connected to a positioning navigation terminal and is configured to acquire time information and a time stamp signal from it; the main control chip is communicatively connected to the first deserializer and the second deserializer, is configured to generate an exposure synchronization signal, and sends the exposure synchronization signal to the first deserializer and the second deserializer simultaneously, the first deserializer and the second deserializer being configured to control the cameras to collect obstacle image information according to the exposure synchronization signal; and the main control chip is communicatively connected to the first image pickup processing chip and the second image pickup processing chip and is configured to send the time information and the time stamp signal to them, the first image pickup processing chip and the second image pickup processing chip being configured to generate timestamp information from the time stamp signal.
Optionally, the first image pickup processing chip is communicatively connected to the first deserializer and is configured to acquire first obstacle image information from the first deserializer and add the timestamp information to it; the second image pickup processing chip is communicatively connected to the second deserializer and is configured to acquire second obstacle image information from the second deserializer and add the timestamp information to it.
Optionally, a main control serial peripheral interface is provided on the main control chip, a first camera serial peripheral interface on the first image pickup processing chip, and a second camera serial peripheral interface on the second image pickup processing chip. The main control chip is communicatively connected to the first image pickup processing chip through the main control serial peripheral interface and the first camera serial peripheral interface, which are used to transmit the time information, the time stamp signal, and the first obstacle image information; it is communicatively connected to the second image pickup processing chip through the main control serial peripheral interface and the second camera serial peripheral interface, which are used to transmit the time information, the time stamp signal, and the second obstacle image information.
Optionally, the main control chip is provided with an exposure synchronization interface and a universal asynchronous receiver/transmitter interface: the main control chip is communicatively connected to the first deserializer and the second deserializer through the exposure synchronization interface, which is used to transmit the exposure synchronization signal; and the main control chip is communicatively connected to the positioning navigation terminal through the universal asynchronous receiver/transmitter interface, which is used to transmit the time information and the time stamp signal.
Optionally, the first deserializer is communicatively connected to a first preset number of cameras, and the second deserializer to a second preset number of cameras. A first mobile processor interface is provided on the first image pickup processing chip; the first deserializer is communicatively connected to the first image pickup processing chip through the first mobile processor interface, which is used to transmit the first obstacle image information. A second mobile processor interface is provided on the second image pickup processing chip; the second deserializer is communicatively connected to the second image pickup processing chip through the second mobile processor interface, which is used to transmit the second obstacle image information.
In a second aspect, some embodiments of the present disclosure provide a vehicle travel control method including: acquiring time information and a time stamp control signal; generating an exposure synchronization signal and sending it simultaneously to a first deserializer and a second deserializer to control them to collect first obstacle image information and second obstacle image information, respectively; sending the time information and the time stamp control signal to a first image pickup processing chip and a second image pickup processing chip to control them to acquire, at the same time, the first obstacle image information and the second obstacle image information from the first deserializer and the second deserializer respectively, and updating the first obstacle image information and the second obstacle image information to obtain first synchronous obstacle image information and second synchronous obstacle image information; acquiring the first synchronous obstacle image information and the second synchronous obstacle image information from the first image pickup processing chip and the second image pickup processing chip, and performing feature fusion on them to obtain obstacle feature information; and sending the obstacle feature information to a positioning navigation terminal to control the travel of the target vehicle.
The above embodiments of the present disclosure have the following beneficial effects. The domain controller of some embodiments includes a first deserializer, a second deserializer, a first image pickup processing chip, a second image pickup processing chip, and a main control chip, wherein: the main control chip is communicatively connected to the positioning navigation terminal; the main control chip is communicatively connected to the first deserializer and the second deserializer, generates an exposure synchronization signal, and sends it to the first deserializer and the second deserializer simultaneously, the two deserializers controlling the cameras to collect obstacle image information according to the exposure synchronization signal; and the main control chip is communicatively connected to the first image pickup processing chip and the second image pickup processing chip. The domain controller can thus generate the exposure synchronization signal through the main control chip and send the time stamp signal to each image pickup processing chip; on receiving the exposure synchronization signal from the main control chip, each deserializer synchronously controls its connected cameras to collect obstacle images, and each processing chip adds timestamp information to the obstacle images it receives from its deserializer, yielding obstacle images collected at the same moment. The main control chip therefore obtains complete obstacle image information for a single moment, which improves the accuracy of image data analysis and hence the safety of vehicle travel.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of the architecture of some embodiments of a domain controller according to the present disclosure;
FIG. 2 is a schematic diagram of other embodiments of a domain controller according to the present disclosure;
fig. 3 is a schematic structural view of a first deserializer, a second deserializer, a first image-pickup processing chip, and a second image-pickup processing chip of a domain controller according to the present disclosure;
fig. 4 is a flow chart of some embodiments of a vehicle travel control method according to the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" or "a plurality" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Referring first to fig. 1, fig. 1 illustrates a schematic diagram of some embodiments of a domain controller according to the present disclosure. As shown in fig. 1, the domain controller includes: a first deserializer 1, a second deserializer 2, a first image pickup processing chip 3, a second image pickup processing chip 4, and a main control chip 5, wherein: the main control chip 5 is communicatively connected to the positioning navigation terminal 6 and is used to acquire time information and a time stamp signal from it. The first deserializer 1 and the second deserializer 2 may be deserializers for cameras. The positioning navigation terminal 6 may be a terminal for navigating and positioning a vehicle. The time information may be the date and time of the current moment, and the time stamp signal may characterize the date and time of the current moment.
As an example, the first image pickup processing chip 3 and the second image pickup processing chip 4 may each be an SoC (System on Chip), and the main control chip 5 may be an MCU (Microcontroller Unit). The positioning navigation terminal 6 may be a GNSS (Global Navigation Satellite System) terminal. The time stamp signal may be a PPS (pulse-per-second) signal.
In some embodiments, the main control chip 5 is communicatively connected to the first deserializer 1 and the second deserializer 2 respectively. The main control chip 5 is configured to generate an exposure synchronization signal and send the exposure synchronization signal to the first deserializer 1 and the second deserializer 2 at the same time, where the first deserializer 1 and the second deserializer 2 are configured to control the camera to collect the image information of the obstacle according to the exposure synchronization signal. The main control chip 5 may transmit the exposure synchronization signal to the first deserializer 1 and the second deserializer 2 at the same time. The obstacle image information may include, but is not limited to, at least one of: an obstacle image set. Each obstacle image in the obstacle image set may be photographed by each camera connected to the deserializer.
As an example, the above exposure synchronization signal may be an Fsync (frame synchronization) signal.
The main control chip 5 is respectively in communication connection with the first image pickup processing chip 3 and the second image pickup processing chip 4. The main control chip 5 is configured to send the time stamp signal to the first image capturing processing chip 3 and the second image capturing processing chip 4, and the first image capturing processing chip 3 and the second image capturing processing chip 4 are configured to generate time stamp information according to the time stamp signal. The above-described timestamp information may characterize the date and time of the current time.
Optionally, the first image pickup processing chip 3 is communicatively connected to the first deserializer 1 and is configured to acquire first obstacle image information from the first deserializer 1 and add the timestamp information to it. The second image pickup processing chip 4 is communicatively connected to the second deserializer 2 and is configured to acquire second obstacle image information from the second deserializer 2 and add the timestamp information to it. Thus, first obstacle image information and second obstacle image information for the same moment can be obtained. The first obstacle image information may include, but is not limited to, a first obstacle image set, each first obstacle image of which may be captured by a camera connected to the first deserializer. The second obstacle image information may include, but is not limited to, a second obstacle image set, each second obstacle image of which may be captured by a camera connected to the second deserializer.
The above domain controller is further described below in conjunction with fig. 2 and fig. 1. Fig. 2 is a schematic structural diagram of further embodiments of a domain controller according to the present disclosure. As shown in fig. 2, a main control serial peripheral interface 51 is provided on the main control chip 5, a first camera serial peripheral interface 31 on the first image pickup processing chip 3, and a second camera serial peripheral interface 41 on the second image pickup processing chip 4. The main control chip 5 is communicatively connected to the first image pickup processing chip 3 through the main control serial peripheral interface 51 and the first camera serial peripheral interface 31, which are used to transmit the time information, the time stamp signal, and the first obstacle image information. The main control chip 5 is communicatively connected to the second image pickup processing chip 4 through the main control serial peripheral interface 51 and the second camera serial peripheral interface 41, which are used to transmit the time information, the time stamp signal, and the second obstacle image information.
As an example, the main control serial peripheral interface 51, the first camera serial peripheral interface 31, and the second camera serial peripheral interface 41 may each be an SPI (Serial Peripheral Interface) interface.
Optionally, the main control chip 5 is provided with an exposure synchronization interface 52 and a universal asynchronous receiver/transmitter interface 53: the main control chip 5 is communicatively connected to the first deserializer 1 and the second deserializer 2 through the exposure synchronization interface 52, which is used to transmit the exposure synchronization signal; and the main control chip 5 is communicatively connected to the positioning navigation terminal 6 through the universal asynchronous receiver/transmitter interface 53, which is used to transmit the time information and the time stamp signal.
By way of example, the universal asynchronous receiver/transmitter interface 53 may be a UART (Universal Asynchronous Receiver/Transmitter) interface.
The above design of the main control chip 5 is an inventive point of the embodiments of the present disclosure and solves the second technical problem presented in the background, namely that the safety of vehicle travel is reduced. The factor reducing the safety of vehicle travel is as follows: because the synchronization signals are generated internally by different deserializers that are not synchronized with one another, the exposure times of the camera modules differ, so the image information collected by the camera modules is not synchronized; after the processing chips receive the image data, the timestamps marking it do not refer to the same point in time, which reduces the accuracy of the obstacle information obtained by the different processing chips. Eliminating this factor improves the safety of vehicle travel. To achieve this effect, the present disclosure uses the main control chip to generate the exposure synchronization signal and send it to each deserializer through the exposure synchronization interface, so that the different cameras connected to the different deserializers are controlled to expose synchronously and collect obstacle images at the same moment. This improves the accuracy with which the image pickup processing chips add timestamps to the received obstacle images, hence the accuracy of the obstacle information obtained by the different processing chips, and hence the safety of vehicle travel.
The above domain controller is further described below in conjunction with fig. 3 and fig. 1. Fig. 3 is a schematic diagram of the structures of the first deserializer, the second deserializer, the first image pickup processing chip, and the second image pickup processing chip of the domain controller according to the present disclosure. As shown in fig. 3, the first deserializer 1 is communicatively connected to a first preset number of cameras, and the second deserializer 2 to a second preset number of cameras. A first mobile processor interface 32 is provided on the first image pickup processing chip 3; the first deserializer 1 is communicatively connected to the first image pickup processing chip 3 through the first mobile processor interface 32, which is used to transmit the first obstacle image information. A second mobile processor interface 42 is provided on the second image pickup processing chip 4; the second deserializer 2 is communicatively connected to the second image pickup processing chip 4 through the second mobile processor interface 42, which is used to transmit the second obstacle image information.
Specifically, in response to receiving the exposure synchronization signal, the first deserializer 1 may control each camera connected to it to capture a first obstacle image, then determine the obtained first obstacle images as the first obstacle image information and send it to the first image pickup processing chip 3. In response to receiving the exposure synchronization signal, the second deserializer 2 may control each camera connected to it to capture a second obstacle image, then determine the obtained second obstacle images as the second obstacle image information and send it to the second image pickup processing chip 4.
As an example, the first preset number may be 2 and the second preset number may be 4. The first mobile processor interface 32 and the second mobile processor interface 42 may be MIPI (Mobile Industry Processor Interface) interfaces, and the first obstacle image information and the second obstacle image information may be in MIPI format.
In practice, the master control chip 5 of the domain controller may be configured to perform the following steps:
First, time information and a time stamp control signal are acquired. They may be obtained from the positioning navigation terminal through the universal asynchronous receiver/transmitter interface provided on the main control chip. The time information may be the date and time of the current moment, and the time stamp control signal may characterize the date and time of the current moment.
Second, an exposure synchronization signal is generated and sent simultaneously to the first deserializer and the second deserializer to control them to collect first obstacle image information and second obstacle image information, respectively. The exposure synchronization signal may be sent to both deserializers simultaneously through the exposure synchronization interface provided on the main control chip, and indicates that the main control chip intends to collect the first obstacle image information and the second obstacle image information. The first obstacle image information may include, but is not limited to, a first obstacle image set, each first obstacle image of which may be captured by a camera connected to the first deserializer. The second obstacle image information may include, but is not limited to, a second obstacle image set, each second obstacle image of which may be captured by a camera connected to the second deserializer.
Third, the time information and the timestamp control signal are transmitted to a first image capturing processing chip and a second image capturing processing chip, so as to control the first image capturing processing chip and the second image capturing processing chip to simultaneously acquire the first obstacle image information and the second obstacle image information from the first deserializer and the second deserializer, respectively, and to update the first obstacle image information and the second obstacle image information to obtain first synchronous obstacle image information and second synchronous obstacle image information. The time information and the timestamp control signal may be sent simultaneously to the first image capturing processing chip and the second image capturing processing chip through a master serial peripheral interface provided on the main control chip. Then, in response to receiving the timestamp control signal, the first image capturing processing chip may acquire the first obstacle image information from the first deserializer through a first mobile processor interface provided on the first image capturing processing chip, and may then add the time information to the first obstacle image information to obtain the first synchronous obstacle image information. Likewise, in response to receiving the timestamp control signal, the second image capturing processing chip may acquire the second obstacle image information from the second deserializer through a second mobile processor interface provided on the second image capturing processing chip, and may then add the time information to the second obstacle image information to obtain the second synchronous obstacle image information.
Fourth, the first synchronous obstacle image information and the second synchronous obstacle image information are acquired from the first image capturing processing chip and the second image capturing processing chip, and feature fusion processing is performed on the first synchronous obstacle image information and the second synchronous obstacle image information to obtain obstacle feature information. The first synchronous obstacle image information and the second synchronous obstacle image information may be obtained from the first image capturing processing chip and the second image capturing processing chip through the master serial peripheral interface provided on the main control chip.
Fifth, the obstacle feature information is transmitted to a positioning and navigation terminal to control a target vehicle to travel. The obstacle feature information may be sent to the positioning and navigation terminal through the universal asynchronous receiver-transmitter interface provided on the main control chip, so as to control travel of the target vehicle. The positioning and navigation terminal may be configured to control the target vehicle to travel according to the obstacle feature information. The target vehicle may be a vehicle that is currently traveling.
The above vehicle travel control method is taken as an invention point of the embodiments of the present disclosure, and solves the third technical problem presented in the background art, namely "reduced real-time performance of vehicle travel control". A factor that reduces the real-time performance of vehicle travel control is often the following: in an external-synchronization time-compensation scheme, different deserializers acquire images at different frequencies, so the main control chip has difficulty obtaining the image data captured by each camera in real time, which reduces the timeliness of the obtained obstacle information. If this factor is addressed, the real-time performance of vehicle travel control can be improved. To this end, the domain controller of the present disclosure may first acquire time information and a timestamp control signal. Second, an exposure synchronization signal is generated and simultaneously sent to the first deserializer and the second deserializer, so as to control them to acquire the first obstacle image information and the second obstacle image information, respectively. In this way, the first deserializer and the second deserializer can simultaneously control their connected cameras to capture obstacle images.
Then, the time information and the timestamp control signal are sent to the first image capturing processing chip and the second image capturing processing chip, so as to control the two chips to acquire the first obstacle image information and the second obstacle image information from the first deserializer and the second deserializer at the same time, and to update them to obtain the first synchronous obstacle image information and the second synchronous obstacle image information. In this way, each image capturing processing chip can add timestamp information to all obstacle images acquired simultaneously, indicating that they were captured at the same moment. Next, the first and second synchronous obstacle image information are acquired from the two image capturing processing chips, and feature fusion processing is performed on them to obtain obstacle feature information. Thus, the feature information in each obstacle image can be obtained. Finally, the obstacle feature information is sent to the positioning and navigation terminal to control the target vehicle to travel, so that the vehicle can be controlled according to the obstacle feature information in the obstacle images. In this domain controller, a synchronization signal can be generated by the main control chip, and exposure of the external cameras can be controlled through the deserializers to capture obstacle images. The image capturing processing chips can therefore acquire and process obstacle images in real time, without calculating the time difference between the moment the image data was collected and the main control chip's clock.
Accordingly, the timeliness of the obtained obstacle information can be improved, and in turn the real-time performance of vehicle travel control can be improved.
The above embodiments of the present disclosure have the following beneficial effects. A domain controller of some embodiments of the present disclosure includes a first deserializer, a second deserializer, a first image capturing processing chip, a second image capturing processing chip and a main control chip, wherein: the main control chip is communicatively connected to a positioning and navigation terminal; the main control chip is communicatively connected to the first deserializer and the second deserializer, respectively, the main control chip is configured to generate an exposure synchronization signal and send it simultaneously to the first deserializer and the second deserializer, and the first deserializer and the second deserializer are configured to control the cameras to acquire obstacle image information according to the exposure synchronization signal; and the main control chip is communicatively connected to the first image capturing processing chip and the second image capturing processing chip, respectively. Accordingly, the domain controller can generate the exposure synchronization signal through the main control chip and send the timestamp signal to each image capturing processing chip; after receiving the exposure synchronization signal sent by the main control chip, each deserializer can synchronously control each connected camera to capture obstacle images; and each processing chip can add timestamp information to the obstacle images received from the deserializers, thereby obtaining obstacle images captured at the same moment. The main control chip can therefore obtain complete obstacle image information for the same moment, which improves the accuracy of image data analysis and, in turn, the safety of vehicle travel.
Referring next to fig. 4, the present disclosure also provides a vehicle travel control method for the domain controller of the above embodiments, as shown in fig. 4, which shows a flowchart 400 of some embodiments of the vehicle travel control method of the present disclosure. The vehicle travel control method may include the steps of:
step 401, acquiring time information and a timestamp control signal.
In some embodiments, the main control chip of the domain controller may obtain the time information and the timestamp control signal from a positioning and navigation system through a universal asynchronous receiver-transmitter interface provided on the main control chip. The time information may be the date and time of the current moment. The timestamp control signal may represent the date and time of the current moment.
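As an illustrative sketch only (not part of the claimed embodiments), the time message read over the universal asynchronous receiver-transmitter interface could be modeled as a date-time string plus a flag; the message format, field names and function below are all assumptions:

```python
from datetime import datetime, timezone

def parse_time_message(raw: str):
    """Parse a hypothetical 'YYYY-MM-DDTHH:MM:SSZ,TS' line from the
    positioning/navigation system into (time_info, timestamp_control)."""
    time_field, flag_field = raw.strip().split(",")
    time_info = datetime.strptime(
        time_field, "%Y-%m-%dT%H:%M:%SZ"
    ).replace(tzinfo=timezone.utc)
    timestamp_control = flag_field == "TS"  # flag telling the chips to stamp frames
    return time_info, timestamp_control

info, ctrl = parse_time_message("2023-08-14T08:30:00Z,TS")
```

In a real domain controller the time source would typically be a GNSS receiver emitting a standardized sentence, which this toy format merely stands in for.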
Step 402, generating an exposure synchronization signal, and sending the exposure synchronization signal to a first deserializer and a second deserializer to control the first deserializer and the second deserializer to acquire first obstacle image information and second obstacle image information, respectively.
In some embodiments, the main control chip may generate an exposure synchronization signal and send the exposure synchronization signal simultaneously to the first deserializer and the second deserializer, so as to control the first deserializer and the second deserializer to acquire the first obstacle image information and the second obstacle image information, respectively. The main control chip may send the exposure synchronization signal to the first deserializer and the second deserializer through an exposure synchronization interface provided on the main control chip. The exposure synchronization signal may indicate that the main control chip intends to acquire the first obstacle image information and the second obstacle image information. The first obstacle image information may include, but is not limited to, at least one of the following: a first obstacle image set. Each first obstacle image in the first obstacle image set may be captured by a corresponding camera connected to the first deserializer. The second obstacle image information may include, but is not limited to, at least one of the following: a second obstacle image set. Each second obstacle image in the second obstacle image set may be captured by a corresponding camera connected to the second deserializer.
Alternatively, the first deserializer and the second deserializer may acquire the first obstacle image information and the second obstacle image information by:
In the first step, the first deserializer, in response to receiving the exposure synchronization signal, controls each camera connected to the first deserializer to capture a first obstacle image at the same time, obtaining a first obstacle image set. The first deserializer may simultaneously send a preset acquisition start command to each camera connected to the first deserializer, so as to control these cameras to capture first obstacle images at the same time. Here, the preset acquisition start command may indicate that the first deserializer intends to acquire first obstacle images.
And a second step in which the first deserializer determines the first obstacle image set as the first obstacle image information.
Third, in response to receiving the exposure synchronization signal, the second deserializer controls each camera connected to the second deserializer to capture a second obstacle image at the same time, obtaining a second obstacle image set. The second deserializer may simultaneously send a preset acquisition start command to each camera connected to the second deserializer, so as to control each camera connected to the second deserializer to capture a second obstacle image at the same time. Here, the preset acquisition start command may indicate that the second deserializer intends to acquire second obstacle images.
Fourth, the second deserializer determines the second obstacle image set as the second obstacle image information.
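The four steps above can be condensed into the following toy software model. All names are illustrative assumptions; real deserializers trigger their attached cameras electrically, not through Python callables:

```python
class Deserializer:
    """Toy model of a deserializer that, on an exposure synchronization
    signal, triggers all attached cameras at once."""
    def __init__(self, cameras):
        self.cameras = cameras

    def on_exposure_sync(self):
        # Broadcast the preset acquisition start command to every camera
        # "simultaneously", then gather one obstacle image from each.
        return [capture() for capture in self.cameras]

# Cameras are stand-ins that return a frame identifier.
first = Deserializer([lambda: "cam0-frame", lambda: "cam1-frame"])
second = Deserializer([lambda: "cam2-frame", lambda: "cam3-frame"])

first_obstacle_image_info = first.on_exposure_sync()
second_obstacle_image_info = second.on_exposure_sync()
```

Because both deserializers react to the same synchronization signal, both obstacle image sets notionally correspond to one exposure instant.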
Step 403, sending the time information and the timestamp control signal to the first image capturing processing chip and the second image capturing processing chip to control the first image capturing processing chip and the second image capturing processing chip to acquire the first obstacle image information and the second obstacle image information from the first deserializer and the second deserializer respectively at the same time, and updating the first obstacle image information and the second obstacle image information to obtain the first synchronous obstacle image information and the second synchronous obstacle image information.
In some embodiments, the main control chip may send the time information and the timestamp control signal to the first image capturing processing chip and the second image capturing processing chip, so as to control the first image capturing processing chip and the second image capturing processing chip to simultaneously acquire the first obstacle image information and the second obstacle image information from the first deserializer and the second deserializer, respectively, and to update the first obstacle image information and the second obstacle image information to obtain the first synchronous obstacle image information and the second synchronous obstacle image information. The time information and the timestamp control signal may be sent simultaneously to the first image capturing processing chip and the second image capturing processing chip through a master serial peripheral interface provided on the main control chip. Then, in response to receiving the timestamp control signal, the first image capturing processing chip may acquire the first obstacle image information from the first deserializer through a first mobile processor interface provided on the first image capturing processing chip, and may then add the time information to the first obstacle image information to obtain the first synchronous obstacle image information. Likewise, in response to receiving the timestamp control signal, the second image capturing processing chip may acquire the second obstacle image information from the second deserializer through a second mobile processor interface provided on the second image capturing processing chip, and may then add the time information to the second obstacle image information to obtain the second synchronous obstacle image information.
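The timestamp update performed by each image capturing processing chip can be pictured as attaching one shared time value to every frame in a batch; the dictionary structure below is an assumption made purely for illustration:

```python
def add_time_info(obstacle_image_info, time_info):
    """Attach the master chip's time information to every obstacle image,
    yielding 'synchronous' obstacle image information (structure assumed)."""
    return [{"image": img, "timestamp": time_info} for img in obstacle_image_info]

first_sync = add_time_info(["cam0-frame", "cam1-frame"], "2023-08-14T08:30:00Z")
second_sync = add_time_info(["cam2-frame", "cam3-frame"], "2023-08-14T08:30:00Z")
```

Because both chips stamp their frames with the same time information, frames from different deserializers can later be matched as having been captured at the same moment.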
Step 404, acquiring first synchronous obstacle image information and second synchronous obstacle image information from the first image capturing processing chip and the second image capturing processing chip, and performing feature fusion processing on the first synchronous obstacle image information and the second synchronous obstacle image information to obtain obstacle feature information.
In some embodiments, the main control chip may acquire the first synchronization obstacle image information and the second synchronization obstacle image information from the first image capturing processing chip and the second image capturing processing chip, and perform feature fusion processing on the first synchronization obstacle image information and the second synchronization obstacle image information to obtain the obstacle feature information. The first synchronous obstacle image information and the second synchronous obstacle image information can be obtained from the first image pickup processing chip and the second image pickup processing chip through a main control serial peripheral interface arranged on the main control chip.
In some optional implementations of some embodiments, the feature fusion processing is performed on the first synchronous obstacle image information and the second synchronous obstacle image information by the main control chip to obtain obstacle feature information, and the method may include the following steps:
In the first step, each first obstacle image in the first obstacle image set included in the first synchronized obstacle image information and each second obstacle image in the second obstacle image set included in the second synchronized obstacle image information are determined as an obstacle image set.
And secondly, inputting each obstacle image in the obstacle image set into a pre-trained obstacle feature fusion network to generate obstacle image feature information, and obtaining an obstacle image feature information set. Wherein the pre-trained obstacle feature fusion network may include, but is not limited to, at least one of: the system comprises a position information extraction module, a semantic information extraction module, an attention module and a channel fusion module.
Specifically, the location information extraction module may include, but is not limited to, at least one of the following: a first position convolution layer, a second position convolution layer, and a third position convolution layer. The first position convolution layer, the second position convolution layer, and the third position convolution layer may be configured to perform convolution operations.
The semantic information extraction module may include, but is not limited to, at least one of: a first semantic convolution layer and a second semantic convolution layer. The first semantic convolution layer and the second semantic convolution layer may be configured to perform convolution operations.
The attention module may include, but is not limited to, at least one of the following: an average pooling layer, an attention activation layer, a first attention convolution layer, a second attention convolution layer, a third attention convolution layer, a first attention normalization layer, and a second attention normalization layer. The average pooling layer described above may be used to perform the average pooling operation. The first attention convolution layer, the second attention convolution layer, and the third attention convolution layer may be configured to perform convolution operations.
As an example, the above-mentioned attention-activating layer may be a ReLU (Rectified Linear Unit, linear rectifying function) function. The first attention normalization layer and the second attention normalization layer may be Sigmoid (normalization) functions.
The channel fusion module may include, but is not limited to, at least one of the following: a first dimension-increasing convolution layer, a second dimension-increasing convolution layer, a first dimension-reducing convolution layer, a second dimension-reducing convolution layer, a first depth separable convolution layer, a second depth separable convolution layer, a third depth separable convolution layer, a first upsampling layer, a second upsampling layer, a fusion average pooling layer, a first normalization layer, a second normalization layer and a batch normalization layer. Here, the first upsampling layer and the second upsampling layer may be used to perform upsampling operations. The first dimension-reducing convolution layer and the second dimension-reducing convolution layer may be used to perform dimension-reducing convolution operations. The first depth separable convolution layer, the second depth separable convolution layer, and the third depth separable convolution layer may be used to perform depth separable convolution operations. The fusion average pooling layer described above may be used to perform the average pooling operation. The batch normalization layer described above may be used to perform batch normalization operations. The batch normalization layer may include, but is not limited to, at least one of the following: a fully connected layer and a convolution layer.
As an example, the first normalization layer and the second normalization layer may be Sigmoid functions.
And a third step of determining the obstacle image feature information set as the obstacle feature information.
In some optional implementations of some embodiments, inputting each obstacle image in the obstacle image set into the pre-trained obstacle feature fusion network by the main control chip to generate the obstacle image feature information may include the following steps:
the first step is to input the obstacle image into a position information extraction module included in the obstacle feature fusion network to obtain the obstacle position information. The first position convolution layer, the second position convolution layer and the third position convolution layer included in the position information extraction module may sequentially perform convolution operation on the obstacle image to obtain the obstacle position information. The above obstacle location information may include, but is not limited to, at least one of: obstacle position vector.
And secondly, inputting the obstacle position information into an attention module included in the obstacle feature fusion network to obtain the obstacle position feature information. Firstly, an average pooling layer, a first attention convolution layer, an attention activation layer, a second attention convolution layer and a first attention normalization layer included in the attention module sequentially perform pooling operation, convolution operation, activation operation and normalization operation on an obstacle position vector included in the obstacle position information to obtain a first attention position vector. Then, the product of the obstacle position vector and the first attention position vector may be determined as a second attention position vector. Then, the third attention convolution layer and the second attention normalization layer included in the attention module may sequentially perform a convolution operation and a normalization operation on the second attention position vector to obtain a third attention position vector. Finally, a product of the second attention position vector and the third attention position vector may be determined as the obstacle position feature information. The obstacle location characteristic information may include, but is not limited to, at least one of: obstacle position feature vectors.
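A drastically simplified, scalar version of this attention flow is sketched below; the 1x1 convolutions are modeled as per-element scaling with assumed weights, so the sketch only illustrates the order of operations (pooling, convolution, ReLU, convolution, sigmoid, then the two element-wise products), not the real layer shapes:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def relu(x: float) -> float:
    return max(0.0, x)

def attention(position_vec, w1=0.5, w2=0.5, w3=0.5):
    """Toy attention: pool -> conv -> ReLU -> conv -> sigmoid yields a gate;
    the gated input is then gated again by a conv + sigmoid branch."""
    pooled = sum(position_vec) / len(position_vec)  # average pooling
    gate = sigmoid(w2 * relu(w1 * pooled))          # first attention position gate
    second = [v * gate for v in position_vec]       # second attention position vector
    third = [sigmoid(w3 * v) for v in second]       # third attention position vector
    return [a * b for a, b in zip(second, third)]   # obstacle position feature vector

feat = attention([1.0, 2.0, 3.0])
```

The same toy flow would apply unchanged when the module is reused on the obstacle semantic information in the fourth step.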
And thirdly, inputting the obstacle position information into a semantic information extraction module included in the obstacle feature fusion network to obtain obstacle semantic information. The first semantic convolution layer and the second semantic convolution layer included in the semantic information extraction module may sequentially perform convolution operations on the obstacle position information to obtain the obstacle semantic information.
And fourthly, inputting the obstacle semantic information into the attention module included in the obstacle feature fusion network to obtain obstacle semantic feature information. For the specific implementation of obtaining the obstacle semantic feature information and its technical effects, reference may be made to step 404 in the above embodiments, which is not repeated here. The obstacle semantic feature information may include, but is not limited to, at least one of the following: an obstacle semantic feature vector.
And fifthly, inputting the obstacle position feature information and the obstacle semantic feature information into the channel fusion module included in the obstacle feature fusion network to obtain the obstacle image feature information. The channel fusion module may perform the following fusion substeps on the obstacle position feature information and the obstacle semantic feature information:
And a first sub-step, wherein the first dimension-increasing convolution layer and the first depth separable convolution layer included in the channel fusion module may sequentially perform a dimension-increasing convolution operation and a depth separable convolution operation on the obstacle position feature vector included in the obstacle position feature information to obtain a first position fusion vector.
And a second sub-step, wherein the first dimension-reducing convolution layer and the first upsampling layer included in the channel fusion module may sequentially perform a dimension-reducing convolution operation and an upsampling operation on the obstacle semantic feature vector included in the obstacle semantic feature information to obtain a first semantic fusion vector.
And a third sub-step, wherein the second depth separable convolution layer and the fusion average pooling layer included in the channel fusion module may sequentially perform a depth separable convolution operation and an average pooling operation on the sum of the first position fusion vector and the first semantic fusion vector to obtain a first feature fusion vector.
And a fourth sub-step, wherein the channel fusion module comprises a second upsampling layer, a second dimension-reducing convolution layer and a first normalization layer, and the upsampling operation, the dimension-reducing convolution operation and the normalization operation can be sequentially performed on the first feature fusion vector to obtain a second position fusion vector.
And a fifth substep, wherein the third depth separable convolution layer, the second dimension-increasing convolution layer and the second normalization layer included in the channel fusion module may sequentially perform a depth separable convolution operation, a dimension-increasing convolution operation and a normalization operation on the first feature fusion vector to obtain a second semantic fusion vector.
And a sixth substep, the channel fusion module may determine a product of the second position fusion vector and the obstacle position feature vector as a third position fusion vector.
And a seventh sub-step, wherein the channel fusion module may determine a product of the second semantic fusion vector and the obstacle semantic feature vector as a third semantic fusion vector.
And an eighth substep, wherein the batch normalization layer included in the channel fusion module may perform a batch normalization operation on the sum of the third position fusion vector and the third semantic fusion vector to obtain the obstacle image feature information.
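The eight fusion substeps can be condensed into the following toy sketch on plain vectors. Every convolution, upsampling and pooling step is reduced to an identity or a sigmoid gate (all assumptions), so only the data flow of the two branches, the shared trunk and the final sum is preserved:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def channel_fusion(pos_feat, sem_feat):
    """Toy data flow of substeps 1-8; real layers are replaced by
    identities or sigmoid gates, and the batch normalization is omitted."""
    p1 = list(pos_feat)                             # substep 1: position branch
    s1 = list(sem_feat)                             # substep 2: semantic branch
    trunk = [a + b for a, b in zip(p1, s1)]         # substep 3: first feature fusion vector
    gate_p = [sigmoid(v) for v in trunk]            # substep 4: second position fusion vector
    gate_s = [sigmoid(v) for v in trunk]            # substep 5: second semantic fusion vector
    p3 = [g * v for g, v in zip(gate_p, pos_feat)]  # substep 6: third position fusion vector
    s3 = [g * v for g, v in zip(gate_s, sem_feat)]  # substep 7: third semantic fusion vector
    return [a + b for a, b in zip(p3, s3)]          # substep 8, without the batch norm

fused = channel_fusion([1.0, 2.0], [0.5, 0.5])
```

The design point preserved here is that both branches are gated by information from the shared trunk before being summed, so neither the low-dimensional nor the high-dimensional branch is discarded.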
Alternatively, the obstacle feature fusion network may be trained by:
first, a training sample set and an initial obstacle feature fusion network are obtained. Wherein each training sample in the training sample set comprises: sample obstacle image and sample obstacle image feature information, the above initial obstacle feature fusion network comprises: the system comprises an initial position information extraction module, an initial semantic information extraction module, an initial attention module and an initial channel fusion module. The master control chip can acquire the training sample set and the initial obstacle characteristic fusion network from the storage terminal in a wired connection or wireless connection mode. The storage terminal may be a terminal for storing a training sample set and an initial obstacle characteristic fusion network.
Secondly, selecting training samples from the training sample set, and executing the following training substeps:
the first substep inputs a sample obstacle image included in a training sample to an initial position information extraction module included in an initial obstacle feature fusion network to obtain initial obstacle position information. The specific implementation of obtaining the initial obstacle position information and the technical effects thereof may refer to step 404 in the foregoing embodiment, which is not described herein again.
And a second sub-step of inputting the initial obstacle position information to an initial attention module included in the initial obstacle feature fusion network to obtain initial obstacle position feature information. The specific implementation of obtaining the initial obstacle position feature information and the technical effects thereof may refer to step 404 in the foregoing embodiment, which is not described herein again.
And a third sub-step of determining a first sample difference value between the initial obstacle position characteristic information and sample obstacle image characteristic information included in the training sample based on the first preset loss function.
As an example, the first predetermined loss function may be a cross entropy loss function.
And a fourth sub-step of inputting the initial obstacle position information to an initial semantic information extraction module included in the initial obstacle feature fusion network to obtain initial obstacle semantic information. The specific implementation of obtaining the initial obstacle semantic information and the technical effects thereof may refer to step 404 in the foregoing embodiment, which is not described herein again.
And a fifth substep, inputting the initial obstacle semantic information to an initial attention module included in the initial obstacle feature fusion network to obtain the initial obstacle semantic feature information. The specific implementation of obtaining the semantic feature information of the initial obstacle and the technical effects thereof may refer to step 404 in the foregoing embodiment, which is not described herein again.
And a sixth sub-step of determining a second sample difference value between the initial obstacle semantic feature information and sample obstacle image feature information included in the training sample based on a second preset loss function.
As an example, the second predetermined loss function may be a cross entropy loss function.
And a seventh substep, inputting the initial obstacle position characteristic information and the initial obstacle semantic characteristic information into an initial channel fusion module included in the initial obstacle characteristic fusion network to obtain initial obstacle image characteristic information. The specific implementation of obtaining the initial obstacle image feature information and the technical effects thereof may refer to step 404 in the foregoing embodiment, which is not described herein again.
And an eighth substep of determining a third sample difference value between the initial obstacle image characteristic information and sample obstacle image characteristic information included in the training sample based on a third preset loss function.
As an example, the third predetermined loss function may be a cross entropy loss function.
And a ninth substep of determining a sum of the first sample difference value, the second sample difference value and the third sample difference value as an obstacle feature fusion difference value.
And a tenth substep of determining the initial obstacle characteristic fusion network as an obstacle characteristic fusion network in response to determining that the obstacle characteristic fusion discrepancy value is less than the target value.
Optionally, the master control chip may further adjust related parameters in the initial obstacle feature fusion network in response to determining that the obstacle feature fusion difference value is greater than or equal to the target value, determine the adjusted initial obstacle feature fusion network as the initial obstacle feature fusion network, and select a training sample from the training sample set, so as to execute the training step again. The related parameters in the initial obstacle characteristic fusion network can be adjusted through a preset adjusting algorithm.
As an example, the above-mentioned preset adjustment algorithm may be a gradient descent algorithm.
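Putting the training substeps together, the loop below sums the three branch losses into one fusion difference value, stops when it falls below the target value, and otherwise applies one gradient descent step. The quadratic placeholder losses stand in for the three cross-entropy losses and are an assumption made for illustration:

```python
def train(params, target_value, lr=0.1, max_iters=100):
    """Sketch of the training loop: the three per-branch losses are summed
    into the obstacle feature fusion difference value; training stops when
    it is below the target, else the parameter takes a gradient descent step."""
    diff = float("inf")
    for _ in range(max_iters):
        # Placeholder losses for the position, semantic and fused branches,
        # all pulling the single parameter toward the optimum 1.0.
        l1 = l2 = l3 = (params[0] - 1.0) ** 2
        diff = l1 + l2 + l3               # obstacle feature fusion difference value
        if diff < target_value:           # tenth substep: accept the network
            break
        grad = 3 * 2 * (params[0] - 1.0)  # gradient of the summed loss
        params = [params[0] - lr * grad]  # preset adjustment: gradient descent
    return params, diff

trained, final_diff = train([0.0], 1e-4)
```

With this learning rate the parameter converges geometrically toward 1.0, so the loop terminates well before the iteration cap.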
The above-mentioned related content of step 404 is taken as an invention point of the embodiment of the present disclosure, and solves the fourth technical problem of "the accuracy of the vehicle running control is reduced" set forth in the background art. Factors that cause the degree of safety of the vehicle running to be reduced tend to be as follows: in the process of extracting semantic information from an image, the bottom layer characteristics are easy to lose, so that the accuracy of the extracted barrier information is reduced. If the above factors are solved, the accuracy of the vehicle running control can be improved. In order to achieve the effect, the method can obtain low-dimensional information in the obstacle image through the position information extraction module included in the obstacle feature fusion network. And obtaining high-dimensional information in the obstacle image through a semantic information extraction module included in the obstacle feature fusion network. Then, the attention module included in the obstacle feature fusion network can be used for extracting low-dimension feature information which can characterize the obstacle feature from the low-latitude information and high-dimension feature information which can characterize the obstacle feature from the high-latitude information. Finally, the low-dimensional characteristic information and the high-dimensional characteristic information can be fused through a channel fusion module included in the obstacle characteristic fusion network, so that obstacle image characteristic information in an obstacle image is obtained. 
Therefore, compared with extracting only low-dimensional information or only high-dimensional information, extracting and fusing the features of both the low-dimensional information and the high-dimensional information can improve the accuracy of the obtained obstacle image feature information, and thus the accuracy of vehicle running control can be improved.
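The four modules named above can be illustrated with a toy, pure-Python sketch. The module internals (mean pooling for the position module, max pooling for the semantic module, softmax attention, concatenation for channel fusion) are assumptions chosen only to show how low-dimensional and high-dimensional features are extracted, weighted, and fused; they are not the layers of the disclosed network.

```python
# Toy sketch of the four modules of the obstacle feature fusion network,
# using plain Python lists in place of real tensors. All internals here
# are illustrative assumptions, not the patented architecture.
import math

def extract_low_dim(image):            # position information extraction module
    """Low-dimensional cues, e.g. coarse per-row position statistics."""
    return [sum(row) / len(row) for row in image]

def extract_high_dim(image):           # semantic information extraction module
    """Higher-level cues, e.g. per-column responses of a deeper layer."""
    return [max(col) for col in zip(*image)]

def attend(features):                  # attention module
    """Softmax-weight the features so salient entries dominate."""
    exps = [math.exp(f) for f in features]
    total = sum(exps)
    return [f * e / total for f, e in zip(features, exps)]

def channel_fusion(low, high):         # channel fusion module
    """Fuse along the channel axis by concatenation."""
    return low + high

def obstacle_feature_info(image):
    low = attend(extract_low_dim(image))
    high = attend(extract_high_dim(image))
    return channel_fusion(low, high)

image = [[0.1, 0.9], [0.4, 0.2]]       # a 2x2 stand-in obstacle image
print(obstacle_feature_info(image))    # fused low- plus high-dimensional features
```

The key structural point survives the simplification: the low-dimensional branch is kept alongside the semantic branch all the way to the fusion step, so the low-level features are not lost during semantic extraction.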
Step 405, the obstacle characteristic information is sent to a positioning navigation terminal to control the target vehicle to run.
In some embodiments, the main control chip may send the obstacle characteristic information to the positioning navigation terminal to control the target vehicle to travel. The obstacle characteristic information may be sent to the positioning navigation terminal through a universal asynchronous receiving and transmitting (UART) interface provided on the main control chip, so as to control the running of the target vehicle. The positioning navigation terminal may control the target vehicle to travel according to the obstacle characteristic information. The target vehicle may be a vehicle that is currently traveling.
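A hypothetical wire format for that UART transfer is sketched below. The header byte, additive checksum, and float field layout are assumptions introduced for illustration, since the disclosure does not specify how the obstacle characteristic information is framed on the link.

```python
# Hypothetical framing of obstacle characteristic information for the
# UART link to the positioning navigation terminal. Header value, field
# layout, and checksum are assumptions; the patent gives no wire format.
import struct

HEADER = 0xA5  # assumed start-of-frame marker

def pack_obstacle_frame(features):
    """Serialize float features as: header, count, little-endian floats, checksum."""
    body = struct.pack("<BB%df" % len(features), HEADER, len(features), *features)
    checksum = sum(body) & 0xFF        # simple additive checksum (assumption)
    return body + struct.pack("<B", checksum)

def unpack_obstacle_frame(frame):
    """Validate and decode a frame produced by pack_obstacle_frame."""
    body, checksum = frame[:-1], frame[-1]
    assert sum(body) & 0xFF == checksum, "corrupted frame"
    header, count = struct.unpack_from("<BB", body)
    assert header == HEADER, "bad start-of-frame marker"
    return list(struct.unpack_from("<%df" % count, body, 2))

frame = pack_obstacle_frame([1.5, -0.25, 3.0])
print(unpack_obstacle_frame(frame))    # round-trips the feature values
```

In practice the bytes would be written to the UART device (e.g. a serial port) instead of being decoded locally; the round trip here only demonstrates that the assumed framing is self-consistent.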
The vehicle running control method is taken as an invention point of the embodiments of the present disclosure, and solves technical problem three set forth in the background art, namely that "the real-time performance of the vehicle running control is reduced". The factors that cause the real-time performance of the vehicle running control to be reduced tend to be as follows: in an external synchronization time compensation scheme, the frequencies at which different deserializers acquire images differ, so that it is difficult for the main control chip to acquire the image data captured by each camera in real time, which reduces the real-time performance of the obtained obstacle information. If the above factors are addressed, the real-time performance of the vehicle running control can be improved. To achieve this, the domain controller included in the present disclosure may first acquire time information and a timestamp control signal. Secondly, an exposure synchronizing signal is generated and simultaneously sent to the first deserializer and the second deserializer, so as to control the first deserializer and the second deserializer to acquire first obstacle image information and second obstacle image information respectively. In this way, the first deserializer and the second deserializer can simultaneously control the connected cameras to acquire the obstacle images.
Then, the time information and the timestamp control signal are sent to the first image pickup processing chip and the second image pickup processing chip, so as to control the two chips to simultaneously acquire the first obstacle image information and the second obstacle image information from the first deserializer and the second deserializer respectively, and to update the first obstacle image information and the second obstacle image information to obtain first synchronous obstacle image information and second synchronous obstacle image information. In this way, each image pickup processing chip can add timestamp information to all the simultaneously acquired obstacle images, to indicate that they were acquired at the same moment. Next, the first synchronous obstacle image information and the second synchronous obstacle image information are acquired from the first and second image pickup processing chips, and feature fusion processing is performed on them to obtain obstacle feature information. Thus, the feature information in each obstacle image can be obtained. Finally, the obstacle feature information is sent to the positioning navigation terminal to control the target vehicle to travel, so that the vehicle can be controlled in accordance with the obstacle feature information in the obstacle images. In this domain controller, therefore, a synchronizing signal can be generated by the main control chip, and the exposure of the external cameras is controlled through the deserializers to acquire the obstacle images. The image pickup processing chips can thus acquire and process the obstacle images in real time, without calculating the time difference between the moment the image data was collected and the clock of the main control chip.
Therefore, the real-time performance of the obtained obstacle information can be improved, and further, the real-time performance of the vehicle running control can be improved.
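The synchronization flow summarized above can be simulated in a few lines. The class names, camera counts, and millisecond clock value below are illustrative assumptions; the sketch only demonstrates the property the method relies on, namely that one exposure synchronizing signal plus one shared time value yields identically timestamped images from both channels.

```python
# Minimal simulation of the synchronization flow: one exposure
# synchronizing signal triggers both deserializers at once, and both
# camera processing chips stamp the captured images with the same
# timestamp derived from the shared time information. All names and the
# millisecond clock value are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Deserializer:
    name: str
    cameras: int
    def on_exposure_sync(self):
        """All attached cameras capture simultaneously on the sync edge."""
        return ["%s_cam%d_image" % (self.name, i) for i in range(self.cameras)]

@dataclass
class CameraProcessingChip:
    time_info_ms: int                   # time information from the terminal
    stamped: list = field(default_factory=list)
    def ingest(self, images):
        """Attach the shared timestamp to every image acquired on the sync."""
        self.stamped = [(img, self.time_info_ms) for img in images]
        return self.stamped

time_info_ms = 1_691_971_200_000        # shared clock value (assumed)
des1, des2 = Deserializer("des1", 2), Deserializer("des2", 2)
chip1, chip2 = CameraProcessingChip(time_info_ms), CameraProcessingChip(time_info_ms)

# The main control chip emits one exposure sync signal to both deserializers:
s1 = chip1.ingest(des1.on_exposure_sync())
s2 = chip2.ingest(des2.on_exposure_sync())
print(all(ts == time_info_ms for _, ts in s1 + s2))  # → True
```

Because every image carries the same timestamp, downstream fusion never needs to compensate for per-deserializer acquisition frequency differences, which is the real-time advantage claimed above.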
The foregoing description is merely of preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, solutions in which the above features are replaced with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (8)

1. A domain controller, comprising: a first deserializer, a second deserializer, a first image pickup processing chip, a second image pickup processing chip, and a main control chip, wherein:
the main control chip is in communication connection with the positioning navigation terminal, wherein the main control chip is used for acquiring time information and a time stamp signal from the positioning navigation terminal;
the main control chip is respectively in communication connection with the first deserializer and the second deserializer, wherein the main control chip is used for generating an exposure synchronous signal and simultaneously sending the exposure synchronous signal to the first deserializer and the second deserializer, and the first deserializer and the second deserializer are used for controlling cameras to acquire obstacle image information according to the exposure synchronous signal;
the main control chip is respectively in communication connection with the first image pickup processing chip and the second image pickup processing chip, wherein the main control chip is used for sending the time information and the time stamp signal to the first image pickup processing chip and the second image pickup processing chip, and the first image pickup processing chip and the second image pickup processing chip are used for generating time stamp information according to the time stamp signal.
2. The domain controller of claim 1, wherein the first camera processing chip is communicatively coupled to the first deserializer, wherein the first camera processing chip is configured to obtain first obstacle image information from the first deserializer, and to add the timestamp information to the first obstacle image information;
the second image pickup processing chip is in communication connection with the second deserializer, wherein the second image pickup processing chip is used for acquiring second obstacle image information from the second deserializer and adding the timestamp information into the second obstacle image information.
3. The domain controller of claim 2, wherein a main control serial peripheral interface is provided on the main control chip;
a first camera serial peripheral interface is provided on the first image pickup processing chip;
a second camera serial peripheral interface is provided on the second image pickup processing chip;
the main control chip is in communication connection with the first image pickup processing chip through the main control serial peripheral interface and the first camera serial peripheral interface, wherein the main control serial peripheral interface and the first camera serial peripheral interface are used for transmitting the time information, the time stamp signal and the first obstacle image information;
the main control chip is in communication connection with the second image pickup processing chip through the main control serial peripheral interface and the second camera serial peripheral interface, wherein the main control serial peripheral interface and the second camera serial peripheral interface are used for transmitting the time information, the time stamp signal and the second obstacle image information.
4. The domain controller of claim 1, wherein the main control chip is provided with an exposure synchronous interface and a universal asynchronous receiving and transmitting interface;
the main control chip is respectively in communication connection with the first deserializer and the second deserializer through the exposure synchronous interface, wherein the exposure synchronous interface is used for transmitting the exposure synchronous signals;
the main control chip is in communication connection with the positioning navigation terminal through the universal asynchronous receiving and transmitting interface, wherein the universal asynchronous receiving and transmitting interface is used for transmitting the time information and the time stamp signal.
5. The domain controller of claim 2, wherein the first deserializer is communicatively coupled to a first predetermined number of cameras;
the second deserializer is in communication connection with a second preset number of cameras;
a first mobile processor interface is provided on the first image pickup processing chip;
the first deserializer is in communication connection with the first image pickup processing chip through the first mobile processor interface, wherein the first mobile processor interface is used for transmitting the first obstacle image information;
a second mobile processor interface is provided on the second image pickup processing chip;
the second deserializer is in communication connection with the second image pickup processing chip through the second mobile processor interface, wherein the second mobile processor interface is used for transmitting the second obstacle image information.
6. A vehicle running control method for the domain controller according to any one of claims 1 to 5, comprising:
acquiring time information and a time stamp control signal;
generating an exposure synchronizing signal and simultaneously transmitting the exposure synchronizing signal to a first deserializer and a second deserializer to control the first deserializer and the second deserializer to acquire first obstacle image information and second obstacle image information respectively;
transmitting the time information and the timestamp control signal to a first image pickup processing chip and a second image pickup processing chip to control the first image pickup processing chip and the second image pickup processing chip to acquire the first obstacle image information and the second obstacle image information from the first deserializer and the second deserializer respectively at the same time, and updating the first obstacle image information and the second obstacle image information to acquire first synchronous obstacle image information and second synchronous obstacle image information;
acquiring the first synchronous obstacle image information and the second synchronous obstacle image information from the first image pickup processing chip and the second image pickup processing chip, and performing feature fusion processing on the first synchronous obstacle image information and the second synchronous obstacle image information to obtain obstacle feature information;
and sending the obstacle characteristic information to a positioning navigation terminal to control the target vehicle to run.
7. The method of claim 6, wherein the first and second deserializers acquire first and second obstacle image information by:
the first deserializer responds to the received exposure synchronous signal, and controls each camera connected with the first deserializer to simultaneously acquire a first obstacle image to obtain a first obstacle image set;
the first deserializer determining the first obstacle image set as the first obstacle image information;
the second deserializer responds to the received exposure synchronous signal, and controls each camera connected with the second deserializer to simultaneously acquire a second obstacle image to obtain a second obstacle image set;
the second deserializer determines the second obstacle image set as the second obstacle image information.
8. The method of claim 6, wherein the first synchronous obstacle image information comprises: a first obstacle image set, and the second synchronous obstacle image information comprises: a second obstacle image set; and
the performing of feature fusion processing on the first synchronous obstacle image information and the second synchronous obstacle image information to obtain the obstacle feature information comprises:
determining each first obstacle image in a first obstacle image set included in the first synchronous obstacle image information and each second obstacle image in a second obstacle image set included in the second synchronous obstacle image information as an obstacle image set;
inputting each obstacle image in the obstacle image set into a pre-trained obstacle feature fusion network to generate obstacle image feature information, and obtaining an obstacle image feature information set;
and determining the obstacle image characteristic information set as the obstacle characteristic information.
CN202311014651.1A 2023-08-14 2023-08-14 Domain controller and vehicle running control method Active CN116743937B (en)


Publications (2)

Publication Number Publication Date
CN116743937A true CN116743937A (en) 2023-09-12
CN116743937B CN116743937B (en) 2023-10-27


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112351915A (en) * 2020-04-24 2021-02-09 上海商汤临港智能科技有限公司 Vehicle and cabin zone controller
CN112596417A (en) * 2020-11-06 2021-04-02 禾多科技(北京)有限公司 Automatic driving operation domain controller and control system
CN114143415A (en) * 2021-12-10 2022-03-04 安徽酷哇机器人有限公司 Multi-channel video signal processing board and processing method
CN114500766A (en) * 2021-12-30 2022-05-13 中智行(上海)交通科技有限公司 GMSL camera time synchronization control method for automatic driving
CN115129023A (en) * 2021-03-26 2022-09-30 华为技术有限公司 Controller system and control method
US20220324481A1 (en) * 2019-12-24 2022-10-13 Huawei Technologies Co., Ltd. Method and apparatus for planning vehicle trajectory, intelligent driving domain controller, and intelligent vehicle
CN115376347A (en) * 2022-10-26 2022-11-22 禾多科技(北京)有限公司 Intelligent driving area controller and vehicle control method
CN115866320A (en) * 2022-11-03 2023-03-28 深圳市德驰微视技术有限公司 Method and system for quickly starting car backing image function
CN116567182A (en) * 2023-05-05 2023-08-08 广州小鹏汽车科技有限公司 Domain control system, domain control method, vehicle, and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant