CN118400489A - Data processing system, data processing method and vehicle - Google Patents


Info

Publication number
CN118400489A
Authority
CN
China
Prior art keywords
video data
controller
data
central controller
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310434285.9A
Other languages
Chinese (zh)
Inventor
韩冰 (Han Bing)
嵇家刚 (Ji Jiagang)
许远坤 (Xu Yuankun)
黄哲文 (Huang Zhewen)
孙有春 (Sun Youchun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BYD Co Ltd
Original Assignee
BYD Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BYD Co Ltd
Priority to CN202310434285.9A
Publication of CN118400489A
Legal status: Pending


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/66: Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N 23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a data processing system, a data processing method and a vehicle, relating to the field of data processing technology. In the system, a camera performs a shooting action to generate video data and sends the video data to an area controller; the area controller performs recognition processing on the video data to obtain a recognition result and sends the recognition result to the central controller; and the central controller generates corresponding instructions based on the recognition result. By introducing the area controller, the cameras in the vehicle are connected to the central controller through the area controllers, which reduces the wire-harness length. In addition, because the area controller performs the recognition processing on the video data captured by the cameras, the central controller can directly execute the corresponding control operation from the recognition result, and the area controller does not need to transmit the complete video data to the central controller, which reduces the required data transmission bandwidth.

Description

Data processing system, data processing method and vehicle
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a data processing system, a data processing method, and a vehicle.
Background
In the related art, a plurality of cameras are usually disposed in a vehicle, and each camera sends the data it captures to the vehicle's central computing unit/central control unit, so that the central computing unit can complete corresponding operations with the received video data, such as displaying an exterior panoramic image.
Because the cameras are generally connected directly to the central computing unit, the connecting cables between the cameras and the central computing unit are long, making the in-vehicle wire harness long; and because every camera must transmit its data to the central computing unit, a high in-vehicle data transmission bandwidth is needed to meet the real-time transmission requirements.
Disclosure of Invention
The application provides a data processing system, a data processing method and a vehicle.
The application provides a data processing system, which comprises a central controller, a plurality of area controllers and a plurality of cameras, wherein each area controller is connected with at least one camera, and each area controller is also connected with the central controller;
the camera is used for executing a shooting action to generate video data and sending the video data to the area controller;
the area controller is used for performing recognition processing on the received video data to obtain a recognition result and sending the recognition result to the central controller, wherein the recognition result comprises the position and category of each object outside the vehicle;
the central controller is used for generating corresponding instructions based on the received recognition result.
In the data processing system provided by the application, the camera performs a shooting action to generate video data and sends the video data to the area controller; the area controller performs recognition processing on the video data to recognize the position and category of each object outside the vehicle, obtains a recognition result, and sends the recognition result to the central controller; and the central controller generates corresponding instructions based on the recognition result.
Therefore, by introducing the area controller, the application connects the cameras in the vehicle to the central controller through the area controllers, reducing the wire-harness length; in addition, because the area controller performs the recognition processing on the video data captured by the cameras to obtain the recognition result, the central controller can directly execute the corresponding control operation from the recognition result, the area controller does not need to transmit the complete video data to the central controller, and the central controller does not need to perform the video recognition operation itself, reducing both the data transmission bandwidth and the load on the central controller.
In some embodiments, the plurality of cameras are divided into a plurality of combinations according to image-capturing direction, each combination comprising at least one surround-view camera and at least one look-around camera that capture the same vehicle-exterior direction, the surround-view camera and the look-around camera of one combination being connected to the same area controller, and the video data comprising surround-view video data and look-around video data;
the surround-view camera is used for executing a shooting action to generate the surround-view video data and sending the surround-view video data to the area controller;
the look-around camera is used for executing a shooting action to generate the look-around video data and sending the look-around video data to the area controller;
the area controller is further configured to:
perform recognition processing on the received surround-view video data to obtain a recognition result, and send the recognition result to the central controller;
transmit the look-around video data to the central controller;
the central controller is used for:
generating corresponding instructions based on the received look-around video data and the recognition result.
Therefore, the embodiment of the application enables the area controller to transmit both the recognition result and the look-around video data to the central controller, so that the central controller can generate more instructions based on the recognition result and the look-around video data; meanwhile, the area controller completes target recognition based on the higher-precision, distortion-free surround-view video data, ensuring the validity of the recognition result; in addition, because the area controller transmits the recognition result and the look-around video data rather than both raw video streams, the high data-bandwidth requirement of transmitting the surround-view video data and the look-around video data directly to the central controller is avoided, reducing the data-bandwidth pressure in the vehicle.
In certain embodiments, the area controller is further configured to:
check the received surround-view video data and look-around video data respectively to obtain corresponding check results, wherein a check result is either complete, indicating that the video data contains only complete frames, or incomplete, indicating that the video data contains incomplete frames;
if, for the surround-view video data and look-around video data corresponding to the same combination, the check result of the surround-view video data is incomplete and the check result of the look-around video data is complete, delete the surround-view video data and use the look-around video data as new surround-view video data; and
if, for the surround-view video data and look-around video data corresponding to the same combination, the check result of the look-around video data is incomplete and the check result of the surround-view video data is complete, delete the look-around video data and use the surround-view video data as new look-around video data.
In this way, the embodiment of the application avoids the situation in which the area controller and/or the central controller cannot execute the corresponding operation because a single camera is damaged, fails, or encounters an unexpected situation, ensuring the robustness of the data processing system.
In certain embodiments, the area controller is further configured to:
select, for each video frame of the video data, a plurality of first images in the video frame based on a selective search algorithm, each first image being a part of the video frame;
scale each first image so that all the first images have the same size, obtaining a second image corresponding to each first image;
perform feature extraction on each second image using a convolutional neural network to obtain a feature vector of the second image;
input the feature vector into a support vector machine model to obtain a detection result corresponding to the feature vector, the detection result comprising the category of the object in the first image corresponding to the feature vector, the categories comprising lane lines, road boundaries, traffic light symbols, road traffic signs, pedestrians and vehicles;
perform non-maximum suppression on the second images so as to keep one of two or more overlapping second images as a third image;
perform regression processing on the position of the third image in the video frame to obtain the image position of the third image;
obtain the position of the object in the third image based on the distance between the image position of the third image and a preset image center point; and
obtain the recognition result based on the position and category of the object in each third image.
In this way, the embodiment of the application obtains the recognition result corresponding to the video data through a neural network model, ensuring the credibility of the recognition result, so that the instructions generated and the operations executed by the central controller based on the recognition result are more accurate and effective.
In some embodiments, the system further comprises a sensor connected to the area controller, the sensor being used for detecting the speed value of each object and sending the speed value to the area controller, and the recognition result comprises obstacle information and environment information;
the area controller is further configured to:
determine obstacle objects and environmental objects based on the category of the object in each third image;
obtain the obstacle information based on the category, position and speed value of each obstacle object;
obtain the environment information based on the category and position of each environmental object; and
obtain the recognition result based on the environment information and the obstacle information;
the central controller is further configured to:
mark each obstacle object in a preset coordinate system according to the obstacle information in the recognition result to obtain obstacle distribution information; and
generate an automatic driving instruction based on the obstacle distribution information and the environment information, and control the vehicle to run according to the automatic driving instruction.
Therefore, the embodiment of the application distinguishes obstacle objects from environmental objects, so that the central controller can execute more accurate instructions when performing obstacle avoidance, realizing functions such as vehicle-distance detection, pedestrian detection, vehicle detection, non-motor-vehicle detection and lane-route detection.
In some embodiments, the area controller is further configured to compress the video data to obtain compressed data, and send the compressed data to the central controller;
the central controller is further configured to decompress the compressed data to obtain the video data, and generate corresponding instructions based on the video data.
In this way, when transmitting video data to the central controller, the area controller transmits compressed video data rather than the raw, unprocessed video data, reducing the bandwidth resources required for video-data transmission.
In certain embodiments, the system further comprises a display, the central controller being connected to the display;
the central controller is further configured to:
decompressing the received compressed data to obtain the video data;
generating an exterior look-around image according to the video data, and sending the exterior look-around image to the display;
the display is used for displaying the received exterior look-around image.
In this way, the user can observe the conditions outside the vehicle in real time through the exterior look-around image shown on the display, avoiding the observation difficulties caused by the space limitations of the driver's cab and the limited reflection angle of the rear-view mirror.
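By way of a non-limiting illustration only, the sketch below shows one way the central controller might compose such an exterior look-around image from the four direction streams; the ground-plane homographies, canvas size and frame sources are assumptions introduced for the example, not part of the disclosure.

```python
# Non-limiting sketch: composing an exterior look-around image from four
# direction frames. The ground-plane homographies are assumed to come from
# an offline calibration step; names and canvas size are assumptions.
import cv2
import numpy as np

CANVAS_SIZE = (800, 800)  # assumed bird's-eye canvas (width, height) in pixels

def compose_look_around(frames: dict, homographies: dict) -> np.ndarray:
    """frames and homographies are keyed by direction: 'east', 'south', 'west', 'north'."""
    canvas = np.zeros((CANVAS_SIZE[1], CANVAS_SIZE[0], 3), dtype=np.uint8)
    for direction, frame in frames.items():
        warped = cv2.warpPerspective(frame, homographies[direction], CANVAS_SIZE)
        covered = warped.any(axis=2)       # pixels contributed by this view
        canvas[covered] = warped[covered]  # later views overwrite the overlap
    return canvas
```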
In certain embodiments, the area controller is further configured to:
perform image signal processing on the received video data to obtain signal-optimized video data; and
perform recognition processing on the signal-optimized video data to obtain a recognition result, and send the recognition result to the central controller.
In this way, the embodiment of the application completes target detection based on higher-quality video data, improving the precision of target detection; consequently, the instructions that the central controller generates based on the recognition result are more reliable and effective.
In some embodiments, the system further comprises a plurality of serializers, and the cameras are connected to the area controllers through the serializers;
the camera is further used for executing a shooting action to generate video data and sending the video data to the serializer;
the serializer is used for converting the video data into serial data and transmitting the serial data to the area controller;
the area controller is further used for deserializing the received serial data to obtain the video data, performing recognition processing on the video data to obtain a recognition result, and sending the recognition result to the central controller.
In this way, the embodiment of the application serializes the video data, so that data loss during transmission of the video data to the area controller is avoided as far as possible, ensuring effective transmission of the data.
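For illustration, real camera links of this kind typically use hardware serializer/deserializer (SerDes) parts (for example GMSL- or FPD-Link-style chips); the toy sketch below, in which every name is an assumption, only illustrates the framing idea that lets a receiver recover frame boundaries from a serial byte stream without data loss.

```python
# Toy illustration only: real camera links use hardware SerDes (e.g. GMSL or
# FPD-Link). This sketch just shows the framing idea: length-prefixed frames
# let the receiver rebuild frame boundaries from a serial byte stream.
import struct

def serialize(frame_bytes: bytes) -> bytes:
    # 4-byte big-endian length header followed by the payload
    return struct.pack(">I", len(frame_bytes)) + frame_bytes

def deserialize(stream: bytes) -> list:
    frames, offset = [], 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        frames.append(stream[offset:offset + length])
        offset += length
    return frames
```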
In certain embodiments, time synchronization is maintained between the area controllers;
the area controller is further used for sending the recognition result and a timestamp corresponding to the recognition result to the central controller;
the central controller is further configured to generate corresponding instructions based on the received recognition results and timestamps.
In this way, because the area controllers are time-synchronized, after the central controller receives the time-stamped recognition results, it can use the recognition results belonging to the same instant, identified by their corresponding timestamps, to generate valid instructions, ensuring accurate processing of the in-vehicle data.
The application also provides a data processing method, which comprises the following steps:
a camera performs a shooting action to generate video data and sends the video data to an area controller, wherein there are a plurality of cameras and each area controller is connected with at least one camera;
the area controller performs recognition processing on the received video data to obtain a recognition result and sends the recognition result to the central controller, wherein the recognition result comprises the position and category of each object outside the vehicle, and each area controller is also connected with the central controller; and
the central controller generates corresponding instructions based on the received recognition result.
The application also provides a vehicle which comprises a vehicle body and the data processing system, wherein the data processing system is mounted on the vehicle body.
In the data processing method and the vehicle provided by the application, the camera performs a shooting action to generate video data and sends the video data to the area controller; the area controller performs recognition processing on the video data to recognize the position and category of each object outside the vehicle, obtains a recognition result, and sends the recognition result to the central controller; and the central controller generates corresponding instructions based on the recognition result.
Therefore, by introducing the area controller, the application connects the cameras in the vehicle to the central controller through the area controllers, reducing the wire-harness length; in addition, because the area controller performs the recognition processing on the video data captured by the cameras to obtain the recognition result, the central controller can directly execute the corresponding control operation from the recognition result, the area controller does not need to transmit the complete video data to the central controller, and the central controller does not need to perform the video recognition operation itself, reducing both the data transmission bandwidth and the load on the central controller.
Additional aspects and advantages of embodiments of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a data processing system in accordance with certain embodiments of the present application;
FIG. 2 is a schematic diagram of a data processing system in accordance with certain embodiments of the present application;
FIG. 3 is a schematic diagram of a data processing system in accordance with certain embodiments of the present application;
FIG. 4 is a schematic diagram of a data processing system in accordance with certain embodiments of the present application;
FIG. 5 is a flow chart of a data processing method according to some embodiments of the present application.
Description of main reference numerals:
100-data processing system, 101-central controller, 102-area controller, 103-camera, 104-first combination, 105-second combination, 106-third combination, 107-fourth combination, 108-sensor, 109-display, 110-serializer.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the embodiments of the present application and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1, the present application provides a data processing system 100, which comprises a central controller 101, a plurality of area controllers 102 and a plurality of cameras 103, wherein each area controller 102 is connected with at least one camera 103, and each area controller 102 is also connected with the central controller 101; the camera 103 is used for performing a shooting action to generate video data and sending the video data to the area controller 102; the area controller 102 is configured to perform recognition processing on the received video data to obtain a recognition result and send the recognition result to the central controller 101, wherein the recognition result comprises the position and category of each object outside the vehicle; and the central controller 101 is configured to perform a corresponding operation based on the received recognition result.
It will be appreciated that the data processing system 100 shown in fig. 1 is mounted in a vehicle, with the data processing system 100 being configured to perform data processing/interaction within the vehicle.
It will also be appreciated that, in the embodiment shown in fig. 1, each area controller 102 is connected to two or three of the 11 cameras 103 of the data processing system 100, and the 4 area controllers 102 are connected to the central controller 101.
It should be noted that, in the embodiment of the present application, the camera 103 may be used to capture exterior pictures/video to generate the corresponding video data. It should be understood that the number of cameras 103 can be set according to the actual conditions; the case of 11 cameras 103 shown in fig. 1 is only one example.
It should be further understood that the specific model and type of the camera 103 may be set according to the actual situation; for example, in some embodiments, each camera 103 is a surround-view camera and/or a look-around camera.
Further, after the camera 103 performs an image capturing operation to generate video data, the video data is transmitted to the area controller 102 instead of the central controller 101. The area controller 102 receives video data, performs corresponding video processing/data processing thereon, and transmits the obtained processing result to the central controller 101.
It should be understood that the area controller in the embodiment of the present application may be understood as a processor/arithmetic unit, and the area controller 102 may implement part of the functions of the central controller 101, such as data compression, object detection and image signal processing. The functions that the area controller 102 implements can be set according to the actual circumstances.
It should be further understood that each area controller 102 in the embodiment of the present application is connected to one or more cameras 103, and each area controller 102 is connected to the central controller 101, so the video data captured by one or more cameras 103 can be sent to the central controller 101 through a common link/line/cable (i.e., the link between the area controller 102 and the central controller 101); as shown in fig. 1, the data of two or three cameras 103 is transferred to the central controller 101 through one common link.
In this way, the connection relationship between the plurality of cameras 103 and the central controller 101 in the vehicle is simplified, and the in-vehicle harness length is reduced. In addition, how many cameras 103 one area controller 102 is connected to can be set according to the actual situation; the situation shown in fig. 1 is only one possible embodiment.
Further, since the area controller 102 according to the embodiment of the present application can be used to perform object detection, when the area controller 102 receives the video data sent by the camera 103, it detects the objects/targets in the video data to determine the position and category of each object in the video/exterior pictures, generates a recognition result based on those positions and categories, and finally sends the recognition result to the central controller 101. It will be appreciated that the bandwidth required to transmit the recognition result to the central controller 101 is smaller than that required to transmit the video data itself.
After receiving the recognition result, the central controller 101 knows the objects outside the vehicle and can therefore complete automatic driving according to the position of each object, or generate vehicle driving prompt information to remind the driver. For example, while the driver is reversing the vehicle, if the central controller 101 learns from the recognition result that a pedestrian behind the vehicle (i.e., an object of category "pedestrian") is closer to the vehicle than a preset distance, the central controller 101 controls the sound-generating unit in the vehicle to play the voice prompt "pedestrian in the reversing path". It will be appreciated that the central controller 101 of an embodiment of the present application is configured to generate corresponding instructions/commands to control the various devices/modules within the vehicle based on the data transmitted from the various sensors/modules/devices.
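A minimal sketch of the reversing example above follows; the threshold, the object type and the play_prompt callback are illustrative assumptions rather than the patent's implementation.

```python
# Minimal sketch of the reversing example; the threshold, the object type
# and play_prompt are illustrative assumptions.
from dataclasses import dataclass

PRESET_DISTANCE_M = 3.0  # assumed warning threshold in metres

@dataclass
class RecognizedObject:
    category: str       # e.g. "pedestrian", "vehicle", "lane line"
    distance_m: float   # distance from the vehicle, derived from the image position

def check_reversing_path(objects, is_reversing: bool, play_prompt) -> None:
    if not is_reversing:
        return
    for obj in objects:
        if obj.category == "pedestrian" and obj.distance_m < PRESET_DISTANCE_M:
            play_prompt("Pedestrian in the reversing path")
            break
```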
Thus, based on the introduction of the area controller 102, the cameras 103 in the vehicle are connected to the central controller 101 through the area controllers 102, reducing the harness length; in addition, because the area controller 102 performs recognition processing on the video data captured by the cameras 103 to obtain the recognition result, the central controller 101 can directly execute the corresponding control operation from the recognition result, so the area controller 102 does not need to transmit the complete video data to the central controller 101 and the central controller 101 does not need to perform the video recognition operation, reducing both the data transmission bandwidth and the load on the central controller 101.
Optionally, in some embodiments of the present application, the plurality of cameras 103 are divided into a plurality of combinations according to image-capturing direction, each combination comprising at least one surround-view camera and at least one look-around camera that capture the same vehicle-exterior direction, the surround-view camera and look-around camera of one combination being connected to the same area controller 102, and the video data comprising surround-view video data and look-around video data; the surround-view camera is used for performing a shooting action to generate the surround-view video data and sending the surround-view video data to the area controller 102; the look-around camera is used for performing a shooting action to generate the look-around video data and sending the look-around video data to the area controller 102.
Further, the area controller 102 is further configured to:
performing recognition processing on the received surround-view video data to obtain a recognition result, and sending the recognition result to the central controller 101;
transmitting the look-around video data to the central controller 101.
Further, the central controller 101 is configured to:
based on the received looking around video data and the recognition result, a corresponding instruction is generated.
For a clearer description of the embodiments of the present application, reference is made to fig. 2, a schematic diagram of a data processing system 100 according to some embodiments of the present application. Fig. 2 shows 4 combinations: a first combination 104 for shooting the east exterior direction (i.e., the right side of fig. 2), a second combination 105 for shooting the south exterior direction (i.e., the lower side of fig. 2), a third combination 106 for shooting the west exterior direction (i.e., the left side of fig. 2), and a fourth combination 107 for shooting the north exterior direction (i.e., the upper side of fig. 2). In some embodiments, as shown in fig. 2, the first combination 104, the second combination 105, the third combination 106 and the fourth combination 107 each include one look-around camera, and the remaining one or two cameras 103 in each combination are surround-view cameras.
It can be appreciated that, compared to the look-around camera, the surround-view camera has a smaller shooting angle; in the related art, the surround-view camera is often used for high-accuracy shooting in a specific direction (such as the forward direction). The shooting angle of the look-around camera is larger, and it can cover 180 degrees or even 360 degrees.
It can be further understood that, because of its overly large shooting angle, the edges of the video captured by the look-around camera may be distorted, while the video captured by the surround-view camera is not distorted. Therefore, the embodiment of the application completes target detection based on the surround-view video data, and the look-around video data of the look-around camera is used for generating other instructions.
Thus, the embodiment of the present application enables the area controller 102 to transmit both the recognition result and the look-around video data to the central controller 101, so that the central controller 101 can generate more instructions based on the recognition result and the look-around video data; meanwhile, the area controller 102 completes target recognition based on the higher-precision, distortion-free surround-view video data, ensuring the validity of the recognition result; in addition, because the area controller 102 transmits the recognition result and the look-around video data instead of both raw video streams, the high data-bandwidth requirement of directly transmitting the surround-view video data and the look-around video data to the central controller 101 is avoided, reducing the data-bandwidth pressure in the vehicle.
Optionally, to further reduce the load on the central controller 101, in some embodiments the area controller performs corresponding recognition/detection processing on the look-around video data to obtain a corresponding look-around video recognition result, and then sends the look-around video recognition result, the look-around video data and the recognition result to the central controller 101, so that the central controller does not need to perform recognition/detection processing on the look-around video data, reducing its load.
Optionally, in some embodiments, the look-around video recognition result generated by the area controller 102 is parking-related data, and the parking-related data includes parking-space line information.
Optionally, to avoid situations in which target recognition is unavailable or the vehicle-exterior look-around image cannot be generated because a single surround-view camera or look-around camera fails, in some embodiments the area controller 102 is further configured to:
check the received surround-view video data and look-around video data respectively to obtain corresponding check results, wherein a check result is either complete, indicating that the video data contains only complete frames, or incomplete, indicating that the video data contains incomplete frames;
if, for the surround-view video data and look-around video data corresponding to the same combination, the check result of the surround-view video data is incomplete and the check result of the look-around video data is complete, delete the surround-view video data and use the look-around video data as new surround-view video data;
if, for the surround-view video data and look-around video data corresponding to the same combination, the check result of the look-around video data is incomplete and the check result of the surround-view video data is complete, delete the look-around video data and use the surround-view video data as new look-around video data.
It can be understood that when an unexpected situation occurs, such as the camera 103 being blocked, the camera 103 failing to capture images/video, large-scale frame loss in the video, or poor video quality, the pictures captured by the camera 103 are incomplete or discontinuous with respect to reality, which affects both the target detection performed by the area controller 102 and the generation of the exterior look-around image by the central controller 101.
Thus, before performing object recognition and data compression, the area controller 102 according to the embodiment of the present application verifies the surround-view video data and the look-around video data captured in the same vehicle-exterior direction, to check whether the surround-view video data and the look-around video data contain complete pictures.
Further, if, within one combination, only the check result of the surround-view camera's video data is incomplete, or only the check result of the look-around camera's video data is incomplete, the area controller replaces the incomplete data with the video data whose check result is complete.
For example, taking the first combination 104 in fig. 2 as an example, the first combination 104 includes a surround-view camera and a look-around camera. If the check result of the surround-view video data is incomplete and the check result of the look-around video data is complete, the area controller 102 deletes the surround-view video data and uses the look-around video data simultaneously as the look-around video data and as the surround-view video data for data compression and object detection; in this case, the central controller 101 still generates the exterior look-around image based on the look-around video data.
Thus, the embodiment of the present application avoids the situation in which the area controller 102 and/or the central controller 101 cannot execute the corresponding operation because a single camera 103 is damaged, fails, or encounters an unexpected situation, ensuring the robustness of the data processing system 100.
Optionally, to ensure that the central controller 101 can generate a normal exterior look-around image from the look-around video data and that the area controller 102 can perform normal target detection based on the surround-view video data, in some embodiments, if the area controller 102 needs to use the look-around video data as new surround-view video data, the area controller 102 performs a corresponding style adjustment on the look-around video data so that factors such as its shooting angle, resolution and size are consistent with the surround-view video data. Similarly, when using the surround-view video data as new look-around video data, the area controller 102 also performs the corresponding style-adjustment operation.
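The check-and-substitute behaviour described above can be sketched as follows; the completeness heuristic, the video object's attributes and the restyle helper are assumptions introduced for the illustration.

```python
# Sketch (all names assumed) of the completeness check and fallback above.
# The patent does not specify how "complete" is judged; a frame-count
# heuristic stands in for it here.
def check_complete(video) -> bool:
    # assumption: complete means every expected frame is present and decodable
    return (len(video.frames) == video.expected_frames
            and all(frame is not None for frame in video.frames))

def substitute_within_combination(surround_view, look_around, restyle):
    """Apply the fallback rule to one combination; returns the possibly
    substituted (surround_view, look_around) pair."""
    sv_ok = check_complete(surround_view)
    la_ok = check_complete(look_around)
    if not sv_ok and la_ok:
        # delete the broken surround-view clip and restyle the look-around
        # clip (shooting angle, resolution, size) so it can stand in for it
        surround_view = restyle(look_around, like=surround_view)
    elif sv_ok and not la_ok:
        look_around = restyle(surround_view, like=look_around)
    return surround_view, look_around
```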
Optionally, to improve the recognition accuracy of the area controller 102, in some embodiments of the present application, the area controller 102 is further configured to:
select a plurality of first images from each video frame of the video data based on a selective search algorithm, each first image being a part of the video frame;
scale each first image so that all the first images have the same size, obtaining a second image corresponding to each first image;
perform feature extraction on each second image using a convolutional neural network to obtain a feature vector of the second image;
input the feature vector into a support vector machine model to obtain a detection result corresponding to the feature vector, the detection result comprising the category of the object in the first image corresponding to the feature vector, the categories comprising lane lines, road boundaries, traffic light symbols, road traffic signs, pedestrians and vehicles;
perform non-maximum suppression on the second images so as to keep one of two or more overlapping second images as a third image;
perform regression processing on the position of the third image in the video frame to obtain the image position of the third image;
obtain the position of the object in the third image based on the distance between the image position of the third image and a preset image center point; and
obtain the recognition result based on the position and category of the object in each third image.
That is, the area controller 102 according to the embodiment of the present application is equipped with a neural network model for completing the detection of objects/targets outside the vehicle.
Specifically, after receiving the video data, the area controller 102 selects candidate frames/region boxes (region proposals) for each image frame/video frame of the video data based on a selective search strategy, so as to approximate the possible locations of targets/objects through the region boxes; that is, the image content enclosed by a region box in the video frame is assumed to be an object. It will be appreciated that because different objects have different sizes, the range that must be enclosed for different objects in the video frame differs, i.e., different region boxes may have different sizes.
Then, the image enclosed by each region box, i.e., the first image, is scaled so that the different first images all have the same size after scaling, yielding a second image corresponding to each first image. The subsequent processing can therefore be completed on second images of equal size, avoiding the situation in which differing image sizes lead to excessively low prediction accuracy.
Next, feature extraction is performed on each second image based on a convolutional neural network (CNN) to obtain a feature vector capable of representing the second image. It will be appreciated that the specific structure of the convolutional neural network in the embodiments of the present application may be set according to the actual situation; in some embodiments, the convolutional neural network is implemented based on a VGG network (Visual Geometry Group network).
Then, the embodiment of the application performs binary classification on each feature vector based on pre-trained support vector machines (SVMs), so as to determine, from the feature vector, the category of the object/target in the corresponding second image, thereby obtaining the detection result corresponding to each second image. It should be understood that the number of support vector machines is set according to the actual situation; for example, because the present application needs to detect whether the first image corresponding to the feature vector contains any of a lane line, road boundary, traffic light symbol, road traffic sign, pedestrian or vehicle, the number of support vector machines is 6; that is, one feature vector is input into 6 different support vector machines, each performing a binary classification.
Next, non-maximum suppression is performed on the second images, so that when two or more second images wrap the same object, only the second image in which the object is complete and which does not contain too many background elements is retained as the third image. Optionally, to reduce the load on the area controller 102 and improve inference efficiency, in some embodiments, after the feature vector corresponding to a second image is input into the support vector machines to generate a prediction probability, the area controller 102 performs non-maximum suppression only on the second images whose prediction probability is greater than 0.5, i.e., only on the second images in which an object may be present.
Then, regression correction is performed on the third image to correct its position, i.e., the size of the region box corresponding to the third image is corrected so that the corrected region box fits the object in the frame more closely, thereby obtaining the image position of the third image. In some embodiments, the image position takes the form (x, y, h, w), where x and y represent the coordinates of the lower-left corner of the region box corresponding to the third image, and h and w represent the height and width of that region box.
Then, because the image position of the third image is obtained in a preset image coordinate system/camera coordinate system, and the position of the vehicle in that image coordinate system (i.e., the image center point) is pre-stored in the area controller 102, the position/distance of the object in the third image relative to the vehicle is determined from the pixel distance between the image position of the third image and the image center point, using the mapping between pixel distance and real distance, i.e., prior knowledge of how many real/actual meters one pixel unit corresponds to.
Finally, the positions of the objects in the third images relative to the vehicle are integrated to obtain the recognition result corresponding to the video data.
In this way, the embodiment of the present application obtains the recognition result corresponding to the video data through the neural network model, ensuring the reliability of the recognition result, so that the instructions generated and the operations executed by the central controller 101 based on the recognition result are more accurate and effective.
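The pipeline described above follows the classic R-CNN pattern (selective search, CNN features, per-category SVMs, non-maximum suppression, box regression). A schematic sketch of the data flow is given below; selective_search, cnn, svms and bbox_regressor stand for components assumed to be trained offline, and IMAGE_CENTER and METERS_PER_PIXEL are assumed calibration values.

```python
# Schematic sketch of the R-CNN-style flow described above. selective_search,
# cnn, svms and bbox_regressor are assumed, offline-trained components;
# IMAGE_CENTER and METERS_PER_PIXEL are assumed calibration values. Region
# boxes follow the (x, y, h, w) convention used in the description.
import cv2
import numpy as np

CATEGORIES = ["lane line", "road boundary", "traffic light symbol",
              "road traffic sign", "pedestrian", "vehicle"]
IMAGE_CENTER = np.array([640.0, 720.0])  # assumed vehicle position in image coordinates
METERS_PER_PIXEL = 0.05                  # assumed pixel-to-metre prior

def recognize_frame(frame, selective_search, cnn, svms, bbox_regressor):
    candidates = []
    for box in selective_search(frame):               # region proposals ("first images")
        x, y, h, w = box
        first = frame[y:y + h, x:x + w]
        second = cv2.resize(first, (224, 224))        # fixed-size "second image"
        feat = cnn(second)                            # feature vector
        scores = {cat: svm(feat) for cat, svm in zip(CATEGORIES, svms)}  # 6 binary SVMs
        category, score = max(scores.items(), key=lambda kv: kv[1])
        if score > 0.5:                               # only plausible regions enter NMS
            candidates.append((box, category, score))
    results = []
    for box, category, score in non_max_suppression(candidates):  # "third images"
        x, y, h, w = bbox_regressor(box)              # regression-corrected region box
        centre = np.array([x + w / 2.0, y + h / 2.0])
        distance_m = np.linalg.norm(centre - IMAGE_CENTER) * METERS_PER_PIXEL
        results.append({"category": category, "distance_m": distance_m,
                        "box": (x, y, h, w)})
    return results

def non_max_suppression(candidates, iou_threshold=0.5):
    ordered = sorted(candidates, key=lambda c: c[2], reverse=True)
    kept = []
    for cand in ordered:
        if all(iou(cand[0], k[0]) < iou_threshold for k in kept):
            kept.append(cand)
    return kept

def iou(a, b):
    ax, ay, ah, aw = a
    bx, by, bh, bw = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0
```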
In addition, because there are multiple cameras 103 in the embodiment of the present application, different cameras may capture objects at the same location. Therefore, in some embodiments, the area controller 102 may further perform object-coincidence detection when obtaining the recognition result, to determine whether different recognition results generated from different video data contain coincident/identical information (i.e., whether the recognition results corresponding to two sets of video data contain the same object at the same location); if so, only one copy of the duplicate information is retained to reduce redundancy. Similarly, when the central controller 101 obtains the recognition results of different area controllers 102, it also performs object-coincidence detection, deleting duplicates so that the information of the same object appearing in different recognition results is retained only once.
In addition, it can be understood that which action is performed or which instruction is triggered by the central controller 101 based on the object category and/or object position can be set according to the actual situation. For example, in some embodiments, when the central controller 101 determines based on the recognition result that the vehicle is too close to a solid road line, it controls the sound-generating unit in the vehicle to play solid-line lane-change warning information. In other embodiments, when the central controller 101 determines based on the recognition result that a road traffic sign marks a school zone, it controls the sound-generating unit in the vehicle to play a speed-reduction prompt for the school zone.
Thus, the embodiment of the present application enables the central controller 101 to trigger more effective/accurate instructions from the category and position of each object in the recognition result, effectively assisting the user in driving the vehicle or making the execution of automatic driving more accurate.
Alternatively, because the processing speeds/times of different area controllers 102 may differ, when the area controllers 102 receive video data captured at the same moment, the recognition results corresponding to that video data may be generated at different times by different area controllers; in other words, the central controller 101 will receive the recognition results for same-moment video data at different times. Thus, to ensure that the central controller 101 can generate correct instructions based on recognition results received at different times, in some embodiments of the present application, time synchronization is maintained between the area controllers 102.
Further, the area controller 102 is further configured to send the recognition result and the timestamp corresponding to the recognition result to the central controller;
the central controller 101 is further configured to generate corresponding instructions based on the received recognition results and timestamps.
That is, because the time of all the area controllers 102 is synchronized, after each area controller 102 sends its recognition result and the corresponding timestamp to the central controller 101, the central controller 101 can, once it has received the recognition results of all the area controllers 102 for the same timestamp, generate the corresponding instructions using those same-timestamp recognition results. For example, taking fig. 2 as an example, after the central controller 101 receives the recognition results carrying a 16:00 timestamp sent by the 4 area controllers 102, it generates the corresponding instructions using those 4 recognition results, thereby ensuring the validity of the instructions.
Optionally, in embodiments in which the recognition results are generated based on images/video frames/image frames, the timestamp refers to the capture time of the image to which the recognition result corresponds.
In this way, the embodiment of the application relies on the time-synchronized area controllers 102, so that after the central controller 101 receives the time-stamped recognition results, it can use the recognition results belonging to the same instant, identified by their corresponding timestamps, to generate valid instructions, ensuring accurate processing of the in-vehicle data.
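A minimal sketch of this timestamp-based alignment on the central-controller side follows; the per-timestamp buffering and the class structure are assumptions for illustration.

```python
# Sketch of timestamp-based fusion at the central controller: results are
# buffered per timestamp, and an instruction is generated only once every
# area controller has reported for that instant. Structure is an assumption.
from collections import defaultdict

class TimestampAligner:
    def __init__(self, num_area_controllers: int):
        self.num = num_area_controllers
        self.buffer = defaultdict(dict)  # timestamp -> {controller_id: result}

    def on_result(self, controller_id: int, timestamp: float, result):
        self.buffer[timestamp][controller_id] = result
        if len(self.buffer[timestamp]) == self.num:
            same_instant = self.buffer.pop(timestamp)
            return self.generate_instruction(timestamp, same_instant)
        return None  # still waiting for the remaining controllers

    def generate_instruction(self, timestamp, results_by_controller):
        # placeholder: fuse the four same-instant recognition results
        ...
```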
Optionally, to enable the central controller 101 to generate more accurate/effective instructions, in some embodiments, referring to fig. 3, a schematic diagram of the data processing system 100 according to some embodiments of the present application, the data processing system 100 of the embodiment of the present application further includes a sensor 108, wherein the sensor 108 is configured to detect the speed value of each object and send the speed value to the area controller 102, the sensor 108 is connected to the area controller 102, and the recognition result includes obstacle information and environment information.
Further, the area controller 102 is further configured to:
determine obstacle objects and environmental objects based on the category of the object in each third image;
obtain the obstacle information based on the category, position and speed value of each obstacle object;
obtain the environment information based on the category and position of each environmental object;
obtain the recognition result based on the environment information and the obstacle information;
Further, the central controller 101 is also configured to:
mark each obstacle object in a preset coordinate system according to the obstacle information in the recognition result to obtain obstacle distribution information;
generate an automatic driving instruction based on the obstacle distribution information and the environment information, and control the vehicle to run according to the automatic driving instruction.
It should be noted that although 4 sensors 108 are shown in fig. 3, with each sensor 108 connected to one area controller 102, in practice the number and connection relationships of the sensors 108 can be set according to the actual situation; for example, in some embodiments, the data processing system includes only one sensor 108, and that sensor 108 is connected to all the area controllers 102.
Further, after the area controller 102 determines the speed of each object outside the vehicle from the sensors 108, the area controller 102 classifies each object, according to its category, as either an environmental object belonging to the vehicle running environment or an obstacle object affecting the running of the vehicle.
In some embodiments, if the category of an object is any one of lane line, road boundary, traffic light symbol and road traffic sign, the object is classified as an environmental object.
In some embodiments, if the category of an object is pedestrian or vehicle, the object is classified as an obstacle object.
Furthermore, after the area controller distinguishes the environmental objects from the obstacle objects, the information of the two kinds of objects is packed and integrated to form the corresponding environment information and obstacle information.
In some implementations, the environment information includes: the number of lanes, the attributes of each lane line, the color/symbol of the traffic light, the position of the traffic light, the road traffic signs, and the positions of the road traffic signs. The lane-line attributes include the lane-line type, lane-line width and lane-line position, where the "lane line width" is the w in the (x, y, h, w) of the region box corresponding to the third image to which the environmental object belongs.
Similarly, the obstacle information includes: position, length, width, height, object category, speed and the lane in which the obstacle is located. Because the central controller 101 needs to generate instructions according to the coordinates of the different obstacles, the "position" in the obstacle information is characterized by the x and y in the (x, y, h, w) of the region box corresponding to the third image to which the obstacle belongs.
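For illustration, the two information structures listed above could be carried in containers like the following; the field names paraphrase the description and are assumptions, not the patent's own definitions.

```python
# Illustrative containers for the information listed above; the field names
# paraphrase the description and are assumptions, not the patent's definitions.
from dataclasses import dataclass, field

@dataclass
class LaneLineAttribute:
    line_type: str          # e.g. solid or dashed
    width: float            # the w of the (x, y, h, w) region box
    position: tuple

@dataclass
class EnvironmentInfo:
    lane_count: int
    lane_lines: list                    # list of LaneLineAttribute
    traffic_light_state: str            # colour / symbol
    traffic_light_position: tuple
    road_signs: list = field(default_factory=list)  # (sign type, position) pairs

@dataclass
class ObstacleInfo:
    position: tuple         # the (x, y) of the region box
    length: float
    width: float
    height: float
    category: str           # pedestrian / vehicle
    speed: float            # from the sensor 108
    lane: int
```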
After the area controller 102 has completed the environment information and the obstacle information, the two are combined into the above-described recognition result, and the recognition result is packed into a data packet and transmitted to the central controller 101. In some embodiments, the structure of the packed data packet is as shown in Table 1:
TABLE 1
Application layer: recognition result
Transport layer: UDP/TCP
Network layer: IP
Link layer: MAC
Physical layer: PHY
That is, the recognition result in the embodiment of the present application is encapsulated layer by layer into the corresponding messages/data segments according to the transmission protocols of the application layer, transport layer, network layer, link layer and physical layer. Specifically, the environment information and the obstacle information are combined into the recognition result, and the recognition result is encapsulated into a UDP (User Datagram Protocol) message or a TCP (Transmission Control Protocol) message; the UDP or TCP message is combined with the corresponding header fields and encapsulated into an IP (Internet Protocol) packet; the IP packet is encapsulated into a MAC frame based on the link-layer transmission protocol; finally, the MAC frame is encapsulated as a PHY (physical layer) frame and transmitted to the central controller 101 through a cable. The central controller 101 parses the received PHY frame layer by layer to obtain the recognition result.
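As an illustrative sketch of the application-layer step in Table 1, the recognition result can be serialized and handed to a UDP socket, with the operating system's network stack performing the IP/MAC/PHY encapsulation beneath it; the address, port and JSON encoding are assumptions.

```python
# Illustrative application-layer step from Table 1: serialize the recognition
# result and hand it to a UDP socket; the kernel performs the IP/MAC/PHY
# encapsulation below it. Address, port and JSON encoding are assumptions.
import json
import socket

CENTRAL_CONTROLLER_ADDR = ("192.168.1.10", 5005)  # assumed in-vehicle address

def send_recognition_result(environment_info: dict, obstacle_info: dict) -> None:
    result = {"environment": environment_info, "obstacles": obstacle_info}
    payload = json.dumps(result).encode("utf-8")   # application-layer message
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, CENTRAL_CONTROLLER_ADDR)  # UDP datagram
```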
Accordingly, after the central controller 101 obtains the recognition result, each obstacle is marked in the coordinate system based on the position, length, width, height, type of object, speed value, and lane where each obstacle is located in the coordinate system with the geometric center of the vehicle as the origin, so as to obtain the obstacle distribution information of the surrounding of the vehicle. Subsequently, the central controller 101 will acquire the positioning signal of the vehicle, and acquire map information corresponding to the positioning signal; and finally, generating corresponding automatic driving instructions based on the positioning signals, the map information, the environment information and the obstacle distribution information.
It will be understood that, as shown in fig. 2, the central controller 101 may receive the recognition results sent by the 4 area controllers 102, each covering one direction (for example, the recognition result corresponding to the first combination 104 includes the obstacles on the east side of the vehicle). After the central controller 101 marks the obstacles of each recognition result on the coordinate system in turn, obstacles seen from different directions may overlap (for example, the first combination 104 and the fourth combination 107 will each capture an obstacle in the northeast direction); the central controller 101 therefore retains only one of multiple obstacles whose coordinates overlap, so as to obtain an accurate and effective obstacle distribution, as sketched below.
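One plausible realization of this de-duplication step (an assumption, not the patent's prescribed algorithm) is to treat two marks as the same obstacle when their positions in the vehicle-centred coordinate system fall within a small distance threshold:

```python
import math
from typing import List

def deduplicate_obstacles(obstacles: List[dict], merge_radius: float = 0.5) -> List[dict]:
    """Retain one of any group of obstacles whose coordinates overlap.

    Positions are (x, y) pairs in a frame whose origin is the geometric
    centre of the vehicle; merge_radius (in metres) is an assumed threshold.
    """
    kept: List[dict] = []
    for obs in obstacles:
        x, y = obs["position"]
        is_duplicate = any(
            math.hypot(x - k["position"][0], y - k["position"][1]) < merge_radius
            for k in kept
        )
        if not is_duplicate:
            kept.append(obs)
    return kept
```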
In this way, the embodiment of the present application, based on the division into obstacle objects and environmental objects, enables the central controller 101 to generate more accurate instructions when performing the obstacle avoidance operation, thereby realizing functions such as vehicle distance detection, pedestrian detection, vehicle detection, non-motor-vehicle detection and lane route detection.
Optionally, because the central controller 101 may need to acquire the image data/video data outside the vehicle to execute a corresponding instruction, while directly transmitting the captured video data to the central controller 101 would require considerable bandwidth, in some embodiments of the present application, in order to reduce the bandwidth requirement, the area controller 102 is further configured to perform data compression on the video data to obtain compressed data, and send the compressed data to the central controller 101;
The central controller 101 is further configured to decompress the compressed data to obtain video data, and generate corresponding instructions based on the video data.
That is, the zone controller 102 according to the embodiment of the present application also has a video compression function and can compress video data with a large data amount, so that the video data can be transmitted with a small bandwidth. In some embodiments, the zone controller 102 performs compression based on the H.264 protocol or the H.265 protocol.
In some embodiments, the region controller 102 performs compression based on the H.264 protocol, because H.264 achieves a high compression ratio and can meet the requirements of different rates, different resolutions and different transmission occasions.
Thus, when transmitting video data to the central controller 101, the zone controller 102 of embodiments of the present application transmits compressed video data instead of raw/unprocessed video data, thereby reducing the bandwidth resources required for video data transmission.
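As one hedged example, the zone controller's H.264 compression could be delegated to a standard encoder such as ffmpeg; the resolution, frame rate and encoder presets below are illustrative assumptions, not values from the patent:

```python
import subprocess

def compress_h264(raw_frames: bytes, width: int = 1920,
                  height: int = 1080, fps: int = 30) -> bytes:
    """Encode raw BGR frames into an H.264 elementary stream via ffmpeg."""
    proc = subprocess.run(
        [
            "ffmpeg",
            "-f", "rawvideo", "-pix_fmt", "bgr24",      # describe the raw input
            "-s", f"{width}x{height}", "-r", str(fps),
            "-i", "-",                                   # read frames from stdin
            "-c:v", "libx264",                           # H.264 encoder
            "-preset", "ultrafast", "-tune", "zerolatency",
            "-f", "h264", "-",                           # write the stream to stdout
        ],
        input=raw_frames,
        stdout=subprocess.PIPE,
        check=True,
    )
    return proc.stdout
```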
Alternatively, in some embodiments where the video data includes both the panoramic video data and the surround-view video data, the area controller 102 of the present application performs recognition processing on the panoramic video data to obtain a recognition result and sends the recognition result to the central controller 101, and performs data compression on the surround-view video data to obtain compressed data and sends the compressed data to the central controller 101.
Optionally, because the user needs to observe the conditions/environment outside the vehicle while driving, but the space limitation of the driver's cab and the reflection angle limitation of the rearview mirrors make it difficult for the user to observe all directions outside the vehicle completely, in some embodiments, referring specifically to fig. 2 or fig. 3, the data processing system 100 of the embodiment of the present application further includes a display 109, and the central controller 101 is connected to the display 109.
Further, the central controller 101 is also configured to:
decompressing the received compressed data to obtain video data;
an outside-vehicle surround-view image is generated from the video data and sent to the display 109.
The display 109 is used for displaying the received outside-vehicle surround-view image.
That is, after the central controller 101 receives the video data of each camera 103 through the area controllers 102, the video data of the cameras 103 are spliced/fused to generate the outside-vehicle surround-view image. The central controller 101 then sends the surround-view image to the display 109, and the display 109 displays it so that the user can observe the conditions outside the vehicle through the display 109.
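A simplified sketch of the splicing/fusion step, assuming each camera's homography onto a common ground-plane canvas has been obtained from a prior extrinsic calibration (the homographies, canvas size and blending rule are all assumptions, not the patent's method):

```python
import cv2
import numpy as np

def make_surround_view(frames, homographies, canvas_size=(800, 800)):
    """Warp each decoded camera frame onto a shared top-down canvas.

    frames:       list of BGR images, one per camera 103
    homographies: list of 3x3 matrices from prior calibration
    """
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=np.uint8)
    for frame, H in zip(frames, homographies):
        warped = cv2.warpPerspective(frame, H, canvas_size)
        mask = warped.sum(axis=2) > 0
        # Simple overwrite blending; a production system would feather seams.
        canvas[mask] = warped[mask]
    return canvas
```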
In some embodiments, the display 109 is a display device/screen that is disposed in a fixed location within the vehicle.
In other embodiments, the display 109 is a terminal device (e.g., a mobile phone and/or tablet computer) of the user, so that the user can conveniently observe the situation outside the vehicle through the terminal device.
In this way, the embodiment of the application enables the user to observe the conditions outside the vehicle in real time through the surround-view image displayed on the display 109, thereby avoiding the observation difficulties caused by the space limitation of the driver's cab and the reflection angle limitation of the rearview mirrors.
Optionally, in some embodiments where the video data includes the panoramic video data and the surround-view video data, the area controller 102 performs recognition processing on the panoramic video data to obtain a recognition result and sends it to the central controller 101, and compresses the surround-view video data to obtain compressed data and sends it to the central controller 101; the central controller 101 decompresses the received compressed data to obtain the surround-view video data, and fuses the surround-view video data sent by the area controllers to generate the above-mentioned surround-view image.
Optionally, to improve the accuracy of target detection, in some embodiments of the present application, the area controller 102 is further configured to:
performing image signal processing on the received video data to obtain signal-optimized video data;
and performing recognition processing on the signal-optimized video data to obtain a recognition result, and sending the recognition result to the central controller.
That is, before performing the target detection operation, the area controller 102 also performs image signal processing (ISP) on the video data to improve its video/image quality. It will be appreciated that the specific manner of image signal processing can be set/selected according to the circumstances; for example, in some embodiments, image signal processing includes a combination of one or more of noise removal, contrast adjustment and brightness adjustment, as sketched below.
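As a minimal sketch of such a combination (the parameters are illustrative assumptions, not values from the patent), the three operations could be realized with common OpenCV primitives:

```python
import cv2

def optimize_frame(frame):
    """Noise removal, contrast adjustment and brightness adjustment."""
    # Noise removal: non-local means denoising (strengths are assumptions).
    frame = cv2.fastNlMeansDenoisingColored(frame, None, 5, 5, 7, 21)
    # Contrast adjustment: CLAHE applied to the luminance channel.
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    frame = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    # Brightness adjustment: a constant offset on all pixel values.
    return cv2.convertScaleAbs(frame, alpha=1.0, beta=10)
```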
Therefore, the embodiment of the application can complete target detection based on higher-quality video data, thereby improving the precision of target detection; for the same reason, the instructions generated by the central controller 101 based on the recognition result have higher reliability/validity.
Alternatively, in order for the central controller to generate higher-quality outside-vehicle surround-view images, in some embodiments of the present application the region controller 102 performs the above-described image signal processing on both the panoramic video data and the surround-view video data.
Optionally, in order to reduce data loss between the cameras 103 and the area controllers 102, in some embodiments of the present application, referring specifically to fig. 4 (a schematic diagram of a data processing system according to some embodiments of the present application), the data processing system 100 further includes a plurality of serializers 110, and the cameras 103 are connected to the area controllers 102 through the serializers 110. The camera 103 is further configured to perform a shooting action to generate video data and send the video data to the serializer 110; the serializer 110 is configured to convert the video data into serial data and send the serial data to the area controller 102; and the area controller 102 is further configured to deserialize the received serial data to obtain the video data, perform recognition processing on the video data to obtain a recognition result, and send the recognition result to the central controller 101.
That is, since the video data of the camera 103 may be lost in the process of being transferred to the area controller 102, the embodiment of the present application avoids this by having the camera 103 send the video data to the serializer 110 for serial conversion, so that the serialized video data avoids data loss as much as possible during transmission.
After receiving the serialized video data, i.e. the serial data, the area controller 102 deserializes it to restore the normal video data.
In some embodiments, the serializer 110 completes the serialization of the video data based on the Gigabit Multimedia Serial Link (GMSL) protocol or the Flat Panel Display Link (FPD-Link) protocol.
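GMSL and FPD-Link are hardware link protocols, so any software example can only be an analogy; the conceptual sketch below (all framing details are assumptions) shows why wrapping each chunk of video data with a length header and a checksum lets the receiving zone controller detect loss or corruption when deserializing:

```python
import struct
import zlib

def serialize_chunk(video_bytes: bytes) -> bytes:
    """Frame a chunk of video data as [length][CRC32][payload]."""
    header = struct.pack("!II", len(video_bytes), zlib.crc32(video_bytes))
    return header + video_bytes

def deserialize_chunk(frame: bytes) -> bytes:
    """Recover the payload, raising if it arrived truncated or corrupted."""
    length, crc = struct.unpack("!II", frame[:8])
    payload = frame[8:8 + length]
    if len(payload) != length or zlib.crc32(payload) != crc:
        raise ValueError("incomplete or corrupted video chunk")
    return payload
```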
In this way, the embodiment of the present application serializes the video data, so that data loss in the process of transmitting the video data to the area controller 102 can be avoided as much as possible, thereby ensuring effective data transmission.
Optionally, to further reduce the length of the cables in the vehicle, in some embodiments of the present application (see any one of figs. 1, 2 and 3), each camera 103 is connected to the adjacent/nearby area controller 102, so that the cable connecting the camera 103 and the area controller 102 is as short as possible, thereby reducing the difficulty of wiring in the vehicle.
Further, it can be appreciated that in some embodiments, one zone controller 102 may be connected to multiple cameras through one serializer 110. In other embodiments, one zone controller 102 may be connected to multiple cameras through multiple serializers 110; for example, when one zone controller 102 needs to receive video data transmitted by 3 cameras 103, each of the 3 cameras is connected to the zone controller 102 through a separate serializer 110.
Referring to fig. 5, an embodiment of the present application further provides a data processing method, including:
0210, the cameras perform shooting actions to generate video data and send the video data to the area controllers, wherein there are a plurality of cameras and one area controller is connected with at least one camera;
0220, the area controllers perform recognition processing on the received video data to obtain recognition results and send the recognition results to the central controller, wherein the recognition results include the positions and categories of the objects outside the vehicle, and each area controller is also connected with the central controller;
0230, the central controller generates a corresponding instruction based on the received recognition result; an illustrative sketch of these three steps follows.
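Purely as an illustrative sketch (every name below is a placeholder standing in for the behaviour described in steps 0210-0230, not an API defined by the patent), the three steps compose as follows:

```python
def data_processing_method(cameras, zone_controllers, central_controller):
    # 0210: each camera captures video and forwards it to its zone controller.
    for camera in cameras:
        camera.zone_controller.receive(camera.capture())
    # 0220: each zone controller recognizes objects and reports the result.
    results = [zc.recognize() for zc in zone_controllers]
    # 0230: the central controller turns the results into driving instructions.
    return central_controller.generate_instructions(results)
```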
It can be appreciated that the data processing method provided in the embodiment of the present application can implement each step executed by each device in the data processing system 100 and achieve the same technical effects; to avoid repetition, the details are not described here again.
The embodiment of the application also provides a vehicle, which comprises a vehicle body and the data processing system 100, wherein the data processing system is mounted on the vehicle body.
It can be understood that the vehicle body in the embodiment of the application is a combination of various devices/equipment, which are configured according to the actual situation. It will also be appreciated that the data processing system 100 of embodiments of the present application cooperates with the devices/equipment on the vehicle body to assist the user in driving and/or controlling the vehicle, thereby achieving the technical effects of the data processing system 100 described above.
In the description of the present specification, reference to the terms "certain embodiments," "in one example," "illustratively," and the like, means that a particular feature, structure, material, or characteristic described in connection with the embodiments or examples is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (12)

1. A data processing system, comprising a central controller, a plurality of regional controllers and a plurality of cameras, wherein one regional controller is connected with at least one camera, and each regional controller is also connected with the central controller;
The camera is used for executing shooting actions to generate video data and sending the video data to the area controller;
The regional controller is used for carrying out recognition processing on the received video data to obtain a recognition result, and sending the recognition result to the central controller, wherein the recognition result comprises the position and the category of each object outside the vehicle;
the central controller is used for generating corresponding instructions based on the received recognition result.
2. The data processing system of claim 1, wherein the plurality of cameras are divided into a plurality of combinations according to camera direction, each of the combinations including at least one panoramic camera and at least one surround-view camera for shooting the same off-vehicle direction, the panoramic camera and the surround-view camera of one combination each being connected to the same regional controller, and the video data including panoramic video data and surround-view video data;
The panoramic camera is used for executing shooting actions to generate the panoramic video data and sending the panoramic video data to the area controller;
the surround-view camera is used for executing shooting actions to generate the surround-view video data and sending the surround-view video data to the area controller;
the zone controller is further configured to:
Performing recognition processing on the received panoramic video data to obtain a recognition result, and sending the recognition result to the central controller;
Transmitting the surround-view video data to the central controller;
The central controller is used for:
And generating corresponding instructions based on the received surround-view video data and the recognition result.
3. The data processing system of claim 2, wherein the zone controller is further configured to:
The received panoramic video data and the received surround-view video data are respectively checked to obtain corresponding check results, wherein the check result is complete or incomplete, complete indicating that the frames of the panoramic video data/surround-view video data are all complete, and incomplete indicating that the panoramic video data/surround-view video data includes incomplete frames;
if, among the panoramic video data and the surround-view video data corresponding to the same combination, the check result of the panoramic video data is incomplete and the check result of the surround-view video data is complete, deleting the panoramic video data and taking the surround-view video data as new panoramic video data;
and if, among the panoramic video data and the surround-view video data corresponding to the same combination, the check result of the surround-view video data is incomplete and the check result of the panoramic video data is complete, deleting the surround-view video data and taking the panoramic video data as new surround-view video data.
4. The data processing system of claim 1, wherein the zone controller is further configured to:
selecting, for each video frame of the video data, a plurality of first images in the video frame based on a selective search algorithm, the first images being parts of the video frame;
Scaling each first image to enable the sizes of the first images to be the same, and obtaining a second image corresponding to each first image;
Performing feature extraction on the second image by using a convolutional neural network to obtain a feature vector of the second image;
Inputting the feature vector into a support vector machine model to obtain a detection result corresponding to the feature vector, wherein the detection result comprises the category of the object in the first image corresponding to the feature vector, and the category comprises lane lines, road boundaries, traffic light symbols, road traffic marks, pedestrians and vehicles;
performing non-maximum suppression on each second image so as to take one of two or more overlapped second images as a third image;
performing regression processing on the position of the third image in the first image to obtain the image position of the third image;
Obtaining the position of the object in the third image based on the distance between the image position of the third image and a preset image center point;
and obtaining the identification result based on the position and the category of the object in each third image.
5. The data processing system of claim 4, further comprising a sensor for detecting a speed value of each of the objects and transmitting the speed value to the zone controller, the sensor being connected to the zone controller, the recognition result including obstacle information and environmental information;
the zone controller is further configured to:
Determining obstacle objects and environmental objects based on the categories of the objects of each of the third images;
obtaining the obstacle information based on the category, the position and the speed value of each obstacle;
Obtaining environmental information based on the category and the position of each environmental object;
obtaining the identification result based on the environment information and the obstacle information;
the central controller is further configured to:
Marking each obstacle object in a preset coordinate system according to the obstacle information in the recognition result to obtain obstacle distribution information;
and generating an automatic driving instruction based on the obstacle distribution information and the environmental information, and controlling the vehicle to run according to the automatic driving instruction.
6. The data processing system of claim 1, wherein the region controller is further configured to perform data compression on the video data to obtain compressed data, and to send the compressed data to the central controller;
the central controller is also used for decompressing the compressed data to obtain the video data and generating corresponding instructions based on the video data.
7. The data processing system of claim 6, further comprising a display, wherein the central controller is coupled to the display;
the central controller is further configured to:
decompressing the received compressed data to obtain the video data;
Generating an outside-vehicle surround-view image according to the video data, and sending the outside-vehicle surround-view image to the display;
the display is used for displaying the received outside-vehicle surround-view image.
8. The data processing system of claim 1, wherein the zone controller is further configured to:
Performing image signal processing on the received video data to obtain signal-optimized video data;
and carrying out recognition processing on the signal-optimized video data to obtain a recognition result, and sending the recognition result to the central controller.
9. The data processing system of claim 1, further comprising a plurality of serializers through which the cameras are connected to the zone controller;
The camera is also used for executing shooting actions to generate video data and sending the video data to the serializer;
the serializer is used for transmitting the serial data to the regional controller after converting the video data into the serial data;
The regional controller is also used for performing deserialization processing on the received serial data to obtain the video data, performing recognition processing on the video data to obtain a recognition result, and sending the recognition result to the central controller.
10. The data processing system of claim 1, wherein time synchronization is maintained between each of said zone controllers;
The regional controller is also used for sending the identification result and the timestamp corresponding to the identification result to the central controller;
the central controller is further configured to generate a corresponding instruction based on the received identification result and the timestamp.
11. A method of data processing, comprising:
The method comprises the steps that a camera performs shooting action to generate video data and sends the video data to a regional controller, wherein the number of the cameras is multiple, and one regional controller is connected with at least one camera;
The regional controllers perform recognition processing on the received video data to obtain recognition results and send the recognition results to the central controller, wherein the recognition results include the positions and categories of the objects outside the vehicle, and each regional controller is also connected with the central controller;
the central controller generates corresponding instructions based on the received recognition results.
12. A vehicle comprising a vehicle body and the data processing system of any one of claims 1-10, the data processing system being onboard the vehicle body.