CN117880467A - Video display control method, cabin driving fusion chip and vehicle - Google Patents

Video display control method, cabin driving fusion chip and vehicle

Info

Publication number
CN117880467A
Authority
CN
China
Prior art keywords
video stream
video
intelligent driving
display
key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410037700.1A
Other languages
Chinese (zh)
Inventor
李佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Changxing Zhijia Automobile Technology Co., Ltd.
Original Assignee
Suzhou Changxing Zhijia Automobile Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Changxing Zhijia Automobile Technology Co., Ltd.
Priority to CN202410037700.1A
Publication of CN117880467A
Legal status: Pending

Abstract

The application relates to the technical field of vehicle control, and in particular to a video display control method, a cabin driving fusion chip and a vehicle. The method comprises the following steps: an intelligent driving control module determines intelligent driving function state information according to a plurality of video streams and sends it to a video display module; if the video display module determines that the intelligent driving function state information indicates that the working state of at least one intelligent driving function is on, it determines at least one first target video stream from the plurality of video streams; and the display is controlled to present the at least one first target video stream. The method realizes unified access to the video streams, avoids carrying the video streams a second time, and shortens the response time. Because the corresponding target video stream is displayed as soon as an intelligent driving function is triggered, the user's confidence in automated driving is reinforced; in addition, if an extremely dangerous situation occurs, the user can take over the vehicle in advance according to the target video stream presented on the display, improving driving safety.

Description

Video display control method, cabin driving fusion chip and vehicle
Technical Field
The application relates to the technical field of vehicle control, in particular to a video display control method, a cabin driving fusion chip and a vehicle.
Background
A vehicle's intelligent driving system, on the one hand, performs perception and recognition on the video streams from the cameras around the vehicle body to carry out the intelligent driving functions; on the other hand, it forwards the video streams to the intelligent cabin system to implement the DVR (driving video recorder) function. The two functions are not provided to the user at the same time, and there is no linkage between them. That is, when an intelligent driving scenario occurs (such as a lane change, entering or exiting a ramp, overtaking, crossing an intersection, driving straight in the current lane, or triggering of the automatic emergency braking (AEB) system), the DVR does not pull up the video stream for display; the user's confidence in intelligent driving therefore cannot be reinforced, and under extremely dangerous conditions the user cannot take over the vehicle in advance.
Disclosure of Invention
The technical problem to be solved by the application is as follows: how to enable the intelligent driving system and the DVR of the intelligent cabin system to acquire the video streams of the cameras around the vehicle body simultaneously in an intelligent driving scenario, thereby improving the user's confidence in automated driving.
In view of the above technical problems, according to a first aspect of the present application, a video display control method is provided, which is applied to a cabin driving fusion chip; the cabin driving fusion chip is arranged in the vehicle; an intelligent driving control module and a video display module are arranged in the cabin driving fusion chip; the intelligent driving control module is connected with the video display module; the video display module is also connected with a display;
The method comprises the following steps:
the intelligent driving control module determines intelligent driving function state information according to a plurality of video streams and sends the intelligent driving function state information to the video display module; the intelligent driving function state information is used for representing the working states of a plurality of intelligent driving functions; the plurality of video streams cover the viewing angles around the entire vehicle body, each video stream corresponding to one viewing angle;
if the video display module determines that the intelligent driving function state information indicates that the working state of at least one intelligent driving function is on, determining at least one first target video stream from a plurality of video streams according to the intelligent driving function with each working state being on;
the video display module controls the display to display at least one first target video stream.
In an exemplary embodiment of the present application, the determining at least one first target video stream from a plurality of video streams according to each intelligent driving function whose working state is on includes:
determining at least one first target video stream from a plurality of video streams according to a preset video stream function mapping table and an intelligent driving function with each working state being on; the video stream function mapping table comprises mapping relations between each intelligent driving function and at least one video stream.
In an exemplary embodiment of the present application, the video display module controls the display to display at least one of the first target video streams, including:
if there are a plurality of first target video streams, the video display module fuses the plurality of first target video streams into a first key video stream; otherwise, the first target video stream is determined as the first key video stream;
and controlling the display to display the first key video stream.
In an exemplary embodiment of the present application, if there are a plurality of first target video streams, the video display module fusing the plurality of first target video streams into a first key video stream includes:
performing image stitching on the video frames of the first target video streams at the same moment to obtain stitched video frames;
creating rendering threads equal in number to the layers of each stitched video frame; each rendering thread corresponds to the rendering task of one layer; the rendering tasks of different layers are different;
controlling each rendering thread to execute its corresponding rendering task to obtain a plurality of rendered layers for each stitched video frame;
merging the plurality of rendered layers of each stitched video frame according to a preset image synthesis technique to obtain rendered video frames;
and obtaining the first key video stream from the rendered video frames.
In an exemplary embodiment of the present application, the cabin driving fusion chip is further provided with a shared memory module; the shared memory module is connected with the intelligent driving control module and the video display module at the same time;
the shared memory module is used for receiving and storing video streams uploaded by a plurality of image sensors on the vehicle, so that the intelligent driving control module and the video display module can acquire each video stream from the shared memory module.
In an exemplary embodiment of the present application, the shared memory module sends a storage address corresponding to each video stream to the intelligent driving control module and the video display module after each power-up of the vehicle.
In an exemplary embodiment of the present application, the method further includes:
if the latest intelligent driving function state information received by the video display module indicates that the working state of at least one intelligent driving function is on in the process of displaying the first key video stream by the display, determining at least one second target video stream from a plurality of video streams according to the at least one intelligent driving function with the current working state being on;
And if the view angle corresponding to any second target video stream is different from the view angle corresponding to each first target video stream, acquiring a second key video stream according to at least one second target video stream, and controlling the display to display the first key video stream and the second key video stream simultaneously.
In an exemplary embodiment of the present application, the corresponding display areas of the first key video stream and the second key video stream on the display are different.
In an exemplary embodiment of the present application, the controlling the display to simultaneously display the first key video stream and the second key video stream includes:
acquiring display priorities of a first key video stream and the second key video stream;
determining the display areas of the first key video stream and the second key video stream according to the display priority;
and controlling the display according to the display area to display the first key video stream and the second key video stream simultaneously.
According to a second aspect of the present application, a cabin-driving fusion chip is provided, for implementing the video display control method described above, where the cabin-driving fusion chip is disposed in a vehicle; an intelligent driving control module and a video display module are arranged in the cabin driving fusion chip; the intelligent driving control module is connected with the video display module; the video display module is also connected with a display;
The intelligent driving control module is used for determining intelligent driving function state information according to a plurality of video streams and sending the intelligent driving function state information to the video display module; the intelligent driving function state information is used for representing the working states of a plurality of intelligent driving functions; the plurality of video streams cover the viewing angles around the entire vehicle body, each video stream corresponding to one viewing angle;
the video display module is used for determining at least one first target video stream from a plurality of video streams according to the intelligent driving function with each working state being on if the intelligent driving function state information is determined to indicate that the working state of at least one intelligent driving function is on; and controlling said display to present at least one of said first target video streams.
According to a third aspect of the present application, there is provided a vehicle provided with the cabin-driving fusion chip described above.
The application has at least the following beneficial effects:
the video display control method is applied to the cabin driving fusion chip of a vehicle, the cabin driving fusion chip being formed by arranging an intelligent driving control module and a video display module in one chip. The intelligent driving control module first determines the current intelligent driving function state information according to a plurality of video streams covering the entire vehicle body and sends it to the video display module; the intelligent driving function state information is used for representing the working states of a plurality of intelligent driving functions. When the working state of at least one intelligent driving function is on, i.e., the vehicle is changing lanes, entering or exiting a ramp, overtaking, crossing an intersection, driving straight in its lane, the automatic emergency braking (AEB) system is triggered, or the like, the video display module selects the video streams corresponding to the currently enabled intelligent driving functions from the plurality of video streams around the vehicle body as target video streams, and the display connected with the video display module presents them. This realizes unified access to the video streams, avoids carrying the video streams a second time, and shortens the response time. Because the corresponding target video stream is displayed as soon as an intelligent driving function is triggered, the user's confidence in automated driving is reinforced; in addition, if an extremely dangerous situation occurs, the user can take over the vehicle in advance according to the target video stream presented on the display, improving driving safety.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a video display control method according to an embodiment of the present application;
Fig. 2 is a block diagram illustrating the operation of a cabin driving fusion chip according to an embodiment of the present application;
Fig. 3 is a flowchart of another video display control method according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
As shown in fig. 1 and 2, one embodiment of the present application provides a video display control method applied to a cabin driving fusion chip; the cabin driving fusion chip is arranged in the vehicle; an intelligent driving control module and a video display module are arranged in the cabin driving fusion chip (cabin driving fusion ECU); the intelligent driving control module is connected with the video display module; the video display module is also connected with a display (central control screen).
Here, cabin driving fusion (Cockpit Integration) refers to the concept of integrating the various systems and functions of the automobile cabin so that they operate collaboratively as a whole. The main characteristics of cabin driving fusion include: (1) deep integration of the various electronic devices in the cockpit, such as the navigation system, the entertainment system and the driving assistance system; (2) a unified operation platform and a unified human-machine interaction interface, such as a central control screen, touch control and voice interaction; (3) cross-domain sharing of data and resources among different systems, for example, the navigation map can be used for lane-change prompts; (4) functional connection and cooperative operation among different systems, realizing seamless connection among the function modules; (5) support for Over-The-Air (OTA) software upgrades, enabling upgrades and function optimization at any time; (6) open interface standards, facilitating third-party application integration; (7) a unified computing platform at the bottom layer that manages runtime and resources, improving operating efficiency.
The cabin driving fusion chip integrates CPU, GPU, DSP, NPU, ISP and other computing resources, improves the interaction efficiency of the driver experience, improves system operating stability and resource utilization, realizes cooperative application among the vehicle's functional modules, and facilitates later function expansion, upgrading and updating, constructing an integrated intelligent driving space.
Further, an intelligent driving control module and a video display module are arranged in the cabin driving fusion chip. The intelligent driving control module can complete lateral and longitudinal intelligent control of the vehicle's actuation systems under specific designed operating conditions; the video display module is linked with the display to present the video streams of the entire vehicle body. The video streams of the vehicle body are, for example, the left-front, left-rear, right-front, right-rear, front-view and rear-view video streams acquired by the cameras on the vehicle body.
The method comprises the following steps:
S100, an intelligent driving control module determines intelligent driving function state information according to a plurality of video streams and sends the intelligent driving function state information to the video display module; the intelligent driving function state information is used for representing the working states of a plurality of intelligent driving functions; the plurality of video streams cover the viewing angles around the entire vehicle body, each video stream corresponding to one viewing angle.
Specifically, intelligent driving may be activated by the driver when the vehicle state satisfies the activation conditions. After activation, the intelligent driving control module acquires the video streams of the different viewing angles around the entire vehicle body and analyzes them; the cabin driving fusion chip can then send control instructions (longitudinal acceleration commands and lateral steering-angle commands) to the vehicle control ECU, and the vehicle control ECU drives the lateral and longitudinal actuators to perform acceleration/deceleration control and steering-angle control, thereby completing the following intelligent driving function scenarios: lane change, entering or exiting a ramp, overtaking, crossing an intersection, driving straight in the current lane, triggering of automatic emergency braking (Autonomous Emergency Braking, AEB), triggering of rear collision warning (Rear Collision Warning, RCW), and the like. The intelligent driving control module sends the intelligent driving function state information to the video display module through the DDS protocol.
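As an illustration of the state information exchanged here, it can be modeled as a map from each intelligent driving function to its working state. The following minimal Python sketch makes this concrete; the enum members and field names are assumptions for illustration only, since the application does not specify the actual DDS message schema.

```python
# Minimal sketch of the intelligent driving function state information.
# Enum members and field names are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum, auto

class DrivingFunction(Enum):
    LANE_CHANGE_LEFT = auto()
    LANE_CHANGE_RIGHT = auto()
    RAMP = auto()
    OVERTAKE = auto()
    INTERSECTION = auto()
    LANE_STRAIGHT = auto()
    AEB = auto()
    RCW = auto()

@dataclass
class DrivingFunctionStateInfo:
    # Working state of every intelligent driving function (True = on).
    states: dict = field(
        default_factory=lambda: {f: False for f in DrivingFunction})

    def active_functions(self):
        """Return the functions whose working state is on."""
        return [f for f, on in self.states.items() if on]
```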
And S200, if the video display module determines that the intelligent driving function state information indicates that the working state of at least one intelligent driving function is on, determining at least one first target video stream from a plurality of video streams according to the intelligent driving function with each working state being on.
Specifically, when the intelligent driving function state information indicates that the working state of at least one intelligent driving function (lane change, entering or exiting a ramp, overtaking, crossing an intersection, driving straight in the current lane, triggering AEB, etc.) is on, the corresponding at least one first target video stream is determined according to the intelligent driving functions that are on, specifically:
Determining at least one first target video stream from a plurality of video streams according to a preset video stream function mapping table and an intelligent driving function with each working state being on; the video stream function mapping table comprises mapping relations between each intelligent driving function and at least one video stream.
Here, the preset video stream function mapping table covers a plurality of intelligent driving functions (lane change, entering or exiting a ramp, overtaking, crossing an intersection, driving straight in the current lane, triggering AEB, etc.), that is, the mapping relationship between each intelligent driving function and its corresponding video streams. As an example: for a left lane change, the target video streams are determined to be the left-front and left-rear video streams; for a right lane change, the right-front and right-rear video streams; for entering or exiting a ramp, the left-front, left-rear, right-front and right-rear video streams; when the vehicle passes through an intersection going straight, turning left or turning right, or is overtaking, the target video streams are determined to be those of the front-view, left-rear-view, right-front-view, right-rear-view and rear-view cameras; when automatic emergency braking (AEB) is triggered, the target video stream is the front-view video stream; and when rear collision warning (RCW) is triggered, the target video stream is the rear-view video stream.
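A minimal sketch of such a mapping table, following the examples just given, is shown below; the view names, the table structure and the lookup helper are illustrative assumptions (DrivingFunction is the enum from the earlier sketch).

```python
# Sketch of the preset video stream function mapping table.
FUNCTION_TO_VIEWS = {
    DrivingFunction.LANE_CHANGE_LEFT:  ["left_front", "left_rear"],
    DrivingFunction.LANE_CHANGE_RIGHT: ["right_front", "right_rear"],
    DrivingFunction.RAMP:              ["left_front", "left_rear",
                                        "right_front", "right_rear"],
    DrivingFunction.INTERSECTION:      ["front", "left_rear", "right_front",
                                        "right_rear", "rear"],
    DrivingFunction.OVERTAKE:          ["front", "left_rear", "right_front",
                                        "right_rear", "rear"],
    DrivingFunction.AEB:               ["front"],
    DrivingFunction.RCW:               ["rear"],
}

def select_first_target_streams(state_info, streams_by_view):
    """Look up the first target video streams for every function that is on."""
    views = []
    for func in state_info.active_functions():
        for view in FUNCTION_TO_VIEWS.get(func, []):
            if view not in views:          # de-duplicate shared viewing angles
                views.append(view)
    return [streams_by_view[v] for v in views]
```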
S300, the video display module controls the display to display at least one first target video stream.
Specifically, step S300 further includes:
S310, if there are a plurality of first target video streams, the video display module fuses the plurality of first target video streams into a first key video stream; otherwise, the first target video stream is determined to be the first key video stream.
Here, if there are a plurality of first target video streams, they are fused into the first key video stream for presentation on the display. The fusion steps are as follows:
S311, performing image stitching on the video frames of the first target video streams at the same moment to obtain stitched video frames.
S312, creating rendering threads equal in number to the layers of each stitched video frame; each rendering thread corresponds to the rendering task of one layer, and the rendering tasks of different layers are different. Here, each stitched video frame has several layers, and each layer needs to be rendered differently during image rendering; therefore, a plurality of rendering threads are created, each corresponding to the rendering task of one layer.
S313, controlling each rendering thread to execute its corresponding rendering task to obtain the plurality of rendered layers of each stitched video frame.
S314, merging the plurality of rendered layers of each stitched video frame according to a preset image synthesis technique to obtain rendered video frames. Here, the layers of each stitched video frame are rendered separately and then merged to obtain each rendered video frame.
S315, obtaining the first key video stream from the rendered video frames.
S320, controlling the display to display the first key video stream.
In this embodiment, in order to present first target video streams from a plurality of different viewing angles as one fused display, the video frames of the first target video streams at the same moment are first stitched together; after the images of the same-moment video frames have been stitched, each stitched video frame is rendered so that the video stream plays more smoothly on the display and presents a better visual effect.
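The fusion pipeline of steps S311 to S315 can be sketched as follows. This is a minimal illustration: NumPy arrays stand in for video frames, side-by-side concatenation stands in for the image stitching, and simple averaging stands in for the preset image synthesis technique, none of which are specified by the application.

```python
# Sketch of S311-S315: stitch the same-moment frames from each view, render
# each layer of the stitched frame on its own thread, then composite.
import threading
import numpy as np

def stitch(frames):
    """S311: side-by-side stitching of same-moment frames (H, W, 3 arrays)."""
    return np.concatenate(frames, axis=1)

def render_layers(stitched, layer_tasks):
    """S312-S313: one rendering thread per layer, each with its own task."""
    rendered = [None] * len(layer_tasks)

    def run(i, task):
        rendered[i] = task(stitched)       # each task renders one layer

    threads = [threading.Thread(target=run, args=(i, t))
               for i, t in enumerate(layer_tasks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return rendered

def compose(layers):
    """S314: merge the rendered layers (averaging as a stand-in synthesis)."""
    return np.mean(np.stack(layers), axis=0).astype(np.uint8)

def fuse_first_key_stream(per_view_frames, layer_tasks):
    """S315: yield the first key video stream frame by frame."""
    for frames in zip(*per_view_frames):   # same-moment frames across views
        yield compose(render_layers(stitch(list(frames)), layer_tasks))
```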
In an exemplary embodiment of the present application, as shown in fig. 3, the method further includes:
and S400, if the latest intelligent driving function state information received by the video display module indicates that the working state of at least one intelligent driving function is on in the process of displaying the first key video stream on the display, determining at least one second target video stream from a plurality of video streams according to the at least one intelligent driving function with the current working state being on.
Specifically, the working state of at least one intelligent driving function is currently already on, that is, the display is presenting the first key video stream corresponding to the intelligent driving functions that are on; while the first key video stream is being presented (i.e., during the execution of step S300), at least one further intelligent driving function is newly turned on. If the newly enabled intelligent driving function differs from the intelligent driving functions corresponding to the first key video stream being presented, the video streams corresponding to the newly enabled intelligent driving function are determined as the second target video streams.
S500, if the view angle corresponding to any second target video stream is different from the view angle corresponding to each first target video stream, obtaining a second key video stream according to at least one second target video stream, and controlling the display to display the first key video stream and the second key video stream simultaneously.
Specifically, if the viewing angle corresponding to any second target video stream is different from the viewing angles corresponding to all first target video streams, then: when there are a plurality of second target video streams, they are fused into a second key video stream; when there is only one second target video stream, it is determined as the second key video stream. At this time the first key video stream has not yet finished displaying, so the display is controlled to present the first key video stream and the second key video stream simultaneously. As an example: while the display is presenting the video stream obtained by fusing the left-front and left-rear video streams for a left lane change (the first key video stream), rear collision warning (RCW) is triggered; the rear-view video stream corresponding to RCW is then determined as the second target video stream. Its viewing angle differs from the viewing angles of the first target video streams (left-front and left-rear), so the rear-view video stream corresponding to RCW is determined as the second key video stream.
It should be noted that, if the viewing angles of the video streams corresponding to the at least one newly enabled intelligent driving function are all contained in the viewing angles corresponding to the first key video stream, the presentation time of the first key video stream is simply extended until the newly enabled intelligent driving function turns off. In this case the display time is extended directly instead of fusing the video streams again, which avoids the resources consumed by re-fusing and keeps viewing smooth for the user.
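This decision can be sketched as follows, under the assumption that viewing angles are compared by name; the returned actions are placeholders for the extend/display behavior described above.

```python
# Sketch of the S400/S500 decision: if every viewing angle needed by the
# newly enabled function is already shown, just extend the first key stream.
def on_newly_enabled_function(second_target_views, first_key_views):
    if all(v in first_key_views for v in second_target_views):
        return ("extend_first_key", None)       # no re-fusion, saves resources
    # fuse (if several) or take directly (if one) as the second key stream
    return ("show_second_key", list(second_target_views))
```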
In this application, the display areas corresponding to the first key video stream and the second key video stream on the display are different.
Further, controlling the display to display the first key video stream and the second key video stream simultaneously includes:
s510, acquiring the display priority of the first key video stream and the second key video stream.
Here, the presentation priority of each key video stream may be determined according to its remaining presentation time. As an example, if the remaining presentation time of the first key video stream is 5 seconds and the remaining presentation time of the second key video stream is 40 seconds, the presentation priority of the second key video stream is higher than that of the first key video stream. Alternatively, as an example: the preset presentation priority of the key video stream corresponding to a triggered rear collision warning (Rear Collision Warning, RCW) is higher than the preset presentation priority of the key video stream corresponding to crossing an intersection.
And S520, determining the display areas of the first key video stream and the second key video stream according to the display priority.
Here, the higher the presentation priority, the larger the display area; conversely, the lower the priority, the smaller the area. If one of the key video streams finishes displaying, the other key video stream can be presented full screen.
And S530, controlling the display to display the first key video stream and the second key video stream simultaneously according to the display area.
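As one possible reading of steps S510 to S530, the sketch below derives the priority from each key stream's remaining presentation time (the first rule given above) and splits the screen width proportionally; the proportional split itself is an illustrative assumption, since the application only states that a higher priority means a larger area.

```python
# Sketch of S510-S530: allocate display areas from presentation priority,
# here derived from the remaining presentation time of each key stream.
def split_screen(width, height, remaining_first, remaining_second):
    """Return (x, y, w, h) areas for the first and second key video streams."""
    if remaining_second <= 0:                  # second stream finished:
        return (0, 0, width, height), None     # first goes full screen
    if remaining_first <= 0:
        return None, (0, 0, width, height)
    total = remaining_first + remaining_second
    w1 = int(width * remaining_first / total)  # higher priority -> larger area
    return (0, 0, w1, height), (w1, 0, width - w1, height)

# Example: 5 s left for the first key stream, 40 s for the second -> the
# second key video stream receives the larger display area.
areas = split_screen(1920, 720, 5, 40)
```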
Here, when each key video stream is displayed, the potential collision targets in the video stream can be marked using intelligent driving perception technology, so that the potential collision targets are presented at the same time as the key video stream. The method is as follows: first, the input first key video stream and/or second key video stream is preprocessed (including denoising, data calibration, coordinate conversion and the like); second, target detection is performed on the preprocessed first key video stream and/or second key video stream using computer vision and machine learning algorithms to determine a plurality of obstacles, and each obstacle is given a unique ID; then, the movement trajectory of each obstacle is predicted based on its historical position data, and the obstacles are assigned collision priorities (caution, general, ignore) according to their current positions and predicted movement trajectories. Specifically, if the current position of an obstacle is not within a preset distance in front of the vehicle, not on the lane and not near an intersection, the obstacle is set to ignore; if the obstacle is at a traffic intersection or on the lane and its distance from the vehicle is smaller than the preset distance, it is set to general; if the predicted movement trajectory of the obstacle crosses the predicted movement trajectory of the ego vehicle within the next 5 seconds, it is set to general. Still further, among the obstacles set to general, a predetermined number of obstacles closest to the ego vehicle, in order of distance, are set to caution, i.e., they are the potential collision targets. Finally, a labeling frame is set for each potential collision target, and when the display presents the first key video stream and/or the second key video stream, the potential collision targets and their labeling frames are presented at the same time, so as to remind the user to focus attention.
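The collision-priority assignment just described can be sketched as follows; the distance threshold, the default branch, the trajectory representation and the crossing test are all illustrative assumptions.

```python
# Sketch of the caution/general/ignore classification of detected obstacles.
from enum import Enum

class Priority(Enum):
    IGNORE = 0
    GENERAL = 1
    CAUTION = 2

def crossing(traj_a, traj_b, eps=1.0):
    """Illustrative crossing test: two predicted trajectories (lists of (x, y)
    points over the next 5 s) cross if any same-time points come within eps."""
    return any(abs(ax - bx) < eps and abs(ay - by) < eps
               for (ax, ay), (bx, by) in zip(traj_a, traj_b))

def classify(obstacles, ego_traj, preset_dist=50.0, caution_count=3):
    """obstacles: dicts with 'dist' (metres ahead of the vehicle), 'on_lane',
    'near_intersection' and 'traj' (predicted positions over the next 5 s)."""
    for ob in obstacles:
        if (ob["dist"] >= preset_dist and not ob["on_lane"]
                and not ob["near_intersection"]):
            ob["priority"] = Priority.IGNORE
        elif (ob["on_lane"] or ob["near_intersection"]) and ob["dist"] < preset_dist:
            ob["priority"] = Priority.GENERAL
        elif crossing(ob["traj"], ego_traj):
            ob["priority"] = Priority.GENERAL
        else:
            ob["priority"] = Priority.IGNORE   # default branch: an assumption
    general = sorted((o for o in obstacles if o["priority"] is Priority.GENERAL),
                     key=lambda o: o["dist"])
    for ob in general[:caution_count]:         # closest 'general' obstacles
        ob["priority"] = Priority.CAUTION      # -> potential collision targets
    return obstacles
```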
In this embodiment, for at least one intelligent driving function newly enabled while the first key video stream is being displayed: if the newly enabled function is the same as the functions behind the first key video stream being displayed, or the viewing angles of the video streams corresponding to the newly enabled functions are all contained in the viewing angles of the first key video stream, the first key video stream simply continues to be displayed, without splitting the screen, so viewing is smoother. In this case a marker can be shown on the display indicating the name and start time of the currently executing intelligent driving function, so that the user can clearly determine the vehicle's current state, improving trust in intelligent driving. If the newly enabled intelligent driving function differs from the first key video stream being displayed, the video streams corresponding to the newly enabled function are determined as second target video streams. Further, if the viewing angle of any second target video stream differs from the viewing angles of the first target video streams, a plurality of second target video streams are fused into one second key video stream (when there is only one, it is directly determined as the second key video stream), and the first key video stream and the second key video stream are displayed in split screen. The split-screen display lets the user intuitively follow the changes of the vehicle's intelligent driving functions during intelligent driving, reinforcing the user's confidence in automated driving.
In addition, when the first key video stream and/or the second key video stream is displayed, the potential collision targets and their corresponding labeling frames are presented at the same time, so that the user can pay attention to the objects that may threaten the current vehicle. This both reinforces the user's confidence in the vehicle's intelligent driving and allows the user to judge driving safety from the potential collision targets and take over the vehicle in time when danger may arise.
In an exemplary embodiment of the present application, the cabin driving fusion chip is further provided with a shared memory module; the shared memory module is connected with the intelligent driving control module and the video display module at the same time;
the shared memory module is used for receiving and storing video streams uploaded by a plurality of image sensors on the vehicle, so that the intelligent driving control module and the video display module can acquire each video stream from the shared memory module.
Here, the cabin driving fusion chip (ECU) utilizes Zero-Copy technology so that the intelligent driving function and the DVR function can consume the data directly. The video stream access module creates the shared memory module, and after each power-up of the vehicle the shared memory module sends the storage address corresponding to each video stream to the intelligent driving control module and the video display module. The shared memory module updates the video streams in real time from the cameras around the vehicle body and stores the latest video streams. When the intelligent driving function is turned off, the shared memory in the shared memory module is released.
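The access pattern can be sketched with Python's multiprocessing.shared_memory, as below. The block names, frame layout and single-buffer scheme are illustrative assumptions; a production chip would use the platform's own zero-copy buffers and synchronization primitives.

```python
# Minimal sketch of the zero-copy pattern: the access module writes each
# camera's latest frame into a named shared block; the intelligent driving
# control module and the video display module attach to the same block by
# its storage address (here: the block name) and read without a second copy.
from multiprocessing import shared_memory
import numpy as np

FRAME_SHAPE, FRAME_DTYPE = (720, 1280, 3), np.uint8   # assumed frame layout

def create_stream_block(view):
    """Producer, after power-up: create the block and return its 'address'."""
    size = int(np.prod(FRAME_SHAPE))           # uint8 -> one byte per element
    shm = shared_memory.SharedMemory(create=True, size=size, name=f"cam_{view}")
    return shm, shm.name                        # name is sent to both modules

def write_latest_frame(shm, frame):
    """Producer: overwrite the block with the newest camera frame."""
    np.ndarray(FRAME_SHAPE, FRAME_DTYPE, buffer=shm.buf)[:] = frame

def attach_stream(name):
    """Consumer: map the block by its address; the returned array is a view
    into the shared memory, so no per-frame copy is made."""
    shm = shared_memory.SharedMemory(name=name)
    return shm, np.ndarray(FRAME_SHAPE, FRAME_DTYPE, buffer=shm.buf)
```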
In this embodiment, a shared memory module is provided and connected with the intelligent driving control module and the video display module at the same time, so that both can obtain the corresponding video streams from the shared memory module. Vehicles in the related art are not provided with a shared memory module: while the intelligent driving system performs intelligent driving according to the corresponding video streams, it must also forward them to the intelligent cabin system, because the intelligent cabin system cannot obtain the video streams directly. Vehicles without a shared memory module therefore carry the video streams a second time, and the response time is longer.
Through the arrangement of the shared memory module, this embodiment realizes unified access to the video streams, avoids carrying the video streams a second time, and shortens the response time. Because the corresponding target video stream is displayed as soon as an intelligent driving function is triggered, the user's confidence in automated driving is reinforced; in addition, if an extremely dangerous situation occurs, the user can take over the vehicle in advance according to the target video stream presented on the display, improving driving safety.
In an exemplary embodiment of the present application, a cabin-driving fusion chip is further provided, for implementing the above-mentioned video display control method, where the cabin-driving fusion chip is disposed in a vehicle; an intelligent driving control module and a video display module are arranged in the cabin driving fusion chip; the intelligent driving control module is connected with the video display module; the video display module is also connected with a display;
The intelligent driving control module is used for determining intelligent driving function state information according to a plurality of video streams and sending the intelligent driving function state information to the video display module; the intelligent driving function state information is used for representing the working states of a plurality of intelligent driving functions; the plurality of video streams cover the viewing angles around the entire vehicle body, each video stream corresponding to one viewing angle;
the video display module is used for determining at least one first target video stream from a plurality of video streams according to the intelligent driving function with each working state being on if the intelligent driving function state information is determined to indicate that the working state of at least one intelligent driving function is on; and controlling said display to present at least one of said first target video streams.
The cabin driving fusion chip provided by the embodiment of the application is used for realizing the video display control method and achieving the corresponding technical effects, and is not repeated for brevity of description.
In an exemplary embodiment of the present application, there is also provided a vehicle provided with the above cabin-driving fusion chip.
The vehicle provided by the embodiment of the application is provided with the cabin driving fusion chip, so that the functions of the cabin driving fusion chip can be realized, and the corresponding technical effects can be achieved, so that the description is omitted herein for brevity.
Furthermore, although the steps of the methods in the present disclosure are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order or that all illustrated steps be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a mobile terminal, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that the various aspects of the present application may be implemented as a system, method, or program product. Accordingly, aspects of the present application may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.) or an embodiment combining hardware and software aspects may be referred to herein as a "circuit," module "or" system.
An electronic device according to this embodiment of the present application. The electronic device is only one example and should not impose any limitation on the functionality and scope of use of the embodiments of the present application.
The electronic device is in the form of a general purpose computing device. Components of an electronic device may include, but are not limited to: the at least one processor, the at least one memory, and a bus connecting the various system components, including the memory and the processor.
Wherein the memory stores program code that is executable by the processor to cause the processor to perform steps according to various exemplary embodiments of the present application described in the section "exemplary methods" above in the present specification.
The storage may include readable media in the form of volatile storage, such as Random Access Memory (RAM) and/or cache memory, and may further include Read Only Memory (ROM).
The storage may also include a program/utility having a set (at least one) of program modules including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The bus may be one or more of several types of bus structures including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
The electronic device may also communicate with one or more external devices (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device, and/or with any device (e.g., router, modem, etc.) that enables the electronic device to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface. And, the electronic device may also communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through a network adapter. The network adapter communicates with other modules of the electronic device via a bus. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with an electronic device, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the various aspects of the present application may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to the various exemplary embodiments of the present application as described in the "exemplary methods" section of this specification, when said program product is run on the terminal device.
The program product described above may take the form of any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM or flash memory), optical fiber, portable Compact Disk Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Furthermore, the above-described figures are only illustrative of the processes involved in the method according to exemplary embodiments of the present application, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions easily conceivable by those skilled in the art within the technical scope of the present application should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. The video display control method is characterized by being applied to a cabin driving fusion chip; the cabin driving fusion chip is arranged in the vehicle; an intelligent driving control module and a video display module are arranged in the cabin driving fusion chip; the intelligent driving control module is connected with the video display module; the video display module is also connected with a display;
The method comprises the following steps:
the intelligent driving control module determines intelligent driving function state information according to a plurality of video streams and sends the intelligent driving function state information to the video display module; the intelligent driving function state information is used for representing the working states of a plurality of intelligent driving functions; the plurality of video streams cover the viewing angles around the entire vehicle body, each video stream corresponding to one viewing angle;
if the video display module determines that the intelligent driving function state information indicates that the working state of at least one intelligent driving function is on, determining at least one first target video stream from a plurality of video streams according to the intelligent driving function with each working state being on;
the video display module controls the display to present at least one of the first target video streams.
2. The method according to claim 1, wherein the determining at least one first target video stream from a plurality of video streams according to each intelligent driving function whose working state is on comprises:
determining at least one first target video stream from a plurality of video streams according to a preset video stream function mapping table and an intelligent driving function with each working state being on; the video stream function mapping table comprises mapping relations between each intelligent driving function and at least one video stream.
3. The video display control method of claim 1, wherein the video display module controlling the display to present at least one of the first target video streams comprises:
if there are a plurality of first target video streams, the video display module fuses the plurality of first target video streams into a first key video stream; otherwise, the first target video stream is determined as the first key video stream;
and controlling the display to display the first key video stream.
4. The method according to claim 3, wherein, if there are a plurality of first target video streams, the video display module fusing the plurality of first target video streams into a first key video stream comprises:
performing image stitching on the video frames of the first target video streams at the same moment to obtain stitched video frames;
creating rendering threads equal in number to the layers of each stitched video frame; each rendering thread corresponds to the rendering task of one layer; the rendering tasks of different layers are different;
controlling each rendering thread to execute its corresponding rendering task to obtain a plurality of rendered layers of each stitched video frame;
combining the plurality of rendered layers of each stitched video frame according to a preset image synthesis technique to obtain rendered video frames;
and obtaining the first key video stream from the rendered video frames.
5. The video display control method according to any one of claims 1 to 4, wherein the cabin-driving fusion chip is further provided with a shared memory module; the shared memory module is connected with the intelligent driving control module and the video display module at the same time;
the shared memory module is used for receiving and storing video streams uploaded by a plurality of image sensors on the vehicle, so that each video stream can be acquired from the shared memory module by the intelligent driving control module and the video display module.
6. The video display control method according to claim 5, wherein the shared memory module sends a storage address corresponding to each video stream to the intelligent driving control module and the video display module after each power-up of the vehicle.
7. A video display control method according to claim 3, characterized in that the method further comprises:
if the latest intelligent driving function state information received by the video display module indicates that the working state of at least one intelligent driving function is on in the process of displaying the first key video stream by the display, determining at least one second target video stream from a plurality of video streams according to the at least one intelligent driving function with the current working state being on;
And if the view angle corresponding to any second target video stream is different from the view angle corresponding to each first target video stream, acquiring a second key video stream according to at least one second target video stream, and controlling the display to display the first key video stream and the second key video stream simultaneously.
8. The video display control method according to claim 7, wherein the first key video stream and the second key video stream are different in corresponding display area on the display.
9. The video display control method according to claim 7 or 8, wherein the controlling the display to simultaneously present the first key video stream and the second key video stream includes:
acquiring display priorities of a first key video stream and the second key video stream;
determining the display areas of the first key video stream and the second key video stream according to the display priority;
and controlling the display to display the first key video stream and the second key video stream simultaneously according to the display area.
10. A cabin-driving fusion chip for implementing the video display control method according to any one of claims 1 to 9, the cabin-driving fusion chip being disposed in a vehicle; an intelligent driving control module and a video display module are arranged in the cabin driving fusion chip; the intelligent driving control module is connected with the video display module; the video display module is also connected with a display;
The intelligent driving control module is used for determining intelligent driving function state information according to a plurality of video streams and sending the intelligent driving function state information to the video display module; the intelligent driving function state information is used for representing the working states of a plurality of intelligent driving functions; the plurality of video streams cover the viewing angles around the entire vehicle body, each video stream corresponding to one viewing angle;
the video display module is used for determining at least one first target video stream from a plurality of video streams according to the intelligent driving function with each working state being on if the intelligent driving function state information is determined to indicate that the working state of at least one intelligent driving function is on; and controlling the display to show at least one of the first target video streams.
11. A vehicle provided with a cabin-ride fusion chip as defined in claim 10.
CN202410037700.1A, priority date 2024-01-10, filing date 2024-01-10: Video display control method, cabin driving fusion chip and vehicle. Status: Pending. Published as CN117880467A.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410037700.1A CN117880467A (en) 2024-01-10 2024-01-10 Video display control method, cabin driving fusion chip and vehicle


Publications (1)

Publication Number Publication Date
CN117880467A 2024-04-12

Family

ID=90596591



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination