CN111050153A - Vehicle, vehicle equipment and three-dimensional realization method of vehicle equipment - Google Patents

Vehicle, vehicle equipment and three-dimensional realization method of vehicle equipment

Info

Publication number
CN111050153A
CN111050153A (application number CN201811192114.5A)
Authority
CN
China
Prior art keywords
target image
depth
dimensional
image
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811192114.5A
Other languages
Chinese (zh)
Other versions
CN111050153B (en)
Inventor
朱健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pateo Connect and Technology Shanghai Corp
Original Assignee
Shanghai Pateo Electronic Equipment Manufacturing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Pateo Electronic Equipment Manufacturing Co Ltd
Priority to CN201811192114.5A
Publication of CN111050153A
Application granted
Publication of CN111050153B
Legal status: Active
Anticipated expiration

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application relates to the technical field of display and provides a vehicle, an in-vehicle device, and a three-dimensional stereoscopic realization method for the in-vehicle device. The in-vehicle device acquires a two-dimensional display interface, acquires a corresponding real scene according to that interface, and simulates the real scene; the simulated scene is subjected to third-dimension depth processing, perspective processing, and light-and-shadow processing to obtain a three-dimensional stereoscopic display interface; and the three-dimensional display interface is subjected to operability optimization processing to optimize the interactive operation between the interface and the user. The method can combine a three-dimensional user interface with two-dimensional plane design, creating a new form of interface visual expression with a spatial three-dimensional effect and natural light-and-shadow effects; the visual effect is good, user operability is strong, and user experience is enhanced.

Description

Vehicle, vehicle equipment and three-dimensional realization method of vehicle equipment
Technical Field
The application relates to the technical field of display, and in particular to a three-dimensional realization method for an in-vehicle device, an in-vehicle device adopting the method, and a vehicle.
Background
At present, automobile ownership keeps rising, which makes vehicle safety an increasingly important concern. In recent years, map display devices, typified by in-vehicle devices, have become increasingly popular and the related technology has advanced. In particular, as display screens have grown in size and resolution, recent in-vehicle devices can perform split display (for example, a left-right dual-screen layout), so the user can obtain more information from the vehicle-mounted equipment and enjoy a better visual experience.
Specifically, in the heading-up display mode, the in-vehicle device can divide the display screen into left and right sub-screens, displaying a map around the vehicle's current location on one sub-screen and a map of the area further ahead (toward the top of the screen, that is, in the vehicle's traveling direction) on the other, thereby expanding the displayed map range on the traveling-direction side of the vehicle.
However, besides the heading-up mode, in-vehicle devices often use display modes in which a predetermined direction points upward, for example north-up.
In summary, existing display modes cannot give users a good experience at the display level. The two-dimensional display mode in particular is hard to interpret: when used for navigation, it has an obvious recognition defect that prevents users from reading the navigation quickly.
Aiming at these defects in the prior art, the inventor of the application proposes, after in-depth research, a vehicle, an in-vehicle device, and a three-dimensional realization method for the in-vehicle device.
Disclosure of Invention
An object of the application is to provide a vehicle, an in-vehicle device, and a three-dimensional realization method for the in-vehicle device that combine a three-dimensional user interface with two-dimensional plane design, creating a new form of interface visual expression with a spatial three-dimensional effect and natural light-and-shadow effects; the result has a good visual effect, strong user operability, and enhanced user experience.
In order to solve the technical problem, the present application provides a method for realizing three-dimensional stereoscopic display on an in-vehicle device. As one embodiment, the method includes the steps of:
the vehicle-mounted equipment acquires a two-dimensional display interface;
acquiring a corresponding reality scene according to the two-dimensional display interface, and simulating the reality scene;
performing third-dimension depth processing, perspective processing and light and shadow processing on the simulated scene to obtain a three-dimensional display interface;
and performing operability optimization processing on the three-dimensional display interface to optimize the interactive operation performance with the user.
As an embodiment, the third dimension depth processing specifically includes:
and combining the two-dimensional display interface and the real scene, displaying in a three-dimensional stereo mode, and highlighting the depth space of the third dimension.
As an embodiment, the performing operability optimization processing on the three-dimensional stereoscopic display interface specifically includes:
and processing the simulated scene to obtain a visual simulation effect similar to a real object.
As one embodiment, after the step of performing operability optimization processing on the three-dimensional stereoscopic display interface to optimize the interaction performance with the user, the method further includes:
receiving first and second target image data representing first and second target images for simultaneous display on the three-dimensional stereoscopic display interface, wherein at least one of the first and second target image data represents three-dimensional stereoscopic image data;
determining a perceived depth of the first target image and determining a perceived depth of the second target image, wherein at least one of determining the perceived depth of the first target image and determining the perceived depth of the second target image is obtained by reading metadata associated with at least one of the first target image and the second target image or by depth analysis of at least one of the first target image data and the second target image data;
converting the perceived depth of at least one of the first and second target image data by compressing the perceived depth of at least one of the first and second target image data in the presence of depth overlap of the first and second target image data;
converting at least one of the first and second target image data so that, when viewed on a stereoscopic display, the second target image is perceived at a depth that places it entirely between the first target image and the user.
As an implementation manner, the step of acquiring a corresponding reality scene according to the two-dimensional display interface specifically includes:
acquiring a depth reality scene video and color reality scene data by using a depth camera; when a new frame of color/depth image data is received, storing the image data into an array and starting a new thread to perform image processing on the current color image data asynchronously, wherein the image processing specifically comprises drawing the pixel data of the current picture into a bitmap of the user interface; and filtering out part of the images of the depth reality scene video and the color video according to the detailed information of each frame of the two videos, thereby realizing the filtering processing;
converting the filtered depth reality scene video into three-dimensional point cloud data by combining a plane coordinate and a depth value, and establishing a virtual scene model according to the three-dimensional point cloud data;
acquiring a color corresponding to each point in the three-dimensional point cloud data from the filtered color real scene data, and applying the color to the virtual scene model to obtain a color virtual scene model;
and converting the data of the color virtual scene model into a 3D file format.
As an embodiment, the step of acquiring the depth reality scene video and the color reality scene data by using the depth camera specifically includes the following processes:
acquiring a depth reality scene video and color reality scene data using the depth camera;
reading video streams of the depth reality scene video and color reality scene data collected by the depth camera;
and reading the detailed information of each frame of image of the depth reality scene video and the color video collected by the depth camera, and filtering partial images of the depth reality scene video and the color video according to the detailed information of each frame of image.
As one embodiment, after the step of performing operability optimization processing on the three-dimensional stereoscopic display interface to optimize the interaction performance with the user, the method further includes:
the in-vehicle device receives a signal for switching the screen to a first target image;
calculating the relation between the first target image and the currently displayed second target image according to the channel order;
outputting a part of the image of the first target image onto an area where an image of the second target image is output, based on the calculated relationship;
outputting the image of the first target image without outputting the image of the second target image; or outputting a reduced screen of the second target image to the screen and outputting the first target image so that it is connected to the second target image.
As one embodiment, the step of outputting the reduced screen of the second target image to the screen and outputting the first target image so as to connect it to the second target image further includes:
outputting the image of the first target image in an up-down direction or a left-right direction of the image of the second target image so that the image of the first target image overlaps the image of the second target image;
or, the image of the first target image is output in the up-down direction or the left-right direction of the second target image, so that a part of the image of the second target image disappears from the screen according to the output of the first target image.
In order to solve the technical problem, the present application further provides an in-vehicle device. As one embodiment, the in-vehicle device includes a processor, and the processor is configured to execute program data so as to implement the three-dimensional realization method of the in-vehicle device described above.
In order to solve the technical problem, the present application further provides a vehicle. As one embodiment, the vehicle is provided with the above in-vehicle device.
According to the vehicle, the in-vehicle device, and the three-dimensional realization method of the in-vehicle device provided above, the in-vehicle device acquires a two-dimensional display interface, acquires a corresponding real scene according to that interface, and simulates the real scene; the simulated scene is subjected to third-dimension depth processing, perspective processing, and light-and-shadow processing to obtain a three-dimensional stereoscopic display interface; and the three-dimensional display interface is subjected to operability optimization processing to optimize the interactive operation between the interface and the user. The application can thus combine a three-dimensional user interface with two-dimensional plane design, creating a new form of interface visual expression with a spatial three-dimensional effect and natural light-and-shadow effects; the visual effect is good, user operability is strong, and user experience is enhanced.
The foregoing is only an overview of the technical solutions of the present application. To make the technical means of the present application clearer, so that it can be implemented according to the contents of the description, and to make the above and other objects, features, and advantages of the present application more readily understandable, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic flow chart of an embodiment of the three-dimensional realization method for the in-vehicle device according to the present application.
Fig. 2 is a schematic structural diagram of an embodiment of the in-vehicle device according to the present application.
Fig. 3 is a schematic effect diagram of an embodiment of the three-dimensional realization method for the in-vehicle device according to the present application.
Detailed Description
To further clarify the technical measures and effects adopted by the present application to achieve the intended purpose, the following detailed description of the embodiments, methods, steps, features and effects of the present application will be made with reference to the accompanying drawings and preferred embodiments.
The foregoing and other technical matters, features and effects of the present application will be apparent from the following detailed description of preferred embodiments, which is to be read in connection with the accompanying drawings. While the present application has been described in terms of specific embodiments and examples for achieving the desired objects and objectives, it is to be understood that the invention is not limited to the disclosed embodiments, but is to be accorded the widest scope consistent with the principles and novel features as defined by the appended claims.
Referring to fig. 1 and fig. 3: fig. 1 is a schematic flowchart illustrating an embodiment of the three-dimensional realization method for the in-vehicle device, and fig. 3 is a schematic diagram illustrating the effect of that embodiment.
It should be particularly noted that the three-dimensional stereoscopic implementation method of the in-vehicle device according to the present embodiment may include, but is not limited to, the following steps.
Step S101, the in-vehicle device acquires a two-dimensional display interface;
step S102, acquiring a corresponding reality scene according to the two-dimensional display interface, and simulating the reality scene;
step S103, performing third-dimension depth processing, perspective processing and light and shadow processing on the simulated scene to obtain a three-dimensional stereoscopic display interface;
and step S104, performing operability optimization processing on the three-dimensional display interface to optimize the interactive operation performance between the three-dimensional display interface and a user.
The three-dimensional display interface of the present embodiment is realized as shown in fig. 3.
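To make the flow concrete, the four steps can be read as a short pipeline. The sketch below is only an illustration of that ordering; every function and object name in it is a hypothetical placeholder, not an API defined by this application:

    # Hypothetical sketch of the S101-S104 pipeline; none of these names
    # come from the patent or from any real head-unit SDK.
    def realize_3d_interface(head_unit):
        ui_2d = head_unit.get_2d_interface()    # S101: acquire the 2D display interface
        scene = simulate_real_scene(ui_2d)      # S102: acquire and simulate the real scene
        ui_3d = apply_depth(scene)              # S103: third-dimension depth processing,
        ui_3d = apply_perspective(ui_3d)        #       perspective processing,
        ui_3d = apply_lighting(ui_3d)           #       and light-and-shadow processing
        return optimize_operability(ui_3d)      # S104: operability optimization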
The third-dimension depth processing according to this embodiment may specifically include: and combining the two-dimensional display interface and the real scene, displaying in a three-dimensional stereo mode, and highlighting the depth space of the third dimension.
It should be noted that, in the present embodiment, the performing an operability optimization process on the three-dimensional stereoscopic display interface specifically includes: and processing the simulated scene to obtain a visual simulation effect similar to a real object.
In a specific embodiment, after the step of performing operability optimization processing on the three-dimensional stereoscopic display interface to optimize the interaction performance with the user, the method may further include: receiving first and second target image data representing first and second target images to be displayed simultaneously on the three-dimensional stereoscopic display interface, wherein at least one of the first and second target image data represents three-dimensional stereoscopic image data; determining a perceived depth of the first target image and a perceived depth of the second target image, wherein at least one of the two is obtained by reading metadata associated with the corresponding target image or by depth analysis of the corresponding target image data; when the depths of the first and second target image data overlap, converting the perceived depth of at least one of them by compressing it; and converting at least one of the first and second target image data so that, when viewed on a stereoscopic display, the second target image is perceived at a depth that places it entirely between the first target image and the user.
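The depth-compression step can be illustrated with a small numeric sketch. It assumes perceived depth is a positive scalar where smaller values appear closer to the viewer, and it uses a simple linear rescaling, which is one possible choice rather than the method fixed by this application:

    def place_in_front(first, second):
        """first, second: (near, far) perceived-depth intervals, positive,
        smaller = closer to the viewer. Compresses the second interval so it
        ends no deeper than the near plane of the first, i.e. the second
        image is perceived entirely between the first image and the user."""
        f_near, _f_far = first
        s_near, s_far = second
        if s_far <= f_near:            # already fully in front: no depth overlap
            return second
        scale = f_near / s_far         # linear depth compression (an assumption)
        return (s_near * scale, s_far * scale)

    # Example: the intervals (5.0, 9.0) and (4.0, 7.0) overlap in depth,
    # so the second is compressed to about (2.86, 5.0).
    print(place_in_front((5.0, 9.0), (4.0, 7.0)))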
Further, in this embodiment, the step of acquiring the corresponding real scene according to the two-dimensional display interface may specifically include: acquiring a depth reality scene video and color reality scene data by using a depth camera; when a new frame of color/depth image data is received, storing the image data into an array and starting a new thread to perform image processing on the current color image data asynchronously, where the image processing specifically includes drawing the pixel data of the current picture into a bitmap of the user interface, and filtering out part of the images of the depth reality scene video and the color video according to the detailed information of each frame of the two videos, thereby realizing the filtering processing; converting the filtered depth reality scene video into three-dimensional point cloud data by combining plane coordinates and depth values, and establishing a virtual scene model from the point cloud data; obtaining the color corresponding to each point of the point cloud from the filtered color reality scene data and applying it to the virtual scene model to obtain a colored virtual scene model; and converting the data of the colored virtual scene model into a 3D (three-dimensional) file format.
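For the point-cloud step, a minimal back-projection sketch is given below, assuming a standard pinhole camera model; the intrinsics fx, fy, cx, cy would come from the depth camera's calibration, which this application does not specify:

    import numpy as np

    def frame_to_colored_cloud(depth, color, fx, fy, cx, cy):
        """Back-project one filtered depth frame (H x W, metres) into an
        N x 3 point cloud with the standard pinhole model and attach the
        colour of each pixel (color: H x W x 3)."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # plane coordinates
        z = depth                                       # depth values
        x = (u - cx) * z / fx           # combine plane coordinate and depth
        y = (v - cy) * z / fy
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        colors = color.reshape(-1, 3)
        valid = points[:, 2] > 0        # drop pixels removed by the filtering
        return points[valid], colors[valid]

The returned points and colours can then be written out in a common 3D file format, for example PLY (the choice of format is an assumption; the text only requires "a 3D file format").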
It should be particularly noted that the step of acquiring the depth reality scene video and the color reality scene data by using the depth camera in this embodiment may specifically include: acquiring the depth reality scene video and the color reality scene data using the depth camera; reading the video streams of the depth reality scene video and the color reality scene data collected by the camera; and reading the detailed information of each frame of the two videos and filtering out part of their images according to that per-frame information.
It should be noted that, after the step of performing operability optimization processing on the three-dimensional stereoscopic display interface to optimize the interaction performance with the user, the method of this embodiment may further include: the in-vehicle device receives a signal for switching the screen to a first target image; the relation between the first target image and the currently displayed second target image is calculated according to the channel order; based on the calculated relation, part of the image of the first target image is output onto the area where the image of the second target image is output; and either the image of the first target image is output without outputting the image of the second target image, or a reduced screen of the second target image is output to the screen and the first target image is output so as to be connected to the second target image.
In a specific display process, the step of outputting the reduced screen of the second target image to the screen and outputting the first target image so as to connect it with the second target image may further include: outputting the image of the first target image above, below, or beside the image of the second target image so that the two images overlap; or outputting the image of the first target image above, below, or beside the second target image so that part of the image of the second target image disappears from the screen as the first target image is output.
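A possible geometry for this transition is sketched below; the shrink ratio and the (x, y, width, height) rectangle convention are assumptions used only for illustration:

    def transition_layout(screen_w, screen_h, direction="left-right", shrink=0.6):
        """Reduce the currently displayed (second) image and place the
        incoming (first) image beside it; rectangles are (x, y, w, h)."""
        if direction == "left-right":
            second = (0, 0, int(screen_w * shrink), screen_h)       # reduced current image
            first = (second[2], 0, screen_w - second[2], screen_h)  # incoming image beside it
        else:  # "up-down"
            second = (0, 0, screen_w, int(screen_h * shrink))
            first = (0, second[3], screen_w, screen_h - second[3])
        return first, second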
It can be understood that, through the above embodiment, a two-dimensional UI can be designed to be displayed in three-dimensional form, highlighting the depth space of the third dimension; simulating real-scene shooting enhances the perspective effect and presents an all-around space, avoiding the limitations of a fixed scene and extending the user's control over the interface.
Specifically, the three-dimensional UI of this embodiment combines two-dimensional plane design with three-dimensional rendering to create a new form of interface visual expression that has a spatial three-dimensional effect and natural light-and-shadow effects. At the same time, this embodiment has the visual effect of simulating real objects, offering both realism and operability, which facilitates the user's interactive operation. In addition, the scene-UI design of this embodiment enables a new kind of user experience: the user enjoys an immersive visual experience, the three-dimensional panoramic display conveys a sense of closeness, and the user can browse the information in the interface thoroughly without being limited by the viewing angle.
Through the above embodiments, a three-dimensional user interface can be combined with two-dimensional plane design to create a new form of interface visual expression with a spatial three-dimensional effect and natural light-and-shadow effects; the visual effect is good, user operability is strong, and user experience is enhanced.
Referring to fig. 2 in conjunction with the above embodiments, fig. 2 is a schematic structural diagram of an embodiment of the in-vehicle device according to the present application.
In this embodiment, the present application provides an in-vehicle device. As one embodiment, the in-vehicle device includes a processor 21, and the processor 21 is configured to execute program data so as to implement the three-dimensional realization method of the in-vehicle device shown in fig. 1 and its embodiments.
It is worth mentioning that the in-vehicle device of this embodiment may also be provided with a communication module that supports 3G, 4G, and 5G networks, WIFI, and/or Bluetooth.
It should be noted that, in this embodiment, the in-vehicle device displays the three-dimensional scene display interface on a physical screen or by virtual-screen projection.
With reference to fig. 2, the present application further provides, as one embodiment, a vehicle configured with the in-vehicle device shown in fig. 2 and its embodiments; the vehicle may establish a network connection with a cloud server through the in-vehicle device.
It should be noted that, in this embodiment, the in-vehicle device, the vehicle, and the cloud server may all connect to one another using WIFI or 5G technology, for example over a 5G Internet-of-Vehicles network. The 5G technology adopted here is scenario-oriented: it plays a key supporting role for the vehicle while connecting people, objects, and vehicles, and it can be built around the following three typical application scenarios.
The first is eMBB (enhanced Mobile Broadband), which provides a user-experienced rate of 0.1-1 Gbps, a peak rate of 10 Gbps, and a traffic density of 10 Tbps/km².
The second is uRLLC (ultra-reliable low-latency communication), whose main achievable indicators are an end-to-end latency at the millisecond level and reliability approaching 100%.
The third is mMTC (massive machine-type communication), whose main achievable indicator is connection density: one million terminals per square kilometre, i.e. 10^6/km².
In this way, the application exploits the ultra-reliable, low-latency characteristics of 5G: combined with, for example, radar and cameras, these can provide display capability for the vehicle and enable interaction with it. At the same time, using the interactive sensing capability of 5G, the user can act on the external environment, detect its state, and receive feedback. Further, the application may also be applied to cooperative automated driving, such as vehicle platooning.
In addition, 5G can be used to enhance the perception capability of automated driving through communication, and it can satisfy passengers' demands for in-vehicle infotainment such as AR (augmented reality)/VR (virtual reality), games, movies, and mobile office, as well as for high precision. In this application, the download volume of a centimetre-level 3D high-precision positioning map may be 3-4 Gb/km; with a normal speed limit of 120 km/h (kilometres per hour), this corresponds to a map data rate of 90-120 Mbps. Real-time reconstruction of a local map fused with on-board sensor information, and modelling and analysis of dangerous situations, can also be supported.
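As a quick sanity check of those figures (assuming Gb denotes gigabits), the per-second map data rate follows directly from the per-kilometre volume and the speed:

    # Back-of-envelope check of the figures quoted above (illustrative only).
    map_density_gb_per_km = (3.0, 4.0)   # HD-map download volume, Gb per km
    speed_kmh = 120                      # the speed limit used in the text

    for d in map_density_gb_per_km:
        mbps = d * 1000 * speed_kmh / 3600   # Gb/km x km/h -> Mb/s
        print(f"{d} Gb/km at {speed_kmh} km/h -> {mbps:.0f} Mbps")
    # Output: 100 Mbps and 133 Mbps, the same order as the quoted 90-120 Mbps.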
In the present application, the three-dimensional realization method for the in-vehicle device can be applied to a vehicle system having the in-vehicle device or a vehicle TBOX, and can also be connected to the CAN bus of the vehicle.
In one embodiment, the CAN bus may include three network channels, CAN_1, CAN_2, and CAN_3, and the vehicle may be provided with one Ethernet channel. The three CAN channels may be connected to the Ethernet channel through, for example, two Internet-of-Vehicles gateways. The CAN_1 channel carries a hybrid powertrain system; the CAN_2 channel carries an operation support system; the CAN_3 channel carries an electric dynamometer system; and the Ethernet channel carries a high-level management system comprising, as nodes connected to the Ethernet channel, a man-vehicle-road simulation system and a comprehensive information acquisition unit. The gateways linking the CAN_1 and CAN_2 channels to the Ethernet channel may be integrated in the comprehensive information acquisition unit, and the gateway linking the CAN_3 channel to the Ethernet channel may be integrated in the man-vehicle-road simulation system.
Further, the nodes connected to the CAN_1 channel include: an engine ECU (Electronic Control Unit), a motor MCU, a battery BMS (Battery Management System), an automatic transmission TCU (Transmission Control Unit), and a hybrid controller HCU (Hybrid Control Unit). The nodes connected to the CAN_2 channel are: a bench measurement and control system, a throttle sensor group, a power analyzer, an instantaneous fuel-consumption meter, a DC power-supply cabinet, an engine water-temperature control system, an engine oil-temperature control system, a motor water-temperature control system, and an engine intercooler-temperature control system. The node connected to the CAN_3 channel is an electric dynamometer processor.
Preferably, the CAN_1 channel runs at 250 Kbps using the J1939 protocol; the CAN_2 channel runs at 500 Kbps using the CANopen protocol; the CAN_3 channel runs at 1 Mbps using the CANopen protocol; and the Ethernet channel runs at 10/100 Mbps using the TCP/IP protocol.
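Restated as a configuration table, the topology above looks roughly as follows; the dictionary layout is an illustrative summary, not a structure defined by this application:

    # Summary of the bus topology described above (illustrative only).
    CAN_CHANNELS = {
        "CAN_1": {"rate": "250 Kbps", "protocol": "J1939",
                  "nodes": ["engine ECU", "motor MCU", "battery BMS",
                            "transmission TCU", "hybrid HCU"]},
        "CAN_2": {"rate": "500 Kbps", "protocol": "CANopen",
                  "nodes": ["bench measurement and control", "throttle sensor group",
                            "power analyzer", "fuel-consumption meter",
                            "DC power cabinet", "temperature control systems"]},
        "CAN_3": {"rate": "1 Mbps", "protocol": "CANopen",
                  "nodes": ["electric dynamometer processor"]},
        "ETHERNET": {"rate": "10/100 Mbps", "protocol": "TCP/IP",
                     "nodes": ["man-vehicle-road simulation system",
                               "comprehensive information acquisition unit"]},
    }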
In one embodiment, the vehicle networking gateway may be equipped with an IEEE802.3 interface, a DSPI interface, an eSCI interface, a CAN interface, an MLB interface, a LIN interface, and/or an I2C interface.
In one embodiment, the IEEE802.3 interface may be used, for example, to connect a wireless router and provide a WIFI network for the whole vehicle; the DSPI interface is used to connect a Bluetooth adapter and an NFC (near-field communication) adapter, providing Bluetooth and NFC connections; the eSCI interface is used to connect the 4G/5G module and communicate with the Internet; the CAN interface is used to connect the vehicle CAN bus; the MLB interface is used to connect the in-vehicle MOST (Media Oriented Systems Transport) bus; the LIN interface is used to connect the in-vehicle LIN (Local Interconnect Network) bus; and the I2C interface is used to connect a DSRC (dedicated short-range communication) module and a fingerprint identification module. In addition, the application may adopt an MPC5668G chip to convert between the different protocols and thus bridge the different networks.
In addition, regarding the vehicle TBOX system: the Telematics BOX in this embodiment is referred to simply as the vehicle TBOX or Telematics.
Telematics is a compound of telecommunications and informatics. It is defined as a service system that provides information through a computer system built into the vehicle, wireless communication technology, satellite navigation devices, and Internet technology for exchanging information such as text and voice. In short, the vehicle connects to the Internet (an Internet-of-Vehicles system) through a wireless network, which provides the owner with the various kinds of information needed for driving and daily life.
In addition, Telematics combines wireless communication, satellite navigation, network communication, and on-board computing. When a fault occurs while the vehicle is running, the service centre can be reached over the wireless link for remote diagnosis, and the computer built into the engine records the state of the main components, giving maintenance staff an accurate fault location and cause at any time. Through the user communication terminal, the vehicle can receive information and consult traffic maps, road-condition reports, traffic information, safety and security services, entertainment services, and so on; in addition, the vehicle of this embodiment may provide electronic games and network applications for the rear seats. It is easy to see that providing services through Telematics makes it convenient for the user to learn traffic information and the parking situation of nearby car parks and to confirm the current position; the terminal can also connect to a home network server to keep track of the running state of household appliances, home security, visiting guests, and so on.
In one embodiment, the vehicle may further be provided with an Advanced Driver Assistance System (ADAS), which uses the various sensors mounted on the vehicle to collect environmental data inside and outside the vehicle in real time and to identify, detect, and track static and dynamic objects, so that the driver notices potential dangers as early as possible, improving safety. Correspondingly, the ADAS of the present application may also employ radar, laser, and ultrasonic sensors, which can detect light, heat, pressure, or other variables used to monitor the vehicle state; these are usually located on the front and rear bumpers, the side mirrors, inside the steering column, or on the windshield. Evidently, the various intelligent hardware used by the ADAS functions can access the Internet-of-Vehicles system via an Ethernet link to realize communication and interaction.
The host of the vehicle in this embodiment may comprise suitable logic, circuitry, and/or code enabling operation of the upper five layers of the OSI model (Open Systems Interconnection reference model). The host may therefore generate and/or process packets for transmission over the network and process packets received from it. At the same time, by executing corresponding instructions and/or running one or more applications, the host may provide services to a local user and/or one or more remote users or network nodes. In various embodiments of the present application, the host may employ one or more security protocols.
In one embodiment, the network connection of the Internet-of-Vehicles system may be a switch, which may provide AVB functionality (Audio Video Bridging, meeting the IEEE802.1 standard set), and/or may include one or more unshielded twisted pairs, each with an 8P8C modular connector.
In a preferred embodiment, the vehicle networking system specifically comprises a vehicle body control module BCM, a power bus P-CAN, a vehicle body bus I-CAN, a combination meter CMIC, a chassis control device and a vehicle body control device.
In this embodiment, the body control module BCM may integrate the functions of the Internet-of-Vehicles gateway, performing signal conversion, message forwarding, and the like between the different network segments, i.e. the power bus P-CAN and the body bus I-CAN. For example, if a processor attached to the power bus needs to communicate with a processor attached to the body bus I-CAN, the BCM performs the signal conversion and message forwarding between the two.
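A minimal sketch of this gateway role is shown below; the class and method names are hypothetical, and real signal conversion would depend on the message definitions of each network segment:

    # Sketch of the BCM forwarding frames between P-CAN and I-CAN (assumed API).
    class BodyControlModule:
        def __init__(self, p_can, i_can):
            self.buses = {"P-CAN": p_can, "I-CAN": i_can}

        def forward(self, frame, src):
            dst = "I-CAN" if src == "P-CAN" else "P-CAN"
            self.buses[dst].send(self.convert(frame, src, dst))  # message forwarding

        def convert(self, frame, src, dst):
            # Placeholder for signal conversion between the two segments.
            return frame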
The power bus P-CAN and the vehicle body bus I-CAN are respectively connected with a vehicle body control module BCM.
The combination meter CMIC is connected to both the power bus P-CAN and the vehicle body bus I-CAN. Preferably, because the CMIC of this embodiment is attached to the different buses directly, it can obtain information from a processor on either bus without signal conversion and message forwarding through the body control module BCM, which reduces gateway pressure, lowers the network load, and speeds up information acquisition by the CMIC.
The chassis control device is connected to the power bus P-CAN, and the vehicle body control device is connected to the vehicle body bus I-CAN. In some examples, the chassis control device and the body control device can broadcast information onto the power bus P-CAN and the body bus I-CAN respectively, so that other on-board processors and devices attached to either bus can receive the broadcast information, realizing communication between on-board devices such as different processors.
In addition, the Internet-of-Vehicles system of this embodiment can use two CAN buses, the power bus P-CAN and the body bus I-CAN, with the body control module BCM as gateway and the combination meter CMIC connected to both buses. This removes the step, required in the traditional design where the CMIC hangs on only one of the two buses, of forwarding information from the chassis or body control device to the CMIC through the gateway. The pressure on the BCM acting as gateway is therefore relieved, the network load is reduced, and information from on-board devices attached to several buses, such as the P-CAN and the I-CAN, can be sent to the CMIC for display with strong real-time performance.
Although the present application has been described with reference to preferred embodiments, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the spirit and scope of the application, and all such changes, substitutions, and alterations are to be understood as falling within the scope of protection of the present application.

Claims (10)

1. A three-dimensional realization method for an in-vehicle device, characterized by comprising the following steps:
the vehicle-mounted equipment acquires a two-dimensional display interface;
acquiring a corresponding reality scene according to the two-dimensional display interface, and simulating the reality scene;
performing third-dimension depth processing, perspective processing and light and shadow processing on the simulated scene to obtain a three-dimensional display interface;
and performing operability optimization processing on the three-dimensional display interface to optimize the interactive operation performance with the user.
2. The three-dimensional realization method for the in-vehicle device according to claim 1, wherein the third-dimension depth processing specifically comprises:
and combining the two-dimensional display interface and the real scene, displaying in a three-dimensional stereo mode, and highlighting the depth space of the third dimension.
3. The three-dimensional realization method for the in-vehicle device according to claim 1, wherein the performing operability optimization processing on the three-dimensional display interface specifically comprises:
and processing the simulated scene to obtain a visual simulation effect similar to a real object.
4. The three-dimensional realization method for the in-vehicle device according to any one of claims 1 to 3, wherein after the step of performing operability optimization processing on the three-dimensional display interface to optimize the interaction performance with the user, the method further comprises:
receiving first and second target image data representing first and second target images for simultaneous display on the three-dimensional stereoscopic display interface, wherein at least one of the first and second target image data represents three-dimensional stereoscopic image data;
determining a perceived depth of the first target image and determining a perceived depth of the second target image, wherein at least one of determining the perceived depth of the first target image and determining the perceived depth of the second target image is obtained by reading metadata associated with at least one of the first target image and the second target image or by depth analysis of at least one of the first target image data and the second target image data;
converting the perceived depth of at least one of the first and second target image data by compressing the perceived depth of at least one of the first and second target image data in the presence of depth overlap of the first and second target image data;
converting at least one of the first and second target image data so that, when viewed on a stereoscopic display, the second target image is perceived at a depth that places it entirely between the first target image and the user.
5. The three-dimensional realization method for the in-vehicle device according to any one of claims 1 to 3, wherein the step of acquiring the corresponding real scene according to the two-dimensional display interface specifically comprises:
acquiring a depth reality scene video and color reality scene data by using a depth camera; when a new frame of color/depth image data is received, storing the image data into an array and starting a new thread to perform image processing on the current color image data asynchronously, wherein the image processing specifically comprises drawing the pixel data of the current picture into a bitmap of the user interface; and filtering out part of the images of the depth reality scene video and the color video according to the detailed information of each frame of the two videos, thereby realizing the filtering processing;
converting the filtered depth reality scene video into three-dimensional point cloud data by combining a plane coordinate and a depth value, and establishing a virtual scene model according to the three-dimensional point cloud data;
acquiring a color corresponding to each point in the three-dimensional point cloud data from the filtered color real scene data, and applying the color to the virtual scene model to obtain a color virtual scene model;
and converting the data of the color virtual scene model into a 3D file format.
6. The three-dimensional realization method for the in-vehicle device according to claim 5, wherein the step of acquiring the depth reality scene video and the color reality scene data by using the depth camera specifically comprises the following processes:
acquiring a depth reality scene video and color reality scene data using the depth camera;
reading video streams of the depth reality scene video and color reality scene data collected by the depth camera;
and reading the detailed information of each frame of image of the depth reality scene video and the color video collected by the depth camera, and filtering partial images of the depth reality scene video and the color video according to the detailed information of each frame of image.
7. The three-dimensional realization method for the in-vehicle device according to any one of claims 1 to 3, wherein after the step of performing operability optimization processing on the three-dimensional display interface to optimize the interaction performance with the user, the method further comprises:
the in-vehicle device receives a signal for switching the screen to a first target image;
calculating the relation between the first target image and the currently displayed second target image according to the channel order;
outputting a part of the image of the first target image onto an area where an image of the second target image is output, based on the calculated relationship;
outputting the image of the first target image without outputting the image of the second target image; or outputting a reduced screen of the second target image to the screen and outputting the first target image so that it is connected to the second target image.
8. The three-dimensional realization method for the in-vehicle device according to claim 7, wherein the step of outputting the reduced screen of the second target image to the screen and outputting the first target image so as to connect it to the second target image further comprises:
outputting the image of the first target image in an up-down direction or a left-right direction of the image of the second target image so that the image of the first target image overlaps the image of the second target image;
or, the image of the first target image is output in the up-down direction or the left-right direction of the second target image, so that a part of the image of the second target image disappears from the screen according to the output of the first target image.
9. An in-vehicle device, characterized by comprising a processor, wherein the processor is configured to execute program data to implement the three-dimensional realization method for the in-vehicle device according to any one of claims 1 to 8.
10. A vehicle, characterized in that the vehicle is provided with the in-vehicle device according to claim 9.
CN201811192114.5A 2018-10-12 2018-10-12 Vehicle, vehicle equipment and three-dimensional realization method of vehicle equipment Active CN111050153B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811192114.5A CN111050153B (en) 2018-10-12 2018-10-12 Vehicle, vehicle equipment and three-dimensional realization method of vehicle equipment

Publications (2)

Publication Number Publication Date
CN111050153A 2020-04-21
CN111050153B 2022-07-29

Family

ID=70229878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811192114.5A Active CN111050153B (en) 2018-10-12 2018-10-12 Vehicle, vehicle equipment and three-dimensional realization method of vehicle equipment

Country Status (1)

Country Link
CN (1) CN111050153B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2493202A2 (en) * 2011-02-24 2012-08-29 Nintendo Co., Ltd. Display control program, display control apparatus, display control system, and display control method
CN106157359A (en) * 2015-04-23 2016-11-23 中国科学院宁波材料技术与工程研究所 A kind of method for designing of virtual scene experiencing system
CN106445370A (en) * 2015-06-07 2017-02-22 苹果公司 Devices and methods for navigating between user interfaces
US20170094227A1 (en) * 2015-09-25 2017-03-30 Northrop Grumman Systems Corporation Three-dimensional spatial-awareness vision system
US20180192032A1 (en) * 2016-04-08 2018-07-05 Maxx Media Group, LLC System, Method and Software for Producing Three-Dimensional Images that Appear to Project Forward of or Vertically Above a Display Medium Using a Virtual 3D Model Made from the Simultaneous Localization and Depth-Mapping of the Physical Features of Real Objects

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115220842A (en) * 2022-07-21 2022-10-21 亿咖通(湖北)技术有限公司 Vehicle navigation method, system, vehicle and storage medium

Also Published As

Publication number Publication date
CN111050153B (en) 2022-07-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: Room 208, building 4, 1411 Yecheng Road, Jiading District, Shanghai, 201821
Applicant after: Botai vehicle networking technology (Shanghai) Co.,Ltd.
Address before: Room 208, building 4, 1411 Yecheng Road, Jiading District, Shanghai, 201821
Applicant before: SHANGHAI PATEO ELECTRONIC EQUIPMENT MANUFACTURING Co.,Ltd.
GR01 Patent grant
CP03 Change of name, title or address
Address after: Room 3701, No. 866 East Changzhi Road, Hongkou District, Shanghai, 200080
Patentee after: Botai vehicle networking technology (Shanghai) Co.,Ltd.
Country or region after: China
Address before: Room 208, building 4, 1411 Yecheng Road, Jiading District, Shanghai, 201821
Patentee before: Botai vehicle networking technology (Shanghai) Co.,Ltd.
Country or region before: China