CN115904294A - Environment visualization method, system, storage medium and electronic device - Google Patents


Info

Publication number
CN115904294A
Authority
CN
China
Prior art keywords
image data
data
depth
sending
depth information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310027956.XA
Other languages
Chinese (zh)
Other versions
CN115904294B (en)
Inventor
李峰
李瑞东
耿广彬
曲有成
孔祥刚
张玉河
董毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANDONG MATRIX SOFTWARE ENGINEERING CO LTD
Original Assignee
SHANDONG MATRIX SOFTWARE ENGINEERING CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANDONG MATRIX SOFTWARE ENGINEERING CO LTD filed Critical SHANDONG MATRIX SOFTWARE ENGINEERING CO LTD
Priority to CN202310027956.XA priority Critical patent/CN115904294B/en
Publication of CN115904294A publication Critical patent/CN115904294A/en
Application granted granted Critical
Publication of CN115904294B publication Critical patent/CN115904294B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 — Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The application provides an environment visualization method, a system, a storage medium and an electronic device, which relate to the field of image data processing and comprise the following steps: acquiring image data with depth information; sending the image data to a video encoder so that the video encoder can output depth video data; and sending the depth video data to corresponding display equipment according to the source of the image data, and displaying a naked eye 3D picture of the image data on the display equipment. By acquiring image data with depth information and encoding it with a video encoder to obtain depth video data, the application can display a naked eye 3D picture directly on the display device, which effectively improves the realism and distance discrimination of the interior rearview mirror system and enhances the 3D viewing experience of the visualized image.

Description

Environment visualization method, system, storage medium and electronic device
Technical Field
The present application relates to the field of image data processing, and in particular, to an environment visualization method, system, storage medium, and electronic device.
Background
Conventional 3D video technology usually requires professional image capture equipment and professional display devices that support its special video specification, and often even matching special 3D glasses, which creates a huge barrier to the popularization of 3D video.
Meanwhile, when driving a vehicle, a driver relies only on the main rearview mirror and the two side rearview mirrors, and it is difficult to intuitively perceive the distance between targets in the reflected image and the vehicle, so the driver is prone to misjudging distances, which affects driving.
Disclosure of Invention
The application aims to provide an environment visualization method, system, storage medium and electronic device, which can realize naked eye 3D display of the vehicle environment and help the driver intuitively perceive the distance between the vehicle and surrounding targets.
In order to solve the technical problem, the application provides an environment visualization method, which has the following specific technical scheme:
acquiring image data with depth information;
sending the image data to a video encoder so that the video encoder can output depth video data;
and sending the depth video data to corresponding display equipment according to the source of the image data, and displaying a naked eye 3D picture of the image data on the display equipment.
Optionally, after the image data with the depth information is acquired, the method further includes:
and storing the image data according to a preset format.
Optionally, the acquiring the image data with the depth information includes:
acquiring image data with depth information through image acquisition equipment and laser radar equipment; wherein the depth information includes distance information of each object in the image data.
Optionally, the acquiring, by the image acquisition device and the laser radar device, the image data with the depth information includes:
acquiring point cloud data through image acquisition equipment and laser radar equipment;
filling the point cloud data into a data space by establishing a space coordinate system, and setting the range and the scale of the data space to obtain the space coordinate system;
filling each point cloud data into the space coordinate system, wherein a hollow area in the space coordinate system is filled according to surrounding point cloud data;
taking a z-axis of the space coordinate system as effective data, taking an x-axis and a y-axis as pixel coordinates of the image data, and taking z-axis data as depth data;
and combining the pixel data of the depth data on the z-axis to obtain the image data with the depth information.
Optionally, sending the depth video data to a corresponding display device according to the source of the image data includes:
if the image data are from a main rearview mirror, the depth video data are sent to a display device positioned at the top end of the upper side of a front windshield;
if the image data is from a left side rearview mirror, sending the depth video data to a display device positioned at the lower left corner of the front windshield;
and if the image data is from a right side rearview mirror, sending the depth video data to a display device positioned at the lower right corner of the front windshield.
Optionally, the display device is an LED screen or an OLED screen attached to the windshield.
Optionally, after the depth video data is sent to a corresponding display device according to the source of the image data and a naked eye 3D picture of the image data is displayed on the display device, the method further includes:
and if dangerous targets with depth information smaller than a preset value exist in the depth video data, displaying marks of the dangerous targets on the display equipment.
The present application further provides an environment visualization system, comprising:
the data acquisition module is used for acquiring image data with depth information;
the encoding module is used for sending the image data to a video encoder so that the video encoder can output depth video data;
and the data display module is used for sending the depth video data to corresponding display equipment according to the source of the image data and displaying a naked eye 3D picture of the image data on the display equipment.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method as set forth above.
The present application further provides an electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method described above when calling the computer program in the memory.
The application provides an environment visualization method, which comprises the following steps: acquiring image data with depth information; sending the image data to a video encoder so that the video encoder can output depth video data; and sending the depth video data to corresponding display equipment according to the source of the image data, and displaying a naked eye 3D picture of the image data on the display equipment.
By acquiring image data with depth information and encoding it with a video encoder to obtain depth video data, the application can display a naked eye 3D picture directly on the display device, which effectively improves the realism and distance discrimination of the interior rearview mirror system and enhances the 3D viewing experience of the visualized image.
The application also provides an environment visualization system, a storage medium and an electronic device, which have the beneficial effects and are not repeated here.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart of an environment visualization method provided in an embodiment of the present application;
fig. 2 is a schematic position diagram of a display device according to an embodiment of the present application;
fig. 3 is a schematic position diagram of another display device provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an environment visualization system according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart of an environment visualization method according to an embodiment of the present disclosure, the method including:
s101: acquiring image data with depth information;
This step is intended to acquire image data with depth information, where depth information refers to distance information; in effect, it acquires image data carrying the distance of each object in the image. The depth information includes the distance information of each object in the image data.
The method for acquiring the image data with the depth information is not limited, and the image data with the depth information may be acquired through an image acquisition device and a laser radar device. The image acquisition equipment is used for acquiring an environment image, the laser radar equipment is used for determining the distance information of each target in the environment, and the image acquisition equipment and the laser radar equipment are combined to obtain image data with depth information.
Specifically, the step may include the following steps:
firstly, acquiring point cloud data through image acquisition equipment and laser radar equipment;
filling the point cloud data into a data space by establishing a space coordinate system, and setting the range and the scale of the data space to obtain the space coordinate system;
filling each point cloud data into the space coordinate system, wherein a hollow area in the space coordinate system is filled according to the peripheral point cloud data;
fourthly, taking a z axis of the space coordinate system as effective data, taking an x axis and a y axis as pixel coordinates of the image data, and taking z axis data as depth data;
and fifthly, combining the pixel data of the depth data on the z axis to obtain the image data with the depth information.
The point cloud data is a set of vectors in a three-dimensional coordinate system and can be regarded as three-dimensional vectors. Specifically, the image acquisition device can determine the plane coordinates of each target in the environment, and the laser radar device determines the distance to each target and then assigns it corresponding z-axis data, so as to obtain the point cloud data of each target in the environment. The manner of establishing the spatial coordinate system in the second step is not limited; for example, it may be established with the observer itself as the origin. Taking the application of the method in the field of vehicle driving as an example, a spatial coordinate system can be established with the vehicle itself as the origin, so that the relative distance between each target in the environment and the vehicle can be determined directly. The x-axis, y-axis and z-axis data in the spatial coordinate system are then processed to obtain the image data with depth information.
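As an illustrative sketch (not the patent's implementation), the five steps above can be approximated as follows. The grid size, coordinate ranges, and the global-mean fill strategy for hollow areas are assumptions made for this example:

```python
def point_cloud_to_depth_image(points, width=640, height=480,
                               x_range=(-10.0, 10.0), y_range=(-10.0, 10.0)):
    """Fill a point cloud (iterable of (x, y, z) tuples) into a pixel grid:
    the x- and y-axes become pixel coordinates and the z-axis becomes depth."""
    # Step 2: establish the data space (a bounded grid with a fixed scale).
    depth = [[None] * width for _ in range(height)]  # None marks hollow areas
    # Step 3: fill each point into the spatial coordinate system.
    for x, y, z in points:
        u = int((x - x_range[0]) / (x_range[1] - x_range[0]) * (width - 1))
        v = int((y - y_range[0]) / (y_range[1] - y_range[0]) * (height - 1))
        if 0 <= u < width and 0 <= v < height:
            depth[v][u] = z  # step 4: z-axis data as depth
    # Step 3 (cont.): fill hollow areas from surrounding data; a real system
    # would interpolate from neighbors, the global mean is a crude stand-in.
    known = [z for row in depth for z in row if z is not None]
    if known:
        fill = sum(known) / len(known)
        for row in depth:
            for i, z in enumerate(row):
                if z is None:
                    row[i] = fill
    # Step 5: the grid combines pixel coordinates with per-pixel depth.
    return depth
```

A real pipeline would fuse the camera's plane coordinates with the lidar's range measurements before this step; here the points are assumed to already carry both.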
Further, after the present step is performed, the image data may be stored according to a preset format.
S102: sending the image data to a video encoder so that the video encoder can output depth video data;
This step encodes the image data to output depth video data. The encoding format is not limited here; it may be, for example, the RGB-D format or the 3D-HEVC format, and can be determined according to the video data format supported by the display device used. Accordingly, the format of the video encoder adopted in this step is not limited, and a video encoder adapted to the display format of the display device may be used.
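The "encoder adapted to the display format" idea can be sketched as a lookup keyed by the format the display device supports. The format names and the stub encoder functions below are illustrative assumptions, not APIs from the patent:

```python
def encode_rgbd(frame):
    # Stub: a real implementation would emit an RGB-D bitstream here.
    return ("RGB-D", frame)

def encode_3d_hevc(frame):
    # Stub: a real implementation would emit a 3D-HEVC bitstream here.
    return ("3D-HEVC", frame)

# Registry of available depth-video encoders, keyed by display format.
ENCODERS = {"RGB-D": encode_rgbd, "3D-HEVC": encode_3d_hevc}

def encode_for_display(frame, display_format):
    """Select and run the video encoder matching the display device's format."""
    if display_format not in ENCODERS:
        raise ValueError(f"unsupported display format: {display_format}")
    return ENCODERS[display_format](frame)
```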
S103: and sending the depth video data to corresponding display equipment according to the source of the image data, and displaying a naked eye 3D picture of the image data on the display equipment.
And after the depth video data are obtained, the depth video data are sent to corresponding display equipment according to the source of the image data.
Taking a vehicle as an example, the step can be executed according to the following procedures:
if the image data are from a main rearview mirror, the depth video data are sent to a display device positioned at the top end of the upper side of a front windshield;
if the image data is from a left side rearview mirror, sending the depth video data to a display device positioned at the lower left corner of the front windshield;
and if the image data is from a right side rearview mirror, sending the depth video data to a display device positioned at the lower right corner of the front windshield.
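The three routing rules above amount to a small mapping from the image data's source to a display position; the string identifiers used here are illustrative placeholders:

```python
# Map the source of the image data to the display device that should show
# its naked eye 3D picture, following the positions described above.
DISPLAY_ROUTING = {
    "main_mirror":  "display_top_of_front_windshield",
    "left_mirror":  "display_lower_left_of_front_windshield",
    "right_mirror": "display_lower_right_of_front_windshield",
}

def route_depth_video(source):
    """Return the display device for depth video from the given source."""
    return DISPLAY_ROUTING[source]
```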
Referring to fig. 2, fig. 2 is a schematic position diagram of a display device provided in an embodiment of the present application, which can be applied to a vehicle. In fig. 2, three display devices are disposed at the positions shown, with the devices displaying the main rearview mirror, left side rearview mirror and right side rearview mirror views respectively placed on the front windshield, so that the driver can directly see the environmental scene corresponding to each rearview mirror and any potential dangerous targets. Because the embodiment of the application adopts naked eye 3D pictures, the generated picture is more vivid, which helps the driver accurately perceive the distance of abnormal objects approaching from behind.
Referring to fig. 3, fig. 3 is a schematic position diagram of another display device provided in the embodiment of the present application, and compared to fig. 2, in fig. 3, only a naked-eye 3D image corresponding to the main rearview mirror is displayed in the front windshield, and images corresponding to the other two side rearview mirrors are displayed by display devices respectively disposed on the door windshields of the main driver seat and the passenger seat.
The display device used is not limited, and may be an LED screen or an OLED screen attached to a windshield.
According to the embodiment of the application, image data with depth information is acquired and encoded by a video encoder to obtain depth video data, so a naked eye 3D picture can be displayed directly on the display device, which effectively improves the realism and distance discrimination of the interior rearview mirror system and enhances the 3D viewing experience of the visualized image.
Further, on the basis of the above embodiment, after the naked-eye 3D picture of the image data is displayed on the display device, whether a target is a dangerous target may be determined according to depth information. And if dangerous targets with depth information smaller than a preset value exist in the depth video data, marking the dangerous targets on the display equipment. The preset value is not limited and can be set by a person skilled in the art, for example, the preset value can be 10cm, 20cm or even 1 m.
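The danger check described above is a threshold filter over per-target depth values. The target representation and the 0.2 m preset value below are assumptions for illustration (the text notes the value may be 10 cm, 20 cm or even 1 m):

```python
def find_dangerous_targets(targets, threshold_m=0.2):
    """Return targets whose depth (distance) is below the preset value,
    i.e. the dangerous targets that should be marked on the display device."""
    return [t for t in targets if t["depth_m"] < threshold_m]

targets = [
    {"id": "car_behind", "depth_m": 3.5},
    {"id": "pedestrian", "depth_m": 0.15},
]
# Only 'pedestrian' is closer than the 0.2 m preset value, so it gets marked.
dangerous = find_dangerous_targets(targets)
```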
In the following, an environment visualization system provided by an embodiment of the present application is introduced, and the environment visualization system described below and the environment visualization method described above may be referred to in a corresponding manner.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an environment visualization system provided in an embodiment of the present application, and the present application further provides an environment visualization system, including:
the data acquisition module is used for acquiring image data with depth information;
the encoding module is used for sending the image data to a video encoder so that the video encoder can output depth video data;
and the data display module is used for sending the depth video data to corresponding display equipment according to the source of the image data and displaying a naked eye 3D picture of the image data on the display equipment.
Based on the above embodiment, as a preferred embodiment, the method further includes:
and the storage module is used for storing the image data according to a preset format.
Based on the above embodiment, as a preferred embodiment, the data acquisition module includes:
the data acquisition unit is used for acquiring image data with depth information through image acquisition equipment and laser radar equipment; wherein the depth information includes distance information of each object in the image data.
Based on the above embodiment, as a preferred embodiment, the data acquisition unit is a unit for performing the following steps:
acquiring point cloud data through image acquisition equipment and laser radar equipment;
filling the point cloud data into a data space by establishing a space coordinate system, and setting the range and the scale of the data space to obtain the space coordinate system;
filling each point cloud data into the space coordinate system, wherein a hollow area in the space coordinate system is filled according to surrounding point cloud data;
taking a z axis of the space coordinate system as effective data, taking an x axis and a y axis as pixel coordinates of the image data, and taking z axis data as depth data;
and combining the pixel data of the depth data on the z-axis to obtain the image data with the depth information.
Based on the above embodiment, as a preferred embodiment, the data display module includes:
the first display unit is used for sending the depth video data to display equipment positioned at the top end of the upper side of a front windshield if the image data originates from a main rearview mirror;
the second display unit is used for sending the depth video data to display equipment positioned at the lower left corner of the front windshield if the image data is from a left side rearview mirror;
and the third display unit is used for sending the depth video data to the display equipment positioned at the lower right corner of the front windshield if the image data is from the right rearview mirror.
Based on the above embodiment, as a preferred embodiment, the method further includes:
and the danger detection module is used for displaying a mark of the dangerous target on the display equipment if the dangerous target with the depth information smaller than a preset value exists in the depth video data.
The present application also provides a computer readable storage medium having stored thereon a computer program which, when executed, may implement the steps provided by the above-described embodiments. The storage medium may include: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The present application further provides an electronic device, which may include a memory and a processor, where the memory stores a computer program, and when the processor calls the computer program in the memory, the steps provided in the foregoing embodiments may be implemented. Of course, the electronic device may also include various network interfaces, power supplies, and the like.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system provided by the embodiment, the description is relatively simple because the system corresponds to the method provided by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present application are explained herein using specific examples, which are provided only to help understand the method and the core idea of the present application. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" or "comprising" does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. An environment visualization method, comprising:
acquiring image data with depth information;
sending the image data to a video encoder so that the video encoder can output depth video data;
and sending the depth video data to corresponding display equipment according to the source of the image data, and displaying a naked eye 3D picture of the image data on the display equipment.
2. The environment visualization method according to claim 1, further comprising, after acquiring the image data with the depth information:
and storing the image data according to a preset format.
3. The environment visualization method according to claim 1, wherein acquiring image data with depth information comprises:
acquiring image data with depth information through image acquisition equipment and laser radar equipment; wherein the depth information includes distance information of each object in the image data.
4. The environment visualization method according to claim 3, wherein the acquiring of the image data with the depth information by the image acquisition device and the lidar device comprises:
acquiring point cloud data through image acquisition equipment and laser radar equipment;
filling the point cloud data into a data space by establishing a space coordinate system, and setting the range and the scale of the data space to obtain the space coordinate system;
filling each point cloud data into the space coordinate system, wherein a hollow area in the space coordinate system is filled according to the peripheral point cloud data;
taking a z-axis of the space coordinate system as effective data, taking an x-axis and a y-axis as pixel coordinates of the image data, and taking z-axis data as depth data;
and combining the pixel data of the depth data on the z-axis to obtain the image data with the depth information.
5. The environment visualization method according to claim 1, wherein sending the depth video data to the respective display device according to the source of the image data comprises:
if the image data are from a main rearview mirror, the depth video data are sent to a display device positioned at the top end of the upper side of a front windshield;
if the image data is from a left side rearview mirror, sending the depth video data to a display device positioned at the lower left corner of the front windshield;
and if the image data is from a right side rearview mirror, sending the depth video data to a display device positioned at the lower right corner of the front windshield.
6. The environment visualization method according to claim 1 or 5, wherein the display device is a windshield-attached LED screen or an OLED screen.
7. The environment visualization method according to claim 1, wherein after the depth video data is sent to a corresponding display device according to a source of the image data and a naked eye 3D picture of the image data is displayed on the display device, the method further comprises:
and if dangerous targets with depth information smaller than a preset value exist in the depth video data, displaying marks of the dangerous targets on the display equipment.
8. An environment visualization system, comprising:
the data acquisition module is used for acquiring image data with depth information;
the encoding module is used for sending the image data to a video encoder so that the video encoder can output depth video data;
and the data display module is used for sending the depth video data to corresponding display equipment according to the source of the image data and displaying a naked eye 3D picture of the image data on the display equipment.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the environment visualization method according to any one of claims 1 to 7.
10. An electronic device, characterized in that it comprises a memory in which a computer program is stored and a processor which, when it is called in said memory, implements the steps of the environment visualization method according to any of claims 1 to 7.
CN202310027956.XA 2023-01-09 2023-01-09 Environment visualization method, system, storage medium and electronic equipment Active CN115904294B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310027956.XA CN115904294B (en) 2023-01-09 2023-01-09 Environment visualization method, system, storage medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN115904294A (en) 2023-04-04
CN115904294B CN115904294B (en) 2023-06-09

Family

ID=86481945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310027956.XA Active CN115904294B (en) 2023-01-09 2023-01-09 Environment visualization method, system, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115904294B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2365694A2 (en) * 2008-11-18 2011-09-14 LG Electronics Inc. Method and apparatus for processing image signal
CN105611278A (en) * 2016-02-01 2016-05-25 欧洲电子有限公司 Image processing method and system for preventing naked eye 3D viewing dizziness and display device
CN105711597A (en) * 2016-02-25 2016-06-29 江苏大学 System and method for sensing local driving environment in front
CN107194962A (en) * 2017-04-01 2017-09-22 深圳市速腾聚创科技有限公司 Point cloud and plane picture fusion method and device
CN110298281A (en) * 2019-06-20 2019-10-01 汉王科技股份有限公司 Video structural method, apparatus, electronic equipment and storage medium
CN111971967A (en) * 2018-04-11 2020-11-20 交互数字Vc控股公司 Method and apparatus for encoding/decoding a point cloud representing a 3D object
CN112114667A (en) * 2020-08-26 2020-12-22 济南浪潮高新科技投资发展有限公司 AR display method and system based on binocular camera and VR equipment
CN112581629A (en) * 2020-12-09 2021-03-30 中国科学院深圳先进技术研究院 Augmented reality display method and device, electronic equipment and storage medium
WO2022022694A1 (en) * 2020-07-31 2022-02-03 北京智行者科技有限公司 Method and system for sensing automated driving environment
CN114564310A (en) * 2022-03-02 2022-05-31 始途科技(杭州)有限公司 Data processing method and device, electronic equipment and readable storage medium
WO2022134441A1 (en) * 2020-12-25 2022-06-30 合众新能源汽车有限公司 Image processing method, device and system, and computer readable medium
WO2022156175A1 (en) * 2021-01-20 2022-07-28 上海西井信息科技有限公司 Detection method, system, and device based on fusion of image and point cloud information, and storage medium
WO2022222121A1 (en) * 2021-04-23 2022-10-27 华为技术有限公司 Panoramic image generation method, vehicle-mounted image processing apparatus, and vehicle
CN115520100A (en) * 2022-09-21 2022-12-27 北京宾理信息科技有限公司 Automobile electronic rearview mirror system and vehicle


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wei Hanbing; Bai Lin: "Intelligent Vehicle Object Detection Algorithm Based on Multi-Source Heterogeneous Information Fusion", Journal of Chongqing Jiaotong University (Natural Science Edition) *

Also Published As

Publication number Publication date
CN115904294B (en) 2023-06-09

Similar Documents

Publication Publication Date Title
EP1742489B1 (en) Image display device and graphic processor for stereoscopic display of 3D graphic objects
CN109660783B (en) Virtual reality parallax correction
JP5999032B2 (en) In-vehicle display device and program
US8571304B2 (en) Method and apparatus for generating stereoscopic image from two-dimensional image by using mesh map
US20140285523A1 (en) Method for Integrating Virtual Object into Vehicle Displays
US20160307374A1 (en) Method and system for providing information associated with a view of a real environment superimposed with a virtual object
WO2015123775A1 (en) Systems and methods for incorporating a real image stream in a virtual image stream
US11050997B2 (en) Dynamic display system capable of generating images corresponding to positions of users
US20210110791A1 (en) Method, device and computer-readable storage medium with instructions for controllling a display of an augmented-reality head-up display device for a transportation vehicle
KR20170135952A (en) A method for displaying a peripheral area of a vehicle
JP2011099711A (en) Display device
JP5349224B2 (en) Image processing apparatus and image processing method
KR102223852B1 (en) Image display system and method thereof
JP2010287029A (en) Periphery display device
CN112242009A (en) Display effect fusion method, system, storage medium and main control unit
WO2016102304A1 (en) Method for presenting an image overlay element in an image with 3d information, driver assistance system and motor vehicle
WO2018222122A1 (en) Methods for perspective correction, computer program products and systems
CN109764888A (en) Display system and display methods
CN112484743B (en) Vehicle-mounted HUD fusion live-action navigation display method and system thereof
CN115904294B (en) Environment visualization method, system, storage medium and electronic equipment
WO2023112971A1 (en) Three-dimensional model generation device, three-dimensional model generation method, and three-dimensional model generation program
KR20210008503A (en) Rear view method and apparatus using augmented reality camera
US20190137770A1 (en) Display system and method thereof
CN115984122A (en) HUD backlight display system and method
US20180108173A1 (en) Method for improving occluded edge quality in augmented reality based on depth camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant