CN113306492A - Method and device for generating automobile A column blind area image - Google Patents

Method and device for generating automobile A column blind area image

Info

Publication number
CN113306492A
Authority
CN
China
Prior art keywords
image
blind area
column
scene
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110794751.5A
Other languages
Chinese (zh)
Inventor
袁丹寿
李晨轩
张祺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hozon New Energy Automobile Co Ltd
Original Assignee
Hozon New Energy Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hozon New Energy Automobile Co Ltd filed Critical Hozon New Energy Automobile Co Ltd
Priority to CN202110794751.5A priority Critical patent/CN113306492A/en
Publication of CN113306492A publication Critical patent/CN113306492A/en
Pending legal-status Critical Current

Classifications

    • B60R 1/00: Optical viewing arrangements; real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 2300/105: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of camera system used: using multiple cameras
    • B60R 2300/20: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of display used
    • B60R 2300/30: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing
    • B60R 2300/802: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the intended use of the viewing arrangement: for monitoring and displaying vehicle exterior blind spot views

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention provides a method and a device for generating an automobile A-pillar blind area image. The method comprises the following steps: acquiring, through a first camera device, an image of the A-pillar blind area scene in a world coordinate system and depth information of the A-pillar blind area scene; performing three-dimensional reconstruction of the A-pillar blind area scene based on the image and the depth information to obtain a three-dimensional scene of the A-pillar blind area mirror image; acquiring, through a second camera device, the three-dimensional coordinates of the human eyes in the vehicle in the world coordinate system, and updating the three-dimensional coordinates of the eyes at a set time interval; updating, at a specific time interval, the three-dimensional coordinates of the eyes and the coordinates of the screen on the inner side of the A-pillar in the world coordinate system to obtain the range of the sight blind area; and generating an image of the sight blind area according to the three-dimensional scene and the range of the sight blind area, and projecting the image of the sight blind area onto the screen on the inner side of the A-pillar.

Description

Method and device for generating automobile A column blind area image
Technical Field
The invention relates generally to image processing technology, and in particular to a method and a device for generating an automobile A-pillar blind area image.
Background
A vehicle's A-pillars are the connecting pillars at the front left and front right of the vehicle that join the roof to the front cabin; they support the body roof and strengthen the rigidity of the vehicle body. The area of the field of view blocked by an A-pillar is called the A-pillar blind area. Mitigating the effect of the A-pillar blind area on the field of view of drivers and passengers is an important subject for achieving driving safety and improving driving comfort.
Disclosure of Invention
The technical problem addressed by the invention is to provide a method and a device for generating an automobile A-pillar blind area image, thereby mitigating the effect of the A-pillar blind area on the field of view of drivers and passengers.
To solve this technical problem, the invention provides a method for generating an automobile A-pillar blind area image, comprising the following steps:
acquiring, through a first camera device, an image of the A-pillar blind area scene in a world coordinate system and depth information of the A-pillar blind area scene;
performing three-dimensional reconstruction of the A-pillar blind area scene based on the image and the depth information to obtain a three-dimensional scene of the A-pillar blind area mirror image;
acquiring, through a second camera device, the three-dimensional coordinates of the human eyes in the vehicle in the world coordinate system, and updating the three-dimensional coordinates of the eyes at a set time interval;
updating, at a specific time interval, the three-dimensional coordinates of the eyes and the coordinates of the screen on the inner side of the A-pillar in the world coordinate system to obtain the range of the sight blind area;
and generating an image of the sight blind area according to the three-dimensional scene and the range of the sight blind area, and projecting the image of the sight blind area onto the screen on the inner side of the A-pillar.
In an embodiment of the invention, the first camera device comprises a depth camera.
In one embodiment of the invention, the image includes a black-and-white image and a color image.
In an embodiment of the invention, the color image comprises an RGB image.
In an embodiment of the invention, the depth information comprises the distance from the A-pillar blind area scene to the first camera device.
In an embodiment of the present invention, updating the three-dimensional coordinates of the human eyes and the coordinates of the screen on the inner side of the A-pillar in the world coordinate system at the specific time interval to obtain the range of the sight blind area includes obtaining the range of the sight blind area by coordinate transformation.
In an embodiment of the present invention, acquiring, through the second camera device, the three-dimensional coordinates of the human eyes in the vehicle in the world coordinate system includes obtaining the eye coordinates by performing face detection on an image captured by the second camera device and performing depth measurement on pixels in that image.
In an embodiment of the invention, the depth measurement comprises obtaining the distance from the actual scene position corresponding to a pixel in the image to the second camera device.
The invention also provides a device for generating an automobile A-pillar blind area image, comprising:
a first camera device for acquiring an image of the A-pillar blind area scene in a world coordinate system and depth information of the A-pillar blind area scene;
a second camera device for acquiring the three-dimensional coordinates of the human eyes in the vehicle in the world coordinate system and updating the three-dimensional coordinates of the eyes at a set time interval;
an image processing device for performing three-dimensional reconstruction of the A-pillar blind area scene based on the image and the depth information to obtain a three-dimensional scene of the A-pillar blind area mirror image, for updating the three-dimensional coordinates of the eyes and the coordinates of the screen on the inner side of the A-pillar in the world coordinate system at a specific time interval to obtain the range of the sight blind area, and for generating an image of the sight blind area according to the three-dimensional scene and the range of the sight blind area;
and an image display device for projecting the image of the sight blind area onto the screen on the inner side of the A-pillar.
Compared with the prior art, the invention has the following advantage: the technical solution of this application displays the A-pillar blind area scene with a truly "transparent" effect, improving driving safety and the riding comfort of drivers and passengers.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principle of the invention. In the drawings:
fig. 1 is a flowchart of a method for generating an a-pillar blind area image of an automobile according to an embodiment of the present application.
FIG. 2 is a schematic effect diagram of a method for generating an A-pillar blind area image of an automobile according to the present application.
Fig. 3 is a schematic diagram of an apparatus for generating an automobile A-pillar blind area image according to an embodiment of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described herein, and thus the present invention is not limited to the specific embodiments disclosed below.
As used herein, the singular forms "a," "an," and "the" include the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may include other steps or elements.
The relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise. Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate. In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In the description of the present application, it is to be understood that the orientation or positional relationship indicated by the directional terms such as "front, rear, upper, lower, left, right", "lateral, vertical, horizontal" and "top, bottom", etc., are generally based on the orientation or positional relationship shown in the drawings, and are used for convenience of description and simplicity of description only, and in the case of not making a reverse description, these directional terms do not indicate and imply that the device or element being referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore, should not be considered as limiting the scope of the present application; the terms "inner and outer" refer to the inner and outer relative to the profile of the respective component itself.
Spatially relative terms, such as "above," "over," "on," and "upper," may be used herein for ease of description to describe one device's or feature's spatial relationship to another device or feature as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is turned over, devices described as "above" or "on" other devices or configurations would then be oriented "below" or "under" the other devices or configurations. Thus, the exemplary term "above" can encompass both an orientation of "above" and one of "below." The device may be otherwise oriented (rotated 90 degrees or at other orientations), and the spatially relative descriptors used herein should be interpreted accordingly.
It should be noted that the terms "first", "second", and the like are used to define the components, and are only used for convenience of distinguishing the corresponding components, and the terms have no special meanings unless otherwise stated, and therefore, the scope of protection of the present application is not to be construed as being limited. Further, although the terms used in the present application are selected from publicly known and used terms, some of the terms mentioned in the specification of the present application may be selected by the applicant at his or her discretion, the detailed meanings of which are described in relevant parts of the description herein. Further, it is required that the present application is understood not only by the actual terms used but also by the meaning of each term lying within.
Flow charts are used herein to illustrate operations performed by systems according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed in the exact order in which they are performed. Rather, various steps may be processed in reverse order or simultaneously. Meanwhile, other operations are added to or removed from these processes.
The embodiment of the application describes a method and a device for generating an A-pillar blind area image of an automobile.
Fig. 1 is a flowchart of a method for generating an a-pillar blind area image of an automobile according to an embodiment of the present application.
As shown in fig. 1, the method for generating an automobile A-pillar blind area image includes: step 101, acquiring, through a first camera device, an image of the A-pillar blind area scene in a world coordinate system and depth information of the A-pillar blind area scene; step 102, performing three-dimensional reconstruction of the A-pillar blind area scene based on the image and the depth information to obtain a three-dimensional scene of the A-pillar blind area mirror image; step 103, acquiring, through a second camera device, the three-dimensional coordinates of the human eyes in the vehicle in the world coordinate system, and updating the three-dimensional coordinates of the eyes at a set time interval; step 104, updating the three-dimensional coordinates of the eyes and the coordinates of the screen on the inner side of the A-pillar in the world coordinate system at a specific time interval to obtain the range of the sight blind area; and step 105, generating an image of the sight blind area according to the three-dimensional scene and the range of the sight blind area, and projecting the image of the sight blind area onto the screen on the inner side of the A-pillar.
Specifically, in step 101, an image of the A-pillar blind area scene in a world coordinate system and depth information of the scene are acquired through the first camera device.
In some embodiments, the first camera device comprises a depth camera. The image comprises a black-and-white image and a color image; the color image may be an RGB image, or a color image in another representation such as an HSV or YUV image. The depth information includes the distance from the A-pillar blind area scene to the first camera device.
In step 102, three-dimensional reconstruction is performed on the A-pillar blind area scene based on the image and the depth information, and a three-dimensional scene of the A-pillar blind area mirror image is obtained. Three-dimensional reconstruction is achieved, for example, by performing filtering processing and texture mapping on the image and depth information.
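As a concrete illustration of this step, the back-projection at the heart of such a reconstruction can be sketched as follows. This is a minimal sketch, not the patent's implementation: it assumes a calibrated pinhole depth camera whose intrinsics (fx, fy, cx, cy) are hypothetical calibration values, and it stands in for texture mapping by simply attaching each pixel's RGB value to the corresponding 3-D point.

```python
import numpy as np

def reconstruct_colored_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project a depth map (meters) through an assumed pinhole model
    and attach per-pixel RGB values, yielding a colored 3-D point cloud.

    depth: (h, w) float array; rgb: (h, w, 3) array.
    fx, fy, cx, cy: hypothetical calibrated intrinsics of the depth camera.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # drop pixels with no depth reading
    z = depth[valid]
    x = (u[valid] - cx) * z / fx           # pinhole back-projection
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=-1)  # (n, 3) camera-frame points
    colors = rgb[valid]                    # (n, 3) texture for each point
    return points, colors
```

A real system would additionally filter the depth map to suppress sensor noise and transform the camera-frame points into the world coordinate system using the camera's extrinsic calibration.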
In step 103, the three-dimensional coordinates of the human eyes in the vehicle in the world coordinate system are acquired through a second camera device, and the three-dimensional coordinates of the eyes are updated at a set time interval. The set time interval can be chosen according to requirements and the actual situation, for example on the order of milliseconds (ms) or seconds (s), such as 10 ms, 20 ms, 1 s, or 1.5 s.
In some embodiments, acquiring the three-dimensional coordinates of the human eyes in the vehicle in the world coordinate system through the second camera device includes obtaining the eye coordinates by performing face detection on an image captured by the second camera device and performing depth measurement on pixels in that image. The depth measurement comprises obtaining the distance from the actual scene position corresponding to a pixel in the image to the second camera device.
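The pixel-plus-depth measurement described above can be turned into world coordinates with a pinhole back-projection followed by the camera's extrinsic transform. The sketch below is illustrative only: the intrinsics and the camera-to-world matrix are assumed to come from a prior calibration of the second (in-cabin) camera, and the eye pixel is assumed to be supplied by a separate face detector.

```python
import numpy as np

def eye_world_coords(eye_px, eye_depth, fx, fy, cx, cy, T_world_cam):
    """Turn a detected eye pixel (u, v) and its measured depth (meters)
    into world-frame 3-D coordinates.

    fx, fy, cx, cy: assumed intrinsics of the in-cabin (second) camera.
    T_world_cam: assumed 4x4 camera-to-world transform from calibration.
    """
    u, v = eye_px
    z = float(eye_depth)
    p_cam = np.array([(u - cx) * z / fx,   # pinhole back-projection
                      (v - cy) * z / fy,
                      z,
                      1.0])                # homogeneous coordinate
    return (T_world_cam @ p_cam)[:3]
```

Re-running this conversion at the set time interval keeps the eye position current as the occupant moves.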
In step 104, the three-dimensional coordinates of the human eyes and the coordinates of the screen on the inner side of the A-pillar in the world coordinate system are updated at a specific time interval to obtain the range of the sight blind area.
In some embodiments, updating the three-dimensional coordinates of the human eyes and the coordinates of the screen on the inner side of the A-pillar in the world coordinate system at the specific time interval to obtain the range of the sight blind area includes obtaining the range of the sight blind area by coordinate transformation. The second camera device can, for example, change its angle of view as needed.
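One way to picture the coordinate transformation is to cast rays from the eye position through the corners of the inner-side screen and intersect them with a vertical plane at a chosen scene distance; the intersection points bound the sight blind area at that distance. The sketch below assumes a world frame whose x-axis points forward from the vehicle, a convention chosen here for illustration and not stated in the patent.

```python
import numpy as np

def blind_zone_footprint(eye, screen_corners, scene_x):
    """Extend the ray from the eye through each corner of the A-pillar
    screen out to the plane x = scene_x (world frame; forward-pointing
    x-axis is an assumed convention). The returned points bound the
    sight blind area at that distance."""
    eye = np.asarray(eye, dtype=float)
    hits = []
    for corner in np.asarray(screen_corners, dtype=float):
        d = corner - eye                   # ray direction: eye -> corner
        t = (scene_x - eye[0]) / d[0]      # parameter where the ray meets the plane
        hits.append(eye + t * d)
    return np.array(hits)
```

Recomputing this footprint whenever the eye coordinates are updated is what keeps the displayed blind-area range aligned with the occupant's current line of sight.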
In step 105, an image of the sight blind area is generated according to the three-dimensional scene and the range of the sight blind area, and the image of the sight blind area is projected onto the screen on the inner side of the A-pillar.
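The projection in step 105 amounts to re-rendering the reconstructed scene from the eye's point of view and clipping it to the screen rectangle. The sketch below splats colored world-frame points through the eye onto a screen plane defined by an origin corner and two edge vectors; all geometry (screen placement, resolution) is hypothetical, and a production system would more likely rasterize a textured mesh than splat individual points.

```python
import numpy as np

def splat_to_screen(points, colors, eye, screen_origin, screen_u, screen_v, res):
    """Project world-frame scene points through the eye onto the A-pillar
    screen plane and splat their colors into an image buffer.

    screen_origin: world position of one screen corner (assumed known).
    screen_u, screen_v: world-frame edge vectors spanning the screen.
    res: (rows, cols) of the output image.
    """
    screen_u = np.asarray(screen_u, dtype=float)
    screen_v = np.asarray(screen_v, dtype=float)
    screen_origin = np.asarray(screen_origin, dtype=float)
    eye = np.asarray(eye, dtype=float)
    normal = np.cross(screen_u, screen_v)            # screen plane normal
    img = np.zeros((res[0], res[1], 3), dtype=np.uint8)
    for p, c in zip(np.asarray(points, dtype=float), colors):
        d = p - eye                                  # ray from eye to scene point
        denom = d @ normal
        if abs(denom) < 1e-9:                        # ray parallel to screen
            continue
        t = ((screen_origin - eye) @ normal) / denom
        if t <= 0:                                   # screen is behind the eye
            continue
        local = eye + t * d - screen_origin          # hit point, screen-local
        a = (local @ screen_u) / (screen_u @ screen_u)
        b = (local @ screen_v) / (screen_v @ screen_v)
        if 0 <= a < 1 and 0 <= b < 1:                # inside the screen rectangle
            img[int(b * res[0]), int(a * res[1])] = c
    return img
```

Because the eye position enters the ray construction directly, moving the eye shifts where each scene point lands on the screen, which is exactly the behavior fig. 2 illustrates.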
FIG. 2 is a schematic effect diagram of a method for generating an A-pillar blind area image of an automobile according to the present application.
As shown in fig. 2, when the eyes of the occupants in the vehicle are in different positions, the position at which a scene at the same location outside the vehicle is projected on the A-pillar display device AD should differ. AS in fig. 2 represents the A-pillar blind area scene. According to the present technical solution, the projection calculation for the A-pillar blind area scene outside the vehicle takes into account the positions of the occupants' eyes (human eyes E), which may change between specific time intervals, so that the displayed A-pillar blind area scene achieves a truly "transparent" effect, improving driving safety and occupant comfort.
The application also provides a device for generating the automobile A column blind area image, which comprises a first camera device, a second camera device, an image processing device and an image display device.
Fig. 3 is a schematic diagram of an apparatus for generating an image of a pillar blind area of an automobile according to an embodiment of the present application. As shown in fig. 3, the apparatus 300 for generating an a-pillar blind spot image of an automobile includes a first camera 302, a second camera 304, an image processing device 306, and an image display device 308. The first camera 302, the second camera 304, the image processing device 306, and the image display device 308 may communicate to transmit data and instructions.
In some embodiments, the first camera is configured to acquire an image of an a-pillar blind spot scene in a world coordinate system and depth information of the a-pillar blind spot scene. The second camera device is used for acquiring the three-dimensional coordinates of the human eyes in the vehicle in the world coordinate system and updating the three-dimensional coordinates of the human eyes at set time intervals;
the image processing device is used for carrying out three-dimensional reconstruction on the A-column blind area scene based on the image and the depth information to obtain a three-dimensional scene of the A-column blind area mirror image; then updating the three-dimensional coordinates of the human eyes and the coordinates of the inner side screen of the A column in the world coordinate system at a specific time interval to obtain the range of the sight blind area; and generating an image of the sight blind area according to the three-dimensional scene and the range of the sight blind area.
An image display device may be used to project an image of the blind sight zone onto a screen inside the a-pillar.
The device for generating an automobile A-pillar blind area image updates, at a specific time interval and according to the positions of the occupants' eyes, the A-pillar blind area image displayed on the screen on the inner side of the A-pillar, so that the displayed A-pillar blind area scene achieves a truly "transparent" effect with accurate display, improving driving safety and the riding comfort of drivers and passengers.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing disclosure is by way of example only, and is not intended to limit the present application. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application.
Also, this application uses specific language to describe embodiments of the application. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the present application is included in at least one embodiment of the present application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the present application may be combined as appropriate.
Aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." The processor may be one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, or a combination thereof. Furthermore, aspects of the present application may be embodied as a computer product, including computer-readable program code, on one or more computer-readable media. For example, computer-readable media may include, but are not limited to, magnetic storage devices (e.g., hard disks, floppy disks, magnetic strips), optical disks (e.g., compact disks (CDs), digital versatile disks (DVDs)), smart cards, and flash memory devices (e.g., cards, sticks, key drives).
The computer readable medium may comprise a propagated data signal with the computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, and the like, or any suitable combination. The computer readable medium can be any computer readable medium that can communicate, propagate, or transport the program for use by or in connection with an instruction execution system, apparatus, or device. Program code on a computer readable medium may be propagated over any suitable medium, including radio, electrical cable, fiber optic cable, radio frequency signals, or the like, or any combination of the preceding.
Similarly, it should be noted that in the preceding description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to require more features than are expressly recited in the claims. Indeed, an embodiment may be characterized by less than all of the features of a single embodiment disclosed above.
Numerals describing quantities of components, attributes, and the like are used in some embodiments; it should be understood that such numerals are, in some instances, modified by the qualifier "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as possible.
Although the present application has been described with reference to the present specific embodiments, it will be recognized by those skilled in the art that the foregoing embodiments are merely illustrative of the present application and that various changes and substitutions of equivalents may be made without departing from the spirit of the application, and therefore, it is intended that all changes and modifications to the above-described embodiments that come within the spirit of the application fall within the scope of the claims of the application.

Claims (13)

1. A method for generating an automobile A-pillar blind area image comprises the following steps:
acquiring an image of a column A blind area scene in a world coordinate system and depth information of the column A blind area scene through a first camera device;
performing three-dimensional reconstruction on the A-column blind area scene based on the image and the depth information to obtain a three-dimensional scene of the A-column blind area mirror image;
acquiring three-dimensional coordinates of human eyes in the vehicle in the world coordinate system through a second camera device, and updating the three-dimensional coordinates of the human eyes at set time intervals;
updating the three-dimensional coordinates of the human eyes and the coordinates of the inner side screen of the A column in the world coordinate system at a specific time interval to obtain the range of the sight blind area;
and generating an image of the sight blind area according to the three-dimensional scene and the range of the sight blind area, and projecting the image of the sight blind area onto a screen on the inner side of the A column.
2. The method of generating an automobile a-pillar blind spot image according to claim 1, wherein the first camera device comprises a depth camera.
3. The method of generating an automobile a-pillar blind spot image as recited in claim 1, wherein the image comprises a black and white image and a color image.
4. The method of generating an automobile A-pillar blind area image as recited in claim 3, wherein the color image comprises an RGB image.
5. The method of generating an automobile A-pillar blind area image as set forth in claim 1, characterized in that the depth information includes the distance from the A-pillar blind area scene to the first camera device.
6. The method of generating an a-pillar blind area image of an automobile according to claim 1, wherein updating the three-dimensional coordinates of the human eye and the coordinates of the inside screen of the a-pillar in the world coordinate system at specific time intervals to obtain the range of the blind area includes obtaining the range of the blind area by coordinate transformation.
7. The method of generating an a-pillar blind area image of an automobile according to claim 1, wherein acquiring three-dimensional coordinates of human eyes in the automobile in the world coordinate system by a second camera comprises acquiring human eye coordinates by performing face detection on an image captured by the second camera and performing depth measurement on pixels in the image.
8. The method of generating an automobile a-pillar blind spot image according to claim 7, wherein the depth measurement comprises obtaining a range of an actual position of a scene corresponding to a pixel in the image to the second camera.
9. An apparatus for generating an automobile A-pillar blind area image, comprising:
a first camera device for acquiring an image of the A-pillar blind area scene in a world coordinate system and depth information of the A-pillar blind area scene;
a second camera device for acquiring three-dimensional coordinates of human eyes in the vehicle in the world coordinate system and updating the three-dimensional coordinates of the human eyes at set time intervals;
an image processing device for performing three-dimensional reconstruction of the A-pillar blind area scene based on the image and the depth information to obtain a three-dimensional scene of the A-pillar blind area; updating the three-dimensional coordinates of the human eyes and the coordinates of the screen on the inner side of the A-pillar in the world coordinate system at set time intervals to obtain the range of the sight blind area; and generating an image of the sight blind area according to the three-dimensional scene and the range of the sight blind area;
and an image display device for projecting the image of the sight blind area onto the screen on the inner side of the A-pillar.
10. The apparatus for generating an automobile A-pillar blind area image according to claim 9, wherein the first camera device comprises a depth camera.
11. The apparatus for generating an automobile A-pillar blind area image according to claim 9, wherein the image comprises a black-and-white image and a color image.
12. The apparatus for generating an automobile A-pillar blind area image according to claim 11, wherein the color image comprises an RGB image.
13. The apparatus for generating an automobile A-pillar blind area image according to claim 9, wherein the depth information comprises the distance of the A-pillar blind area scene from the first camera device.
CN202110794751.5A 2021-07-14 2021-07-14 Method and device for generating automobile A column blind area image Pending CN113306492A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110794751.5A CN113306492A (en) 2021-07-14 2021-07-14 Method and device for generating automobile A column blind area image

Publications (1)

Publication Number Publication Date
CN113306492A true CN113306492A (en) 2021-08-27

Family

ID=77382284

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110794751.5A Pending CN113306492A (en) 2021-07-14 2021-07-14 Method and device for generating automobile A column blind area image

Country Status (1)

Country Link
CN (1) CN113306492A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113829997A (en) * 2021-11-16 2021-12-24 合众新能源汽车有限公司 Display method and device for images outside vehicle, curved screen and vehicle

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106627369A (en) * 2016-12-15 2017-05-10 浙江吉利控股集团有限公司 Vehicle window display system
CN109941277A (en) * 2019-04-08 2019-06-28 宝能汽车有限公司 The method, apparatus and vehicle of display automobile pillar A blind image
US20200039435A1 (en) * 2018-08-02 2020-02-06 Chunghwa Picture Tubes, Ltd. Onboard camera system for eliminating a-pillar blind areas of a mobile vehicle and image processing method thereof
CN111731187A (en) * 2020-06-19 2020-10-02 杭州视为科技有限公司 Automobile A-pillar blind area image display system and method
EP3736768A1 (en) * 2019-05-07 2020-11-11 Alpine Electronics, Inc. Image processing apparatus, image processing system, image processing method, and program
CN112140996A (en) * 2020-09-01 2020-12-29 恒大新能源汽车投资控股集团有限公司 Vehicle rear seat passenger blind area processing method, vehicle and equipment
CN112298039A (en) * 2020-09-27 2021-02-02 浙江合众新能源汽车有限公司 A-column imaging method

Similar Documents

Publication Publication Date Title
CN110203210A (en) A kind of lane departure warning method, terminal device and storage medium
CN110678872A (en) Direct vehicle detection as 3D bounding box by using neural network image processing
EP2950237A1 (en) Driver assistance apparatus capable of performing distance detection and vehicle including the same
US9826166B2 (en) Vehicular surrounding-monitoring control apparatus
CN108621940A (en) The control method of the display system of vehicle and the display system of vehicle
CN113635833A (en) Vehicle-mounted display device, method and system based on automobile A column and storage medium
KR20150068833A (en) Stereo camera, driver assistance apparatus and Vehicle including the same
US11592677B2 (en) System and method for capturing a spatial orientation of a wearable device
EP3456574A1 (en) Method and system for displaying virtual reality information in a vehicle
CN110996051A (en) Method, device and system for vehicle vision compensation
CN112298040A (en) Auxiliary driving method based on transparent A column
CN110562140A (en) Multi-camera implementation method and system of transparent A column
KR20200110033A (en) Image display system and method thereof
CN113306492A (en) Method and device for generating automobile A column blind area image
CN110232300A (en) Lane vehicle lane-changing intension recognizing method and system by a kind of
JP2019113720A (en) Vehicle surrounding display control device
CN116653775A (en) Vehicle-mounted auxiliary display method, display system, display device and readable storage medium
JP2016170688A (en) Driving support device and driving support system
US10535150B2 (en) Texture evaluation system
CN113727071A (en) Road condition display method and system
CN111241946B (en) Method and system for increasing FOV (field of view) based on single DLP (digital light processing) optical machine
CN114626472A (en) Auxiliary driving method and device based on machine learning and computer readable medium
CN113676618A (en) Intelligent display system and method of transparent A column
CN113343935A (en) Method and device for generating automobile A column blind area image
JP2020161002A (en) Video display system, driving simulator system, video display method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang
Applicant after: United New Energy Automobile Co.,Ltd.
Address before: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang
Applicant before: Hozon New Energy Automobile Co., Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20210827