CN112738469A - Image processing method, apparatus, system, and computer-readable medium - Google Patents

Image processing method, apparatus, system, and computer-readable medium

Info

Publication number
CN112738469A
CN112738469A (application CN202011567236.5A)
Authority
CN
China
Prior art keywords
image
calculation
image stream
camera
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011567236.5A
Other languages
Chinese (zh)
Inventor
袁丹寿
李晨轩
盛大宁
张祺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Hozon New Energy Automobile Co Ltd
Original Assignee
Zhejiang Hozon New Energy Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Hozon New Energy Automobile Co Ltd
Priority to CN202011567236.5A
Publication of CN112738469A
Priority to PCT/CN2021/094775 (WO2022134442A1)


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/04Synchronising

Abstract

An image processing method, apparatus, system, and computer-readable medium are provided. The method comprises the following steps: acquiring a first image stream captured by a first camera and a second image stream captured by a second camera; and performing pipeline processing on the first image stream and the second image stream using a plurality of computing units, wherein the plurality of computing units includes: a first calculation unit that performs a first calculation based on each first image of the first image stream to obtain first information; a second calculation unit that performs a second calculation based on each second image of the second image stream to obtain second information; and a third calculation unit that performs a third calculation using the first image, the first information, and the second information to obtain a third image; wherein any computing unit of the plurality of computing units comprises a number of parallel computing subunits such that the time interval between two adjacent calculations is less than or equal to a first time length. The method provides smooth image output and avoids severe latency and choppiness problems.

Description

Image processing method, apparatus, system, and computer-readable medium
Technical Field
The present application relates generally to the field of image processing, and more particularly, to an image processing method, apparatus, system, and computer readable medium.
Background
In scenarios where a captured image must be processed before being displayed to a user, the picture displayed by prior-art systems generally suffers from large delay and insufficient smoothness. In particular, when the algorithm's computation time is long, there is not enough time to perform image transformation and image enhancement, so the user feels dizzy when viewing the display.
In the automotive field, the blind spot created by a car's A-pillar has become one of the largest safety hazards leading to accidents. However, the A-pillar is an indispensable part of the vehicle body structure and plays an important role in protecting the vehicle's occupants. The existing solution to this occlusion problem is the "transparent A-pillar": a camera outside the car captures the exterior scene, and the portion of the image corresponding to the area blocked by the A-pillar is cropped and displayed directly on a screen mounted on the A-pillar.
However, the existing automobile A-pillar solutions have the following disadvantages: 1. the A-pillar picture has a large delay, so the screen display is inconsistent with the actual exterior scene; when the delay is severe, the driver may misjudge the scene outside the vehicle, easily causing safety accidents; 2. the displayed picture is not smooth, giving a poor user experience; 3. when the algorithm takes a long time to compute, there is not enough time for image transformation and image enhancement, so the driver feels dizzy when viewing the A-pillar display. Therefore, how to output clear images smoothly and with low latency is a problem to be solved by those skilled in the art.
Disclosure of Invention
The technical problem to be solved by the present application is to provide an image processing method, apparatus, system, and computer-readable medium that can output clear images smoothly and with low latency.
In order to solve the above technical problem, the present application provides an image processing method, including: acquiring a first image stream captured by a first camera and a second image stream captured by a second camera, wherein the first image stream comprises a plurality of first images, and the second image stream comprises a plurality of second images; and performing pipeline processing on the first image stream and the second image stream using a plurality of computing units, wherein the plurality of computing units includes: a first calculation unit that performs a first calculation based on each first image of the first image stream to obtain first information; a second calculation unit that performs a second calculation based on each second image of the second image stream to obtain second information; and a third calculation unit that performs a third calculation using the first image, the first information, and the second information to obtain a third image; wherein any computing unit of the plurality of computing units comprises a number of parallel computing subunits such that the time interval between two adjacent calculations is less than or equal to a first time length.
Optionally, the first camera is an external camera for acquiring external scene information, the second camera is an internal camera for acquiring sight line information of the driver, the first image stream includes an external image stream of the vehicle, the second image stream includes an internal image stream of the vehicle, the first calculation includes spatial information calculation, the second calculation includes sight line calculation, and the third calculation includes occlusion region calculation.
Optionally, the vehicle exterior camera and the vehicle interior camera are both high-speed cameras; and the acquisition frame rates of the camera outside the vehicle and the camera inside the vehicle are the same.
Optionally, the second calculation further comprises line of sight coordinate transfer.
Optionally, the third calculation further comprises an image enhancement calculation.
Optionally, the method further comprises: and displaying the third image to the driver.
Optionally, the pipeline processing is performed in time units of time slots.
Optionally, the pipeline processing further comprises the following step: when any one of the plurality of calculation units fails to complete a calculation, the previous calculation result of that calculation unit is used as the current calculation result.
Optionally, when the duration of a single calculation performed by any one of the plurality of calculation units exceeds a second time length, all steps of the method are re-executed.
Optionally, the method further comprises: accelerating any of the plurality of computing units using an accelerator.
Optionally, wherein the first image used by the third computing unit is the latest first image in the first image stream.
Optionally, the pipeline processing employs a round-robin token synchronization mechanism.
In order to solve the above technical problem, the present application further provides an image processing apparatus, including: a memory for storing instructions executable by the processor; and a processor for executing the instructions to implement the method as described above.
In order to solve the above technical problem, the present application further provides an image processing system, including: an acquisition module, configured to acquire a first image stream captured by a first camera and a second image stream captured by a second camera, where the first image stream includes a plurality of first images and the second image stream includes a plurality of second images; and a pipeline module to perform pipeline processing on the first image stream and the second image stream using a plurality of computing units, wherein the plurality of computing units includes: a first calculation unit that performs a first calculation based on each first image of the first image stream to obtain first information; a second calculation unit that performs a second calculation based on each second image of the second image stream to obtain second information; and a third calculation unit that performs a third calculation using the first image, the first information, and the second information to obtain a third image; wherein any computing unit of the plurality of computing units comprises a number of parallel computing subunits such that the time interval between two adjacent calculations is less than or equal to the first time length.
To solve the above technical problem, the present application also provides a computer readable medium storing computer program code, which when executed by a processor implements the method as described above.
Compared with the prior art, the image processing method, apparatus, system, and computer-readable medium provide smooth and clear image output by using a plurality of computing units to perform pipeline processing, and are particularly suitable for displaying an image of the scene outside the car on the car's A-pillar for the driver to view; further, severe latency and choppiness problems are avoided by using parallel computing subunits to perform the calculations.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the principle of the application. In the drawings:
FIG. 1 is a schematic flow chart diagram of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic view of pipeline processing for an automobile A-pillar display according to an embodiment of the present application;
FIG. 3 is an architecture diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 4 is a block diagram illustrating an image processing system according to an embodiment of the present application.
Detailed Description
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only examples or embodiments of the application, based on which a person skilled in the art can apply the application to other similar scenarios without inventive effort. Unless otherwise apparent from the context or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; these steps and elements do not form an exclusive list, and a method or apparatus may include other steps or elements.
The relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise. Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate. In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Flow charts are used herein to illustrate operations performed by systems according to embodiments of the present application. It should be understood that these operations are not necessarily performed exactly in the order shown. Rather, various steps may be processed in reverse order or simultaneously, and other operations may be added to or removed from these processes.
The application provides an image processing method. Fig. 1 is a flowchart illustrating an image processing method according to the present embodiment. As shown in fig. 1, the image processing method of the present embodiment includes the steps of:
step 101, acquiring a first image stream shot by a first camera and a second image stream shot by a second camera; and
at step 102, pipeline processing is performed on the first image stream and the second image stream using a plurality of computing units.
In step 101, an image processing system acquires a first image stream and a second image stream. The first image stream includes a plurality of first images captured by the first camera, wherein the plurality of first images may be ordered in capture order. The second image stream includes a plurality of second images captured by the second camera, wherein the plurality of second images may be ordered in capture order.
Optionally, the first camera may be an exterior camera for acquiring exterior scene information, and the first image stream may include the car's exterior image stream; the second camera may be an interior camera for acquiring the driver's sight-line information, and the second image stream may include the car's interior image stream. Optionally, the exterior camera and the interior camera may both be high-speed cameras, and their acquisition frame rates may be the same. In one example, when the update frame rate of the display used to show the final image is D fps, the acquisition frame rates of the exterior and interior cameras may be integer multiples of D. For example, when the update frame rate of the display is 30 fps, the acquisition frame rate of the exterior and interior cameras may be 6 times the display's update frame rate, that is, 180 fps. Selecting a high-rate camera helps improve the dynamic quality of the exterior capture.
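As a small sketch of the frame-rate relationship described above (the function name and numbers are illustrative, not from the patent), the camera acquisition rate is simply an integer multiple of the display's update rate:

```python
def camera_frame_rate(display_fps: int, multiple: int) -> int:
    """Acquisition frame rate chosen as an integer multiple of the
    display's update frame rate, as in the example above."""
    return display_fps * multiple

# Display updates at 30 fps; cameras capture at 6x that rate -> 180 fps.
rate = camera_frame_rate(30, 6)
```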
In step 102, the system performs pipeline processing on the first image stream and the second image stream using a plurality of computing units. The plurality of computing units includes a first calculation unit, a second calculation unit, and a third calculation unit. The first calculation unit performs a first calculation based on each first image of the first image stream to obtain first information; the second calculation unit performs a second calculation based on each second image of the second image stream to obtain second information; the third calculation unit performs a third calculation using the first image, the first information, and the second information to obtain a third image. Any computing unit of the plurality may include multiple parallel computing subunits, the number of which is chosen so that the time interval between two adjacent calculations is less than or equal to the first time length. In one example, the first time length may be 30 ms. Performing the calculations with parallel computing subunits keeps the interval between two adjacent calculations within the first time length, which avoids severe latency and choppiness problems. Optionally, the pipeline processing may use time slots as its time unit.
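A minimal sketch of the parallel-subunit idea (a hypothetical Python illustration, not the patent's implementation): frames handled by one computing unit are fanned out to several workers, so finished results keep arriving at short intervals even though each individual computation is slow:

```python
from concurrent.futures import ThreadPoolExecutor

def run_unit(process, frames, n_subunits):
    """Fan the frames of one computing unit out to n_subunits parallel
    workers; results are collected back in frame order."""
    with ThreadPoolExecutor(max_workers=n_subunits) as pool:
        futures = [pool.submit(process, frame) for frame in frames]
        return [future.result() for future in futures]

# Hypothetical first calculation: derive "information" from each image.
first_info = run_unit(lambda img: img * 2, [1, 2, 3, 4], n_subunits=2)
```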
Optionally, the image processing method of this embodiment may further include: an accelerator is used to accelerate any of the plurality of computing units. In one example, the accelerator may be a GPU acceleration unit in an onboard controller or an external GPU processor. By using the accelerator, the time for the calculation unit to perform the calculation can be shortened, thereby shortening the time required for the entire pipeline processing.
Optionally, after step 102, the image processing method of this embodiment may further include step 103: the third image is presented to the driver. The third image obtained by the first image, the first information and the second information is displayed to the driver, so that the driver can obtain the required information through the third image. In one example, when the first image is an image outside the automobile and the second information is sight line information of the driver, displaying the third image on the automobile A column enables the driver to see a picture of an area shielded by the A column, so that driving safety is improved.
Optionally, the pipeline processing in step 102 may further include the following step: when any one of the plurality of calculation units fails to complete a calculation, the previous calculation result of that unit is used as the current result. Continuing to output the last successful result to the next calculation unit ensures the continuity, stability, and fluency of the entire pipeline.
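This fallback rule can be sketched as follows (names and structure are ours; the patent gives no code):

```python
def compute_with_fallback(unit, frame, state):
    """Run one calculation; on failure, reuse the unit's last successful
    result so the pipeline keeps producing output."""
    try:
        state["last"] = unit(frame)
    except Exception:
        pass  # keep state["last"] from the previous calculation
    return state.get("last")

state = {}
ok = compute_with_fallback(lambda x: x + 1, 1, state)  # succeeds, yields 2

def failing_unit(frame):
    raise RuntimeError("calculation failed")

fallback = compute_with_fallback(failing_unit, 5, state)  # reuses last result
```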
Optionally, when the duration of a single calculation performed by any one of the plurality of calculation units exceeds a second time length, all steps of the image processing method may be re-executed. The second time length may be a reset waiting time preset by the user; in one example, it may be 100 milliseconds. When a single calculation by any unit exceeds the second time length, the system can notify all calculation units to reset and restart the pipeline. Setting the second time length prevents the whole pipeline from stalling indefinitely because one calculation unit cannot compute normally.
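The reset rule can be sketched with a timeout around a single calculation (the 100 ms value follows the example above; everything else is our illustration):

```python
import concurrent.futures as cf
import time

def run_with_reset(unit, frame, second_time_s=0.1):
    """Run one calculation; if it exceeds the second time length
    (100 ms here), report that the whole pipeline must be reset."""
    with cf.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(unit, frame)
        try:
            return future.result(timeout=second_time_s), False
        except cf.TimeoutError:
            return None, True  # caller notifies all units to reset and restart

fast_result, fast_reset = run_with_reset(lambda x: x * 2, 21)
_, slow_reset = run_with_reset(lambda x: (time.sleep(0.3), x)[1], 1)
```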
Optionally, the first image used by the third calculation unit may be the latest first image in the first image stream. Because the first calculation takes a certain amount of time, by the time the third calculation unit starts, the first camera may already have captured first images newer than the one the first calculation unit used. Performing the third calculation with the latest first image further reduces the delay between the third image and the scene captured by the first camera and shortens the smear time of the third image.
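One common way to realize "use the latest first image" is a one-slot buffer that silently drops older frames (a sketch of the idea, not the patent's mechanism):

```python
from collections import deque

# Bounded buffer holding only the newest frame: appending evicts the old
# one, so the third calculation always reads the most recent first image.
latest_first_image = deque(maxlen=1)
for frame in ["frame_0", "frame_1", "frame_2"]:  # frames from the first camera
    latest_first_image.append(frame)

newest = latest_first_image[0]
```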
Optionally, the pipeline processing may employ a round-robin token synchronization mechanism. After the system starts, a token with count 0 is issued to the first computing unit; after the first computing unit finishes its calculation, the token is passed to the next computing unit, and so on down the pipeline. When the first computing unit performs its second calculation, the token count is incremented by 1 to become 1, and after that calculation completes the token is again passed on, and so forth. When the token count reaches a preset maximum value, the system zeroes the count so the token is passed starting from 0 again. Optionally, when a computing unit produces no output for a long time, the token count is also zeroed so the token is passed from 0. The round-robin token mechanism synchronizes the timing of otherwise asynchronous computing units and can be used to detect a lost or stalled computing unit.
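The token counting described above can be sketched as a wrapping counter (the maximum value here is hypothetical; the patent does not specify one):

```python
class TokenRing:
    """Round-robin token count: incremented on each pass through the
    first computing unit and zeroed past a preset maximum."""

    def __init__(self, max_count):
        self.max_count = max_count
        self.count = 0

    def advance(self):
        """Advance the count, wrapping to 0 once max_count is reached."""
        self.count = 0 if self.count >= self.max_count else self.count + 1
        return self.count

    def reset(self):
        """Zero the count when a unit stops producing output."""
        self.count = 0

ring = TokenRing(max_count=2)
counts = [ring.advance() for _ in range(4)]  # wraps to 0 after reaching 2
```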
In summary, the image processing method of the embodiment provides smooth and clear image output by using a plurality of computing units to perform pipeline processing, and is particularly suitable for displaying an image of a scene outside a car on a column a of the car for a driver to view; further, serious latency and fluency problems are avoided by using parallel compute subunits to perform the computations.
Fig. 2 is a schematic view of the pipeline processing for an automobile A-pillar display according to an embodiment of the present application; the image processing method of the present application is described below taking Fig. 2 as an example. The screen on the A-pillar can dynamically display the image of the area that the A-pillar blocks from the driver's line of sight. As shown in Fig. 2, the first calculation performed by the first calculation unit includes spatial information calculation; the second calculation performed by the second calculation unit includes sight-line calculation and sight-line coordinate transfer; and the third calculation performed by the third calculation unit includes occlusion region calculation and image enhancement calculation. In this embodiment, the first camera is an exterior camera for acquiring exterior scene information, and the first image stream comprises the car's exterior image stream; the second camera is an interior camera for acquiring the driver's sight-line information, and the second image stream comprises the car's interior image stream. Spatial information calculation means computing the spatial information (i.e., the first information) of the exterior scene from the exterior scene image (i.e., the first image) captured by the exterior camera (i.e., the first camera). Sight-line calculation means computing the spatial coordinates of the driver's sight line from an interior image (i.e., a second image) captured by the interior camera (i.e., the second camera). Sight-line coordinate transfer means transmitting the computed spatial coordinates of the driver's sight line to a calculation unit that needs them, for example the third calculation unit.
Occlusion region calculation means computing the third image (i.e., the area blocked by the A-pillar when the driver looks out of the car along the sight line) from the exterior scene image, the exterior scene spatial information, and the transferred sight-line spatial coordinates. Image enhancement calculation means performing enhancement processing on the third image.
The flow of the image processing scheme for the automobile A-pillar display of the embodiment is as follows:
First, the image processing system starts the exterior camera and the interior camera. Optionally, the delay between the two cameras' exposure starts is controlled to within 1/5 of the per-frame time. For example, when each exterior frame takes 16 ms, the delay is controlled to within 3.2 ms, so that the A-pillar display's picture, the driver's sight line, and the exterior scene stay synchronized in real time without noticeable lag. After starting, both cameras begin acquiring images: the exterior camera captures the scenery outside the vehicle body, and the interior camera captures the driver's face. The system analyzes the driver's sight line from the collected face images and transmits the sight line's spatial coordinates to the subsequent calculation unit. Meanwhile, the system computes the exterior spatial information from the exterior images. Then, once the driver's sight-line information has been computed, the system computes the image of the occluded area, maps it onto the A-pillar display according to the spatial mapping relationship, applies display enhancement processing, and finally shows the image on the A-pillar display screen for the driver to view.
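The exposure-delay budget in this step is simple arithmetic, sketched here for clarity (the function name is ours):

```python
def max_exposure_delay_ms(frame_time_ms: float) -> float:
    """Delay budget between the two cameras' exposure starts:
    one fifth of the per-frame time, per the step above."""
    return frame_time_ms / 5.0

# 16 ms per frame -> exposure starts must align within 3.2 ms.
budget = max_exposure_delay_ms(16.0)
```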
Fig. 2 shows the pipeline design of the image processing scheme for the automobile A-pillar display described above. As shown in Fig. 2, the spatial information calculation starts at the 1st time slot and requires a calculation time of 4 slot units. The sight-line calculation starts at the 1st time slot and requires 1 slot unit. The sight-line coordinate transfer starts after the sight-line calculation completes and requires 3 slot units. The occlusion region calculation starts after the spatial information calculation and the sight-line coordinate transfer complete and requires 3 slot units. The image enhancement calculation starts after the occlusion region calculation completes and requires 1 slot unit. After the image enhancement calculation completes, the enhanced image is shown on the A-pillar display screen, with a display time of 3 slot units. If all of these calculations were processed serially, it would take a long time to display the exterior image on the A-pillar, causing severe latency and choppiness. Therefore, by choosing an appropriate time slot and first time length, the interval between two adjacent calculations is limited to at most the first time length, avoiding severe latency and choppiness problems.
In the present embodiment, the pipeline processing uses a 10 ms time slot as its time unit, and the first time length is set to 30 ms (i.e., 3 slot units). The first computing unit includes at least two parallel subunits: 3 slot units after the first parallel subunit starts computing, the second parallel subunit starts, so that a calculation result is output within the first time length for use by subsequent computing units, and the final enhanced image can be obtained within 30 milliseconds and displayed to the driver.
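The subunit count implied by these numbers can be sketched as a ceiling division (our reading of the example, not a formula stated in the patent):

```python
import math

def subunits_needed(compute_slots: int, interval_slots: int) -> int:
    """Parallel subunits required so that a unit taking compute_slots
    per calculation still emits a result every interval_slots."""
    return math.ceil(compute_slots / interval_slots)

# Spatial information calculation: 4 slots per run, 3-slot output interval.
n = subunits_needed(4, 3)
```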
The present application also provides an image processing apparatus comprising a memory and a processor, the memory storing instructions executable by the processor, and the processor executing the instructions to implement the image processing method described above.
Fig. 3 shows an architecture diagram of an image processing apparatus according to an embodiment of the present application. Referring to fig. 3, the image processing apparatus 300 may include a memory 310 and a processor 320. The memory 310 is used to store instructions that are executable by the processor 320. The processor 320 is configured to execute instructions to implement the image processing method described above.
The image processing apparatus 300 may include an internal communication bus 301, a processor 302, a read-only memory (ROM) 303, a random access memory (RAM) 304, and a communication port 305. When applied to a personal computer, the image processing apparatus 300 may further include a hard disk 307. The internal communication bus 301 enables data communication among the components of the image processing apparatus 300. The processor 302 makes determinations and issues prompts. In some embodiments, the processor 302 may consist of one or more processors. The communication port 305 enables data communication between the image processing apparatus 300 and the outside; in some embodiments, the image processing apparatus 300 may send and receive information and data over a network through the communication port 305. The image processing apparatus 300 may also comprise various forms of program storage units and data storage units, such as the hard disk 307, the read-only memory (ROM) 303, and the random access memory (RAM) 304, capable of storing various data files used in computer processing and/or communication, as well as program instructions executed by the processor 302. The processor executes these instructions to implement the main parts of the method. The results processed by the processor are transmitted to the user device through the communication port and displayed on the user interface.
It is to be understood that the image processing method of the present application is not limited to being implemented by one image processing apparatus, but may be cooperatively implemented by a plurality of online image processing apparatuses. The online image processing devices may be connected and communicate via a local area network or a wide area network.
Further implementation details of the image processing apparatus of the present embodiment may refer to the embodiments described in fig. 1 to 2, and are not described herein.
The application also provides an image processing system. FIG. 4 is a block diagram illustrating an image processing system according to an embodiment of the present application. As shown in fig. 4, the image processing system 400 includes an acquisition module 401 and a pipeline module 402.
The acquiring module 401 is configured to acquire a first image stream captured by a first camera and a second image stream captured by a second camera, where the first image stream includes a plurality of first images and the second image stream includes a plurality of second images.
The pipeline module 402 is configured to perform pipeline processing on the first image stream and the second image stream using a plurality of computing units. The plurality of computing units includes a first calculation unit 4021, a second calculation unit 4022, and a third calculation unit 4023. The first calculation unit 4021 performs a first calculation based on each first image of the first image stream to obtain first information. The second calculation unit 4022 performs a second calculation based on each second image of the second image stream to obtain second information. The third calculation unit 4023 performs a third calculation using the first image, the first information, and the second information to obtain a third image. Any computing unit of the plurality of computing units includes a number of parallel computing subunits such that the time interval between two adjacent calculations is less than or equal to the first time length.
For the operations performed by the modules 401 and 402, reference may be made to the description of steps 101 and 102 in the embodiments of FIGS. 1 and 2, respectively, which are not repeated here.
The present application also provides a computer readable medium having stored thereon computer program code which, when executed by a processor, implements the image processing method as described above.
In an embodiment of the present application, the computer program code may implement the image processing method described above when executed by the processor 320 in the controller 300 shown in fig. 3.
For example, the image processing method of the present application can be implemented as a computer program, stored in the memory 310, and loaded into the processor 320 for execution, thereby implementing the image processing method of the present application.
When the image processing method is implemented as a computer program, it may be stored in a computer-readable storage medium as an article of manufacture. For example, computer-readable storage media can include, but are not limited to, magnetic storage devices (e.g., hard disks, floppy disks, magnetic strips), optical disks (e.g., compact disk (CD), digital versatile disk (DVD)), smart cards, and flash memory devices (e.g., electrically erasable programmable read-only memory (EEPROM), cards, sticks, key drives). In addition, various storage media described herein can represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable medium" can include, without being limited to, wireless channels and various other media (and/or storage media) capable of storing, containing, and/or carrying code and/or instructions and/or data.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing disclosure is by way of example only, and is not intended to limit the present application. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application.
Also, this application uses specific language to describe embodiments of the application. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the present application is included in at least one embodiment of the present application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the present application may be combined as appropriate.
Aspects of the methods and systems of the present application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." The processor may be one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, or a combination thereof. Furthermore, aspects of the present application may be embodied as a computer product, including computer-readable program code, on one or more computer-readable media. For example, computer-readable media can include, but are not limited to, magnetic storage devices (e.g., hard disks, floppy disks, magnetic strips), optical disks (e.g., compact disk (CD), digital versatile disk (DVD)), smart cards, and flash memory devices (e.g., cards, sticks, key drives).
A computer readable signal medium may comprise a propagated data signal with computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, and the like, or any suitable combination. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code on a computer readable signal medium may be propagated over any suitable medium, including radio, electrical cable, fiber optic cable, radio frequency signals, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of the present application may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service, such as software as a service (SaaS).
Additionally, the order in which elements and sequences of the processes described herein are processed, the use of alphanumeric characters, or the use of other designations, is not intended to limit the order of the processes and methods described herein, unless explicitly claimed. While various presently contemplated embodiments of the application have been discussed in the foregoing disclosure by way of example, it should be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments of the application. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to require more features than are expressly recited in the claims. Indeed, an embodiment may be characterized as having fewer than all of the features of a single embodiment disclosed above.
Some embodiments use numerals to describe quantities of components, attributes, and the like; it should be understood that such numerals used in the description of the embodiments are in some instances modified by the terms "about", "approximately", or "substantially". Unless otherwise indicated, "about", "approximately", or "substantially" indicates that the stated number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of individual embodiments. In some embodiments, numerical parameters should be interpreted in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments are approximations, in specific examples such numerical values are set forth as precisely as practicable.
Although the present application has been described with reference to the present specific embodiments, it will be recognized by those skilled in the art that the foregoing embodiments are merely illustrative of the present application and that various changes and substitutions of equivalents may be made without departing from the spirit of the application, and therefore, it is intended that all changes and modifications to the above-described embodiments that come within the spirit of the application fall within the scope of the claims of the application.

Claims (15)

1. An image processing method comprising:
acquiring a first image stream shot by a first camera and a second image stream shot by a second camera, wherein the first image stream comprises a plurality of first images, and the second image stream comprises a plurality of second images; and
performing pipeline processing on the first image stream and the second image stream using a plurality of computational units, wherein the plurality of computational units comprises:
a first calculation unit that performs a first calculation based on each first image of the first image stream to obtain first information;
a second calculation unit that performs a second calculation based on each second image of the second image stream to obtain second information; and
a third calculation unit that performs a third calculation using the first image, the first information, and the second information to obtain a third image;
wherein each calculation unit of the plurality of calculation units comprises a corresponding number of parallel calculation subunits, such that a time interval between two adjacent calculations is less than or equal to a first time length.
2. The method of claim 1, wherein the first camera is an off-board camera for acquiring off-board scene information, the second camera is an in-board camera for acquiring driver's gaze information, the first image stream includes an off-board image stream of a car, the second image stream includes an in-board image stream of a car, the first calculation includes a spatial information calculation, the second calculation includes a gaze calculation, and the third calculation includes an occlusion region calculation.
3. The method of claim 2, wherein the off-board camera and the in-board camera are both high speed cameras; and the acquisition frame rates of the camera outside the vehicle and the camera inside the vehicle are the same.
4. The method of claim 2, wherein the second calculation further comprises line-of-sight coordinate transfer.
5. The method of claim 2, wherein the third calculation further comprises an image enhancement calculation.
6. The method of claim 2, wherein the method further comprises: and displaying the third image to the driver.
7. The method of claim 1, wherein the pipeline processing is performed in time units of slots.
8. The method of claim 1 or 2, wherein the performing pipeline processing further comprises the steps of:
and when any one of the plurality of calculation units fails in a calculation, taking the previous calculation result of that calculation unit as the current calculation result.
9. The method of claim 1 or 2, wherein when a single execution of a calculation by any of the plurality of calculation units is longer than a second length of time, all steps of the method are re-executed.
10. The method of claim 1 or 2, further comprising:
accelerating any of the plurality of computing units using an accelerator.
11. The method of claim 1 or 2, wherein the first image used by the third calculation unit is the latest first image in the first image stream.
12. The method of claim 1 or 2, wherein the pipeline processing employs a round-robin token synchronization mechanism.
13. An image processing apparatus comprising: a memory for storing instructions executable by the processor; and a processor for executing the instructions to implement the method of any one of claims 1-12.
14. An image processing system comprising:
an acquisition module, configured to acquire a first image stream captured by a first camera and a second image stream captured by a second camera, where the first image stream includes a plurality of first images and the second image stream includes a plurality of second images; and
a pipeline module to perform pipeline processing on the first image stream and the second image stream using a plurality of computational units, wherein the plurality of computational units comprises:
a first calculation unit that performs a first calculation based on each first image of the first image stream to obtain first information;
a second calculation unit that performs a second calculation based on each second image of the second image stream to obtain second information; and
a third calculation unit that performs a third calculation using the first image, the first information, and the second information to obtain a third image;
wherein each calculation unit of the plurality of calculation units comprises a corresponding number of parallel calculation subunits, such that a time interval between two adjacent calculations is less than or equal to a first time length.
15. A computer-readable medium having stored thereon computer program code which, when executed by a processor, implements the method of any of claims 1-12.
CN202011567236.5A 2020-12-25 2020-12-25 Image processing method, apparatus, system, and computer-readable medium Pending CN112738469A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011567236.5A CN112738469A (en) 2020-12-25 2020-12-25 Image processing method, apparatus, system, and computer-readable medium
PCT/CN2021/094775 WO2022134442A1 (en) 2020-12-25 2021-05-20 Image processing method, device and system, and computer-readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011567236.5A CN112738469A (en) 2020-12-25 2020-12-25 Image processing method, apparatus, system, and computer-readable medium

Publications (1)

Publication Number Publication Date
CN112738469A true CN112738469A (en) 2021-04-30

Family

ID=75616566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011567236.5A Pending CN112738469A (en) 2020-12-25 2020-12-25 Image processing method, apparatus, system, and computer-readable medium

Country Status (2)

Country Link
CN (1) CN112738469A (en)
WO (1) WO2022134442A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022134442A1 (en) * 2020-12-25 2022-06-30 合众新能源汽车有限公司 Image processing method, device and system, and computer-readable medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1678988A (en) * 2002-09-04 2005-10-05 Arm有限公司 Synchronisation between pipelines in a data processing apparatus
CN109934076A (en) * 2017-12-19 2019-06-25 广州汽车集团股份有限公司 Generation method, device, system and the terminal device of the scene image of vision dead zone
CN110717945A (en) * 2019-09-25 2020-01-21 深圳疆程技术有限公司 Vision calibration method, vehicle machine and automobile
CN110874817A (en) * 2018-08-29 2020-03-10 上海商汤智能科技有限公司 Image stitching method and device, vehicle-mounted image processing device, electronic equipment and storage medium
CN110901534A (en) * 2019-11-14 2020-03-24 浙江合众新能源汽车有限公司 A-pillar perspective implementation method and system
CN111277796A (en) * 2020-01-21 2020-06-12 深圳市德赛微电子技术有限公司 Image processing method, vehicle-mounted vision auxiliary system and storage device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10618467B2 (en) * 2016-03-22 2020-04-14 Research & Business Foundation Sungkyunkwan University Stereo image generating method using mono cameras in vehicle and providing method for omnidirectional image including distance information in vehicle
CN107640111B (en) * 2017-07-27 2020-07-24 北京德天泉机电技术研究院 Automobile visual image processing system and method based on hundred-core microprocessor control
EP3707572B1 (en) * 2017-11-10 2023-08-23 Nvidia Corporation Systems and methods for safe and reliable autonomous vehicles
CN110764609A (en) * 2018-07-27 2020-02-07 博世汽车部件(苏州)有限公司 Method and device for data synchronization and computing equipment
CN109741456A (en) * 2018-12-17 2019-05-10 深圳市航盛电子股份有限公司 3D based on GPU concurrent operation looks around vehicle assistant drive method and system
CN110688952B (en) * 2019-09-26 2022-04-12 北京市商汤科技开发有限公司 Video analysis method and device
CN110962865A (en) * 2019-12-24 2020-04-07 国汽(北京)智能网联汽车研究院有限公司 Automatic driving safety computing platform
CN111192230B (en) * 2020-01-02 2023-09-19 北京百度网讯科技有限公司 Multi-camera-based image processing method, device, equipment and readable storage medium
CN112738469A (en) * 2020-12-25 2021-04-30 浙江合众新能源汽车有限公司 Image processing method, apparatus, system, and computer-readable medium



Also Published As

Publication number Publication date
WO2022134442A1 (en) 2022-06-30

Similar Documents

Publication Publication Date Title
CN109917920B (en) Vehicle-mounted projection processing method and device, vehicle-mounted equipment and storage medium
CN106502427B (en) Virtual reality system and scene presenting method thereof
US10580386B2 (en) In-vehicle projected reality motion correction
CN111559371B (en) Three-dimensional parking display method, vehicle and storage medium
US20170136346A1 (en) Interaction Method, Interaction Apparatus and User Equipment
US11391952B1 (en) AR/VR controller with event camera
CN109727305B (en) Virtual reality system picture processing method, device and storage medium
CN109889807A (en) Vehicle-mounted projection adjusting method, device, equipment and storage medium
US10040353B2 (en) Information display system
CN112738496A (en) Image processing method, apparatus, system, and computer-readable medium
CN112738469A (en) Image processing method, apparatus, system, and computer-readable medium
CN111027506B (en) Method and device for determining sight direction, electronic equipment and storage medium
CN116883977A (en) Passenger state monitoring method and device, terminal equipment and vehicle
KR102244556B1 (en) Priority-based access management to shared resources
CN112667335A (en) Method, device and equipment for loading image frames of backing car and storage medium
CN109445597A (en) A kind of operation indicating method, apparatus and terminal applied to terminal
CN113507559A (en) Intelligent camera shooting method and system applied to vehicle and vehicle
CN111231826A (en) Control method, device and system for vehicle model steering lamp in panoramic image and storage medium
CN110139141A (en) Video pictures rendering method, device, storage medium and electronic equipment
US11768650B2 (en) Display control device, display system, and display control method for controlling display of information
CN210781107U (en) Vehicle-mounted data processing terminal and system
CN112215033B (en) Method, device and system for generating panoramic looking-around image of vehicle and storage medium
US20210225053A1 (en) Information processing apparatus, information processing method, and program
CN108290522A (en) The control device and method of driver assistance system for vehicle
CN115437493A (en) Image display method and device of vehicle-mounted VR glasses and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210430