WO2022033312A1 - Image processing apparatus and terminal - Google Patents

Image processing apparatus and terminal

Info

Publication number
WO2022033312A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
processing apparatus
fusion
original
Prior art date
Application number
PCT/CN2021/108960
Other languages
English (en)
French (fr)
Inventor
刁鸿浩 (DIAO Honghao)
Original Assignee
北京芯海视界三维科技有限公司
视觉技术创投私人有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京芯海视界三维科技有限公司 and 视觉技术创投私人有限公司
Publication of WO2022033312A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Definitions

  • the present application relates to the technical field of image processing, for example, to an image processing device and a terminal.
  • at present, when image fusion processing is performed, the images to be fused must first be sent to the central processing unit (CPU); the CPU fuses the images and sends the fused image to the display unit.
  • the display unit then displays the image fused by the CPU.
  • Embodiments of the present disclosure provide an image processing apparatus and terminal to solve the technical problem that images cannot be fused in time, which ultimately causes a delay in displaying the fused image and degrades the user experience.
  • the image processing apparatus includes:
  • an image fusion unit configured to fuse at least two images from at least one of the image obtaining unit and the application processor based on transparency, so that the at least two images correspondingly generate a fused image.
  • the image fusion unit is configured to superimpose the at least two images to correspondingly generate a fusion image based on the transparency of the at least two images.
  • the image fusion unit is configured to, based on transparency and depth information of the at least two images, superimpose at least two images to correspondingly generate a fusion image.
  • when the image processing apparatus includes an image acquisition unit,
  • an image obtaining unit configured to obtain at least two original images
  • the image fusion unit is configured to fuse at least two original images to obtain a fused image.
  • the original image includes at least one of an original follow-up image and an original reference image.
  • when the image processing apparatus includes an image acquisition unit and an application processor,
  • an image obtaining unit configured to obtain the original image
  • an application processor configured to obtain the enhanced image
  • the image fusion unit is further configured to: fuse the original image and the enhanced image to obtain a fused image.
  • the enhanced image includes at least one of an augmented reality image and a virtual image
  • the augmented reality image is an image obtained by enhancing the real image.
  • the virtual image includes at least one of a virtual follow-up image and a virtual reference image.
  • when the image processing apparatus includes an application processor,
  • an application processor configured to obtain at least two of an augmented reality image, a virtual follow-up image, and a virtual reference image
  • the image fusion unit is configured to fuse at least two kinds of images among the augmented reality image, the virtual follow-up image and the virtual reference image to obtain a fused image.
  • the image processing apparatus is capable of communicating with a display screen, and the display screen is configured to obtain the fused image and display it.
  • the image fusion unit is provided in the display screen.
  • the terminal includes the above-mentioned image processing apparatus.
  • the speed of image fusion processing is improved, which avoids the delay caused by all images to be fused having to pass through the central processing unit for fusion before being output, a delay that degrades the user experience.
  • FIG. 1 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present disclosure.
  • FIG. 3 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of fusion of a first image and a second image provided by an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of fusion of a first image and a third image provided by an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of fusion of a first image and a fourth image provided by an embodiment of the present disclosure
  • FIG. 7 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of a terminal provided by an embodiment of the present disclosure.
  • an embodiment of the present disclosure provides an image processing apparatus 1 , including at least one of an image obtaining unit 01 and an application processor 02 ;
  • an image fusion unit 03 configured to fuse at least two images from at least one of the image obtaining unit 01 and the application processor 02 based on transparency, so that the at least two images correspondingly generate a fused image.
  • the image processing apparatus 1 includes an image fusion unit 03 configured to fuse at least two images; the image processing apparatus 1 further includes at least one of an image acquisition unit 01 and an application processor 02 .
  • FIG. 1 is a schematic structural diagram of an image processing apparatus 1 including an image fusion unit 03 and an image acquisition unit 01
  • the image fusion unit 03 is configured to fuse at least two images from the image acquisition unit 01.
  • FIG. 2 is a schematic structural diagram of the image processing apparatus 1 including an image fusion unit 03, an image acquisition unit 01 and an application processor 02
  • the image fusion unit 03 is configured to fuse at least two images from the image acquisition unit 01 and the application processor 02. For example, one image from the image acquisition unit 01 may be fused with one image from the application processor 02. Optionally, one image from the image acquisition unit 01 may be fused with two images from the application processor 02, or with three images from the application processor 02, and so on.
  • FIG. 3 is a schematic structural diagram of the image processing apparatus 1 including an image fusion unit 03 and an application processor 02; the image fusion unit 03 is configured to fuse at least two images from the application processor 02.
  • the above-mentioned image may be a single image, or may be a sequence of images included in the captured video.
  • each of the at least two images may be selected independently of each other as a 2D image or a 3D image.
  • the at least two images are fused based on transparency, which ranges from 0 to 100%.
  • transparency can refer to fully opaque, semi-transparent, or fully transparent; the transparency of a semi-transparent image lies between fully opaque and fully transparent.
  • the image fusion unit 03 may fuse at least two completely opaque images.
  • the image fusion unit 03 may fuse at least two semitransparent images.
  • the image fusion unit 03 may fuse at least two completely transparent images.
  • the image fusion unit 03 may fuse at least two of the opaque image, the semi-transparent image and the completely transparent image.
  • the image fusion unit 03 is configured to fuse the at least two images based on transparency.
  • when the first implementation manner is used to fuse at least two images, the image fusion unit 03 is configured to superimpose the at least two images based on their transparency to generate a fused image correspondingly.
  • the at least two images may include a first image 04 and a second image 05 that are superimposed based on transparency and order to form a first fused image 06.
  • the first image 04 is an opaque image whose displayed content is a jungle.
  • the second image 05 is divided into a first area 051 and a second area 052; the first area 051 is set to be transparent, the second area 052 is set to be opaque, and the image content displayed in the second area 052 is a gun. The first image 04 and the second image 05 are superimposed in a certain order, and after superposition a first fused image 06 is generated based on the transparency of the different images.
  • when the second implementation manner is used to fuse at least two images, the image fusion unit 03 is configured to superimpose the at least two images based on their transparency and depth information to generate a fused image correspondingly.
  • the at least two images may include a first image 04 and a third image 07
  • the fusion process for fusing the first image 04 and the third image 07 is as follows:
  • the images are first superimposed based on their depth information; as shown in Figure 5, based on the depth information, the third image 07 is on top of the first image 04, and the result is then adjusted based on transparency. For example, in the third image 07, the first area 071 is a transparent area and the second area 072 is an opaque area.
  • the first image 04 and the third image 07 are fused to generate the second fused image 08.
  • since the first area 071 of the third image 07 is a transparent area, the first area 081 of the second fused image 08 displays the corresponding part of the first image 04; since the second area 072 of the third image 07 is an opaque area, the content displayed in the second area 082 of the second fused image 08 is exactly the same as the content displayed in the second area 072 of the third image 07.
  • the at least two images may include a first image 04 and a fourth image 09
  • the fusion process for fusing the first image 04 and the fourth image 09 is as follows:
  • the images are first superimposed based on their depth information; as shown in Figure 6, based on the depth information, the fourth image 09 is on top of the first image 04, and the result is then adjusted based on transparency. For example, in the fourth image 09, the first area 091 is a transparent area and the second area 092 is a semi-transparent area; that is, the transparency of the second area 092 is lower than that of the first area 091. The image displayed in the second area 092 of the fourth image 09 is, for example, a glass cup or a transparent plastic bottle.
  • the first image 04 and the fourth image 09 are fused to generate the third fused image 10.
  • in the third fused image 10, since the first area 091 of the fourth image 09 is a transparent area, the first area 101 displays the corresponding part of the first image 04; since the second area 092 of the fourth image 09 is semi-transparent, the content displayed in the second area 102 of the third fused image 10 is the fusion of the second area 092 of the fourth image 09 with the corresponding position of the first image 04 behind it.
  • the fusion of at least two images mentioned in the following embodiments may be performed by using the above-mentioned fusion method.
  • when the image processing apparatus 1 includes the image obtaining unit 01,
  • an image obtaining unit 01 configured to obtain at least two original images
  • the image fusion unit 03 is configured to fuse at least two original images to obtain a fused image.
  • the above-mentioned “obtaining” may be active acquisition or passive reception.
  • “Acquiring” in the following embodiments can be understood as active acquisition or passive reception.
  • the image processing apparatus 1 may include an image obtaining unit 01 and an image fusion unit 03.
  • the image obtaining unit 01 is, for example, a camera, and is configured to obtain at least two original images; an original image may be an image obtained from a real scene without any processing, for example an image of a real scene captured by a camera, or an image sequence included in a video of a real scene captured by a camera.
  • the image fusion unit 03 is used to fuse at least two original images.
  • the original image includes at least one of an original follow-up image and an original reference image.
  • the original image may include an original follow-up image.
  • the original image may include an original reference image.
  • the original image may include an original follow-up image and an original reference image.
  • the original follow-up image may be an image obtained by a camera based on a real scene (e.g., a panoramic image).
  • the original follow-up image may change as the user's viewing angle changes.
  • the original reference image may be an image of a certain target based on a real scene obtained by a camera.
  • the original reference image may be fixed or movable, and may not change with the user's viewing angle.
  • when the image fusion unit 03 fuses at least two original images, for example: at least two original follow-up images are fused; at least two original reference images are fused; or at least one original follow-up image is fused with at least one original reference image.
  • when the image processing apparatus 1 includes an image obtaining unit 01 and an application processor 02,
  • an image obtaining unit 01 configured to obtain an original image
  • an application processor 02 configured to obtain an enhanced image
  • the image fusion unit 03 is further configured to: fuse the original image and the enhanced image to obtain a fused image.
  • the image processing apparatus 1 may include an image obtaining unit 01, an application processor 02, and an image fusion unit 03.
  • the image obtaining unit 01 is used to obtain at least one original image; the application processor 02 is used to obtain at least one enhanced image.
  • the application processor 02 may obtain the enhanced image in at least one of the following ways: the application processor 02 obtains the enhanced image directly from a storage medium; the application processor 02 generates the enhanced image based on a certain image, for example based on the original image; or the application processor 02 generates the enhanced image directly based on its own logic.
  • the original image includes at least one of an original follow-up image and an original reference image.
  • the enhanced image includes at least one of an augmented reality image and a virtual image.
  • the enhanced image may include an augmented reality image.
  • the enhanced image may include a virtual image.
  • the enhanced image may include an augmented reality image and a virtual image.
  • the augmented reality image is an image obtained by enhancement based on a real image (an image obtained from a real scene).
  • for example, the augmented reality image may be an image obtained by color enhancement of a real image.
  • the virtual image is an image created from scratch, for example an image of a black hole scene, an image of the scene outside a spacecraft, or an image of the scene inside a spacecraft cabin.
  • the virtual image includes at least one of a virtual follow-up image and a virtual reference image.
  • the virtual image may include a virtual follow-up image.
  • the virtual image may include a virtual reference image.
  • the virtual image may include a virtual follow-up image and a virtual reference image.
  • the virtual follow-up image, for example an image of the scene outside a spacecraft, may change as the viewer's viewing angle changes.
  • the virtual reference image, for example an image of the scene inside the spacecraft cabin, such as the operating console in the cabin, may not change as the viewer's viewing angle changes.
  • the image fusion unit 03 is configured to fuse the original image from the image obtaining unit 01 with the enhanced image from the application processor 02, and the original image may include at least one of the original follow-up image and the original reference image.
  • the enhanced image may include at least one of an augmented reality image, a virtual follow-up image, and a virtual reference image. Therefore, fusing the original image and the enhanced image means fusing at least one of the images included in the original image with at least one of the images included in the enhanced image.
  • for example: the original follow-up image may be fused with the augmented reality image; the original reference image with the augmented reality image; the original follow-up image with the virtual follow-up image; the original reference image with the virtual follow-up image; the original follow-up image with the virtual reference image; or the original reference image with the virtual reference image. Likewise, any combination of three, four, or all five of the original follow-up image, the original reference image, the augmented reality image, the virtual follow-up image, and the virtual reference image may be fused, provided it includes at least one original image and at least one enhanced image.
  • when the image processing apparatus 1 includes the application processor 02,
  • the application processor 02 is configured to obtain at least two of an augmented reality image, a virtual follow-up image and a virtual reference image;
  • the image fusion unit 03 is configured to fuse at least two kinds of images among the augmented reality image, the virtual follow-up image and the virtual reference image to obtain a fused image.
  • the image processing apparatus 1 may include an application processor 02 and an image fusion unit 03, where the application processor 02 is configured to obtain at least two kinds of images among the augmented reality image, the virtual follow-up image and the virtual reference image.
  • the application processor 02 can obtain an augmented reality image and a virtual follow-up image
  • the application processor 02 can obtain an augmented reality image and a virtual reference image
  • the application processor 02 can obtain a virtual follow-up image and a virtual reference image
  • the application processor 02 can also obtain an augmented reality image, a virtual follow-up image, and a virtual reference image.
  • the image fusion unit 03 is configured to fuse at least two kinds of images obtained by the application processor 02 to obtain a fused image.
  • the image processing apparatus 1 can communicate with a display screen 11, and the display screen 11 is configured to obtain a fused image and display the fused image.
  • the display screen 11 is connected to the image fusion unit 03 for displaying the fusion image obtained from the image fusion unit 03 .
  • the image fusion unit 03 and the display screen 11 may be provided separately and independently, and the image fusion unit 03 may also be provided in the display screen 11 .
  • FIG. 7 only exemplarily shows the case where the image fusion unit 03 and the display screen 11 are arranged separately and independently.
  • an embodiment of the present disclosure provides a terminal 2 , where the terminal 2 includes the above-mentioned image processing apparatus 1 .
  • by providing, in the image processing apparatus 1, an image fusion unit 03 independent of the central processing unit and using it to fuse at least two images, the speed of image fusion processing is improved, which avoids the delay caused by all images to be fused having to pass through the central processing unit for fusion before being output, a delay that degrades the user experience.
  • a first element could be termed a second element, and, similarly, a second element could be termed a first element, so long as all occurrences of the "first element" are consistently renamed and all occurrences of the "second element" are consistently renamed.
  • the first element and the second element are both elements, but may not be the same element.
  • the terms used in this application are used to describe the embodiments only and not to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise.
  • the term “and/or” as used in this application is meant to include any and all possible combinations of one or more of the associated listings.
  • the term "comprise" and its variations "comprises" and/or "comprising" refer to the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups of these.
  • an element qualified by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, or device that includes that element.
  • each embodiment may focus on the differences from other embodiments, and the same and similar parts between the various embodiments may refer to each other.
  • the methods, products, etc. disclosed in the embodiments if they correspond to the method section disclosed in the embodiments, reference may be made to the description of the method section for relevant parts.
  • the disclosed methods and products may be implemented in other ways.
  • the apparatus embodiments described above are only illustrative.
  • the division of units may only be a division by logical function; in actual implementation there may be other division manners.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • Units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. This embodiment may be implemented by selecting some or all of the units according to actual needs.
  • each functional unit in the embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An image processing apparatus (1), including at least one of an image obtaining unit (01) and an application processor (02), and further including an image fusion unit (03) configured to fuse, based on transparency, at least two images from at least one of the image obtaining unit (01) and the application processor (02), so that a fused image is generated correspondingly from the at least two images. By providing, in the image processing apparatus (1), an image fusion unit (03) independent of the central processing unit and using the image fusion unit (03) to fuse at least two images, the speed of image fusion processing is improved, which avoids the delay caused by all images to be fused having to pass through the central processing unit for fusion before being output, a delay that degrades the user experience. A terminal (2) is also disclosed.

Description

Image processing apparatus and terminal
This application claims priority to the Chinese patent application No. 202010799324.1, entitled "Image processing apparatus and terminal", filed with the China National Intellectual Property Administration on August 11, 2020, the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the technical field of image processing, and for example to an image processing apparatus and terminal.
Background
At present, when image fusion processing is performed, the images to be fused must first be sent to the central processing unit (CPU); the CPU performs fusion processing on the images and sends the fused image to the display unit, and the display unit displays the image fused by the CPU.
In the course of implementing the embodiments of the present disclosure, at least the following problem was found in the related art: when images need to be fused, the fusion is performed by the CPU; because the CPU also has to process other data at the same time, the amount of data it handles is too large and the images cannot be fused in time, which ultimately causes a delay when the fused image is displayed and degrades the user experience.
Summary
To provide a basic understanding of some aspects of the disclosed embodiments, a brief summary is given below. This summary is not an extensive overview, nor is it intended to identify key or critical elements or to delimit the scope of protection of these embodiments; it serves as a preamble to the detailed description that follows.
Embodiments of the present disclosure provide an image processing apparatus and terminal to solve the technical problem that images cannot be fused in time, which ultimately causes a delay in displaying the fused image and degrades the user experience.
In some embodiments, the image processing apparatus includes:
at least one of an image obtaining unit and an application processor (AP);
and an image fusion unit configured to fuse, based on transparency, at least two images from at least one of the image obtaining unit and the application processor, so that a fused image is generated correspondingly from the at least two images.
In some embodiments, the image fusion unit is configured to superimpose the at least two images based on the transparency of the at least two images to generate a fused image correspondingly.
In some embodiments, the image fusion unit is configured to superimpose the at least two images based on the transparency and depth information of the at least two images to generate a fused image correspondingly.
In some embodiments, when the image processing apparatus includes an image obtaining unit,
the image obtaining unit is configured to obtain at least two original images;
the image fusion unit is configured to fuse the at least two original images to obtain a fused image.
In some embodiments, the original image includes at least one of an original follow-up image and an original reference image.
In some embodiments, when the image processing apparatus includes an image obtaining unit and an application processor,
the image obtaining unit is configured to obtain an original image;
the application processor is configured to obtain an enhanced image;
the image fusion unit is further configured to fuse the original image and the enhanced image to obtain a fused image.
In some embodiments, the enhanced image includes at least one of an augmented reality image and a virtual image;
wherein the augmented reality image is an image obtained by enhancement based on a real image.
In some embodiments, the virtual image includes at least one of a virtual follow-up image and a virtual reference image.
In some embodiments, when the image processing apparatus includes an application processor,
the application processor is configured to obtain at least two of an augmented reality image, a virtual follow-up image, and a virtual reference image;
the image fusion unit is configured to fuse at least two of the augmented reality image, the virtual follow-up image, and the virtual reference image to obtain a fused image.
In some embodiments, the image processing apparatus is capable of communicating with a display screen, and the display screen is configured to obtain the fused image and display it.
In some embodiments, the image fusion unit is provided in the display screen.
In some embodiments, the terminal includes the above image processing apparatus.
The image processing apparatus and terminal provided by the embodiments of the present disclosure can achieve the following technical effects:
By providing, in the image processing apparatus, an image fusion unit independent of the central processing unit and using the image fusion unit to fuse at least two images, the speed of image fusion processing is improved, which avoids the delay caused by all images to be fused having to pass through the central processing unit for fusion before being output, a delay that degrades the user experience.
The above general description and the following description are exemplary and explanatory only and are not intended to limit the present application.
Brief description of the drawings
At least one embodiment is illustrated by way of example in the corresponding drawings; these exemplary descriptions and drawings do not constitute a limitation on the embodiments. Elements with the same reference numerals in the drawings are shown as similar elements, the drawings do not constitute a limitation of scale, and in the drawings:
FIG. 1 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of the fusion of a first image and a second image provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of the fusion of a first image and a third image provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of the fusion of a first image and a fourth image provided by an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of a terminal provided by an embodiment of the present disclosure.
Reference numerals:
1: image processing apparatus; 01: image obtaining unit; 02: application processor; 03: image fusion unit; 04: first image; 05: second image; 051: first area of the second image; 052: second area of the second image; 06: first fused image; 07: third image; 071: first area of the third image; 072: second area of the third image; 08: second fused image; 081: first area of the second fused image; 082: second area of the second fused image; 09: fourth image; 091: first area of the fourth image; 092: second area of the fourth image; 10: third fused image; 101: first area of the third fused image; 102: second area of the third fused image; 11: display screen; 2: terminal.
Detailed description
For a more detailed understanding of the features and technical content of the embodiments of the present disclosure, the implementation of the embodiments is described in detail below with reference to the accompanying drawings; the attached drawings are for reference and illustration only and are not intended to limit the embodiments of the present disclosure. In the following technical description, for convenience of explanation, multiple details are provided to give a full understanding of the disclosed embodiments. However, at least one embodiment can still be practiced without these details. In other cases, well-known structures and devices may be shown in simplified form in order to simplify the drawings.
As shown in FIG. 1, FIG. 2 and FIG. 3, an embodiment of the present disclosure provides an image processing apparatus 1, including at least one of an image obtaining unit 01 and an application processor 02;
and an image fusion unit 03 configured to fuse, based on transparency, at least two images from at least one of the image obtaining unit 01 and the application processor 02, so that a fused image is generated correspondingly from the at least two images.
In some embodiments, the image processing apparatus 1 includes an image fusion unit 03 for fusing at least two images; the image processing apparatus 1 further includes at least one of an image obtaining unit 01 and an application processor 02.
FIG. 1 is a schematic structural diagram of the image processing apparatus 1 including an image fusion unit 03 and an image obtaining unit 01; the image fusion unit 03 is configured to fuse at least two images from the image obtaining unit 01.
FIG. 2 is a schematic structural diagram of the image processing apparatus 1 including an image fusion unit 03, an image obtaining unit 01 and an application processor 02; the image fusion unit 03 is configured to fuse at least two images from the image obtaining unit 01 and the application processor 02. For example, one image from the image obtaining unit 01 may be fused with one image from the application processor 02. Optionally, one image from the image obtaining unit 01 may be fused with two images from the application processor 02, or one image from the image obtaining unit 01 may be fused with three images from the application processor 02, and so on.
FIG. 3 is a schematic structural diagram of the image processing apparatus 1 including an image fusion unit 03 and an application processor 02; the image fusion unit 03 is configured to fuse at least two images from the application processor 02.
In some embodiments, the above images may be single images, or image sequences included in captured video. Optionally, each of the at least two images may be independently selected as a 2D image or a 3D image.
In some embodiments, at least two images are fused based on transparency, and the above transparency ranges from 0 to 100%. For example, it can refer to fully opaque, semi-transparent, and fully transparent, where the transparency of a semi-transparent image lies between fully opaque and fully transparent. Optionally, the image fusion unit 03 may fuse at least two fully opaque images. Optionally, the image fusion unit 03 may fuse at least two semi-transparent images. Optionally, the image fusion unit 03 may fuse at least two fully transparent images. Optionally, the image fusion unit 03 may fuse at least two of an opaque image, a semi-transparent image, and a fully transparent image.
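The embodiments here describe the blend only qualitatively, and the patent gives no explicit formula. A standard per-pixel alpha-compositing expression consistent with the described behavior (a sketch, with $\alpha \in [0, 1]$ denoting opacity, i.e. one minus transparency) is

$$C_{\text{fused}} = \alpha_{\text{top}} \cdot C_{\text{top}} + (1 - \alpha_{\text{top}}) \cdot C_{\text{bottom}},$$

where $C_{\text{top}}$ and $C_{\text{bottom}}$ are the pixel colors of the upper and lower images: with $\alpha_{\text{top}} = 1$ (fully opaque) the upper image replaces the lower one, and with $\alpha_{\text{top}} = 0$ (fully transparent) the lower image shows through.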
In some embodiments, the image fusion unit 03 is configured to fuse at least two images based on transparency. There are two ways to implement fusing at least two images based on transparency: in the first, the at least two images are fused based on their order and transparency; in the second, the at least two images are fused based on their transparency and depth information.
In some embodiments, when the first implementation is used to fuse at least two images, the image fusion unit 03 is configured to superimpose the at least two images based on the transparency of the at least two images to generate a fused image correspondingly.
In some embodiments, as shown in FIG. 4, the at least two images may include a first image 04 and a second image 05, which are superimposed based on transparency and order to form a first fused image 06. The first image 04 is an opaque image whose displayed content is a jungle; the second image 05 is divided into a first area 051 and a second area 052, where the first area 051 is set to be transparent, the second area 052 is set to be opaque, and the image content displayed in the second area 052 is a gun. The first image 04 and the second image 05 are superimposed in a certain order, and after superposition the first fused image 06 is generated based on the transparency of the different images.
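As an illustration of the first implementation (superposition by order and transparency), the following is a minimal sketch and not the patent's implementation; the NumPy RGBA representation, the array shapes, and the function name are assumptions made for the example:

```python
import numpy as np

def fuse_by_order(layers):
    """Composite layers back-to-front based on per-pixel transparency.

    layers: list of float arrays of shape (H, W, 4) with values in [0, 1],
            ordered bottom layer first; channel 3 is opacity (1 - transparency).
    Returns the fused RGB image of shape (H, W, 3).
    """
    fused = layers[0][..., :3].copy()   # bottom layer, e.g. the jungle (first image 04)
    for layer in layers[1:]:            # e.g. the overlay containing the gun (second image 05)
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        # where alpha == 0 (transparent area 051) the lower layer shows through;
        # where alpha == 1 (opaque area 052) the upper layer replaces it
        fused = alpha * rgb + (1.0 - alpha) * fused
    return fused
```

With the jungle image as the bottom layer and the gun overlay on top, fuse_by_order([image04, image05]) reproduces the behavior shown in FIG. 4.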
As shown in FIG. 5 and FIG. 6, in some embodiments, when the second implementation is used to fuse at least two images, the image fusion unit 03 is configured to superimpose the at least two images based on the transparency and depth information of the at least two images to generate a fused image correspondingly.
In some embodiments, as shown in FIG. 5, the at least two images may include a first image 04 and a third image 07. The fusion process for fusing the first image 04 and the third image 07 is as follows: the images are first superimposed based on the depth information of the different images; as shown in FIG. 5, based on the depth information, the third image 07 is on top of the first image 04; the result is then adjusted based on transparency. For example, in the third image 07, the first area 071 is a transparent area and the second area 072 is an opaque area. The first image 04 and the third image 07 are fused to generate the second fused image 08. In the second fused image 08, since the first area 071 of the third image 07 is a transparent area, the first area 081 of the second fused image 08 displays the corresponding part of the first image 04; since the second area 072 of the third image 07 is an opaque area, the content displayed in the second area 082 of the second fused image 08 is exactly the same as the content displayed in the second area 072 of the third image 07.
In some embodiments, as shown in FIG. 6, the at least two images may include a first image 04 and a fourth image 09. The fusion process for fusing the first image 04 and the fourth image 09 is as follows: the images are first superimposed based on the depth information of the different images; as shown in FIG. 6, based on the depth information, the fourth image 09 is on top of the first image 04; the result is then adjusted based on transparency. For example, in the fourth image 09, the first area 091 is a transparent area and the second area 092 is a semi-transparent area; that is, the transparency of the second area 092 of the fourth image 09 is lower than that of the first area 091 of the fourth image 09. For example, the image displayed in the second area 092 of the fourth image 09 is a glass cup, a transparent plastic bottle, or the like. The first image 04 and the fourth image 09 are fused to generate the third fused image 10. In the third fused image 10, since the first area 091 of the fourth image 09 is a transparent area, the first area 101 displays the corresponding part of the first image 04; since the second area 092 of the fourth image 09 is a semi-transparent area, the content displayed in the second area 102 of the third fused image 10 is the fusion of the second area 092 of the fourth image 09 with the corresponding position of the first image 04 behind it.
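The second implementation adds depth information to decide the stacking order before the same transparency blend is applied. Below is a sketch under one further assumption (a single scalar depth per image, with a larger value meaning farther from the viewer; the patent only says that the images are superimposed based on depth information):

```python
def fuse_by_depth(images):
    """images: list of (rgba, depth) pairs, with rgba as in fuse_by_order above.

    Stacks the images farthest-first by depth and then blends by transparency,
    so a semi-transparent region (e.g. the glass cup in the fourth image 09)
    mixes with the image behind it instead of replacing it.
    """
    ordered = sorted(images, key=lambda pair: pair[1], reverse=True)  # farthest first
    return fuse_by_order([rgba for rgba, depth in ordered])
```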
In some embodiments, the fusion of at least two images mentioned in the following embodiments may be performed using the fusion methods described above.
In some embodiments, when the image processing apparatus 1 includes the image obtaining unit 01,
the image obtaining unit 01 is configured to obtain at least two original images;
the image fusion unit 03 is configured to fuse the at least two original images to obtain a fused image.
In some embodiments, the above "obtaining" may be active acquisition or passive reception; "obtaining" in the following embodiments can likewise be understood as active acquisition or passive reception.
In some embodiments, the image processing apparatus 1 may include an image obtaining unit 01 and an image fusion unit 03. The image obtaining unit 01 is, for example, a camera, and is configured to obtain at least two original images. An original image may be an image obtained from a real scene without any processing, for example an image of a real scene captured by a camera, or an image sequence included in a video of a real scene captured by a camera. The image fusion unit 03 is configured to fuse the at least two original images.
In some embodiments, the original image includes at least one of an original follow-up image and an original reference image. Optionally, the original image may include an original follow-up image. Optionally, the original image may include an original reference image. Optionally, the original image may include an original follow-up image and an original reference image. In some embodiments, the original follow-up image may be an image obtained by a camera based on a real scene (for example, a panoramic image). Optionally, the original follow-up image may change as the user's viewing angle changes. Optionally, the original reference image may be an image of a certain target obtained by a camera based on a real scene. Optionally, the original reference image may be fixed or movable, and may not change as the user's viewing angle changes.
In some embodiments, when the image fusion unit 03 fuses at least two original images, for example: at least two original follow-up images are fused; at least two original reference images are fused; or at least one original follow-up image is fused with at least one original reference image.
In some embodiments, when the image processing apparatus 1 includes an image obtaining unit 01 and an application processor 02,
the image obtaining unit 01 is configured to obtain an original image;
the application processor 02 is configured to obtain an enhanced image;
the image fusion unit 03 is further configured to fuse the original image and the enhanced image to obtain a fused image.
Optionally, the image processing apparatus 1 may include an image obtaining unit 01, an application processor 02, and an image fusion unit 03. The image obtaining unit 01 is configured to obtain at least one original image; the application processor 02 is configured to obtain at least one enhanced image. The application processor 02 may obtain the enhanced image in at least one of the following ways: the application processor 02 obtains the enhanced image directly from a storage medium; the application processor 02 generates the enhanced image based on a certain image, for example based on the original image; or the application processor 02 generates the enhanced image directly based on its own logic.
In some embodiments, the original image includes at least one of an original follow-up image and an original reference image.
In some embodiments, the enhanced image includes at least one of an augmented reality image and a virtual image. Optionally, the enhanced image may include an augmented reality image. Optionally, the enhanced image may include a virtual image. Optionally, the enhanced image may include an augmented reality image and a virtual image. In some embodiments, the augmented reality image is an image obtained by enhancement based on a real image (an image obtained from a real scene); for example, the augmented reality image may be an image obtained by color enhancement of a real image. A virtual image is an image created from scratch, for example an image of a black hole scene, an image of the scene outside a spacecraft, or an image of the scene inside a spacecraft cabin. Optionally, the virtual image includes at least one of a virtual follow-up image and a virtual reference image. Optionally, the virtual image may include a virtual follow-up image. Optionally, the virtual image may include a virtual reference image. Optionally, the virtual image may include a virtual follow-up image and a virtual reference image. In some embodiments, the virtual follow-up image, for example an image of the scene outside a spacecraft, may change as the viewer's viewing angle changes. Optionally, the virtual reference image, for example an image of the scene inside the spacecraft cabin, such as the operating console in the cabin, may not change as the viewer's viewing angle changes.
In some embodiments, the image fusion unit 03 is configured to fuse the original image from the image obtaining unit 01 with the enhanced image from the application processor 02. The original image may include at least one of the original follow-up image and the original reference image, and the enhanced image may include at least one of the augmented reality image, the virtual follow-up image, and the virtual reference image. Therefore, fusing the original image and the enhanced image means fusing at least one of the images included in the original image with at least one of the images included in the enhanced image, for example: fusing the original follow-up image with the augmented reality image; fusing the original reference image with the augmented reality image; fusing the original follow-up image with the virtual follow-up image; fusing the original reference image with the virtual follow-up image; fusing the original follow-up image with the virtual reference image; fusing the original reference image with the virtual reference image; fusing the original follow-up image, the augmented reality image and the virtual follow-up image; fusing the original follow-up image, the augmented reality image and the virtual reference image; fusing the original follow-up image, the virtual follow-up image and the virtual reference image; fusing the original reference image, the augmented reality image and the virtual follow-up image; fusing the original reference image, the augmented reality image and the virtual reference image; fusing the original reference image, the virtual follow-up image and the virtual reference image; fusing the original reference image, the original follow-up image and the augmented reality image; fusing the original reference image, the original follow-up image and the virtual follow-up image; fusing the original reference image, the original follow-up image and the virtual reference image; fusing the original reference image, the original follow-up image, the augmented reality image and the virtual follow-up image; fusing the original reference image, the original follow-up image, the augmented reality image and the virtual reference image; fusing the original reference image, the original follow-up image, the virtual reference image and the virtual follow-up image; fusing the original follow-up image, the augmented reality image, the virtual follow-up image and the virtual reference image; fusing the original reference image, the augmented reality image, the virtual follow-up image and the virtual reference image; or fusing the original follow-up image, the original reference image, the augmented reality image, the virtual follow-up image and the virtual reference image.
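The enumeration in the preceding paragraph is exactly every pairing of a non-empty subset of the two original image types with a non-empty subset of the three enhanced image types. A short sketch (illustrative only; the strings are descriptive labels, not patent terms) that generates the same 21 cases:

```python
from itertools import chain, combinations

def nonempty_subsets(items):
    """All non-empty subsets of items, smallest first."""
    return chain.from_iterable(combinations(items, k) for k in range(1, len(items) + 1))

original = ("original follow-up image", "original reference image")
enhanced = ("augmented reality image", "virtual follow-up image", "virtual reference image")

# every fusion case combines a non-empty subset of original images
# with a non-empty subset of enhanced images
fusion_cases = [o + e for o in nonempty_subsets(original) for e in nonempty_subsets(enhanced)]
print(len(fusion_cases))  # 3 * 7 = 21, matching the list above
```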
In some embodiments, when the image processing apparatus 1 includes the application processor 02,
the application processor 02 is configured to obtain at least two of an augmented reality image, a virtual follow-up image, and a virtual reference image;
the image fusion unit 03 is configured to fuse at least two of the augmented reality image, the virtual follow-up image, and the virtual reference image to obtain a fused image.
Optionally, the image processing apparatus 1 may include an application processor 02 and an image fusion unit 03, where the application processor 02 is configured to obtain at least two of the augmented reality image, the virtual follow-up image, and the virtual reference image. For example: the application processor 02 may obtain an augmented reality image and a virtual follow-up image; the application processor 02 may obtain an augmented reality image and a virtual reference image; the application processor 02 may obtain a virtual follow-up image and a virtual reference image; or the application processor 02 may obtain an augmented reality image, a virtual follow-up image, and a virtual reference image. The image fusion unit 03 is configured to fuse the at least two kinds of images obtained by the application processor 02 to obtain a fused image.
As shown in FIG. 7, in some embodiments, the image processing apparatus 1 is capable of communicating with a display screen 11, and the display screen 11 is configured to obtain the fused image and display it. In some embodiments, the display screen 11 is connected to the image fusion unit 03 and displays the fused image obtained from the image fusion unit 03. Optionally, the image fusion unit 03 and the display screen 11 may be provided separately and independently, or the image fusion unit 03 may be provided in the display screen 11. FIG. 7 only exemplarily shows the case where the image fusion unit 03 and the display screen 11 are provided separately and independently.
As shown in FIG. 8, an embodiment of the present disclosure provides a terminal 2, and the terminal 2 includes the above image processing apparatus 1.
In the embodiments of the present disclosure, by providing, in the image processing apparatus 1, an image fusion unit 03 independent of the central processing unit and using the image fusion unit 03 to fuse at least two images, the speed of image fusion processing is improved, which avoids the delay caused by all images to be fused having to pass through the central processing unit for fusion before being output, a delay that degrades the user experience.
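To make the dataflow concrete, the following is a minimal structural sketch of the apparatus described above, under stated assumptions: the class and method names (capture, render, show, refresh) are illustrative rather than taken from the patent, and fuse_by_depth is the function sketched earlier. What it illustrates is the stated effect: image data flows from the sources through the fusion unit to the display without a round trip through the central processing unit.

```python
class ImageFusionUnit:
    """Image fusion unit (03): fuses at least two images independently of the CPU."""
    def fuse(self, images):
        # images: list of (rgba, depth) pairs, as in fuse_by_depth above
        return fuse_by_depth(images)

class ImageProcessingApparatus:
    """Image processing apparatus (1): at least one image source plus the fusion unit."""
    def __init__(self, image_obtaining_unit=None, application_processor=None, display_screen=None):
        self.image_obtaining_unit = image_obtaining_unit    # e.g. a camera (01)
        self.application_processor = application_processor  # application processor (02)
        self.image_fusion_unit = ImageFusionUnit()          # image fusion unit (03)
        self.display_screen = display_screen                # display screen (11)

    def refresh(self):
        images = []
        if self.image_obtaining_unit is not None:
            images += self.image_obtaining_unit.capture()   # original images
        if self.application_processor is not None:
            images += self.application_processor.render()   # enhanced/virtual images
        fused = self.image_fusion_unit.fuse(images)         # fusion bypasses the CPU
        if self.display_screen is not None:
            self.display_screen.show(fused)                 # display the fused image
```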
The above description and the accompanying drawings sufficiently illustrate the embodiments of the present disclosure to enable those skilled in the art to practice them. Other embodiments may include structural, logical, electrical, procedural, and other changes. The embodiments represent only possible variations. Unless explicitly required, individual components and functions are optional, and the order of operations may vary. Parts and features of some embodiments may be included in or substituted for parts and features of other embodiments. The scope of the embodiments of the present disclosure includes the entire scope of the claims, as well as all available equivalents of the claims. Although the terms "first", "second", etc. may be used in this application to describe elements, these elements should not be limited by these terms; these terms are only used to distinguish one element from another. For example, without changing the meaning of the description, a first element could be termed a second element, and, similarly, a second element could be termed a first element, so long as all occurrences of the "first element" are consistently renamed and all occurrences of the "second element" are consistently renamed. The first element and the second element are both elements, but may not be the same element. Moreover, the words used in this application are used to describe the embodiments only and are not used to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this application refers to and encompasses any and all possible combinations of one or more of the associated listed items. In addition, the term "comprise" and its variations "comprises" and/or "comprising" refer to the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups of these. Without further limitation, an element qualified by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, or device that includes that element. Herein, each embodiment may focus on its differences from other embodiments, and the same or similar parts of the various embodiments may be referred to mutually. For the methods, products, etc. disclosed in the embodiments, if they correspond to the method sections disclosed in the embodiments, reference may be made to the description of the method sections for the relevant parts.
Those skilled in the art may realize that the units and algorithm steps of the examples described in combination with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementations should not be considered to go beyond the scope of the embodiments of the present disclosure. Those skilled in the art can clearly understand that, for convenience and brevity of description, for the working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the embodiments disclosed herein, the disclosed methods and products (including but not limited to apparatuses and devices) may be implemented in other ways. For example, the apparatus embodiments described above are only illustrative; for example, the division of units may be only a division by logical function, and there may be other division manners in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to implement this embodiment. In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
In the accompanying drawings, the width, length, thickness, etc. of structures such as elements or layers may be exaggerated for clarity and description. When a structure such as an element or layer is referred to as being "provided on" (or "mounted on", "laid on", "attached to", "coated on", or similar) or "above" or "on" another element or layer, the structure may be directly "above" or "on" the other element or layer, or there may be intermediate structures such as elements or layers between it and the other element or layer, and part of it may even be embedded in the other element or layer.

Claims (12)

  1. An image processing apparatus, comprising at least one of an image obtaining unit and an application processor (AP);
    further comprising an image fusion unit configured to fuse, based on transparency, at least two images from at least one of the image obtaining unit and the application processor, so that a fused image is generated correspondingly from the at least two images.
  2. The image processing apparatus according to claim 1, wherein the image fusion unit is configured to superimpose the at least two images based on the transparency of the at least two images to generate a fused image correspondingly.
  3. The image processing apparatus according to claim 1, wherein the image fusion unit is configured to superimpose the at least two images based on the transparency and depth information of the at least two images to generate a fused image correspondingly.
  4. The image processing apparatus according to any one of claims 1 to 3, wherein, when the image processing apparatus comprises an image obtaining unit,
    the image obtaining unit is configured to obtain at least two original images;
    the image fusion unit is configured to fuse the at least two original images to obtain a fused image.
  5. The image processing apparatus according to claim 4, wherein the original image comprises at least one of an original follow-up image and an original reference image.
  6. The image processing apparatus according to any one of claims 1 to 3, wherein, when the image processing apparatus comprises an image obtaining unit and an application processor,
    the image obtaining unit is configured to obtain an original image;
    the application processor is configured to obtain an enhanced image;
    the image fusion unit is further configured to fuse the original image and the enhanced image to obtain a fused image.
  7. The image processing apparatus according to claim 6, wherein the enhanced image comprises at least one of an augmented reality image and a virtual image;
    wherein the augmented reality image is an image obtained by enhancement based on a real image.
  8. The image processing apparatus according to claim 7, wherein the virtual image comprises at least one of a virtual follow-up image and a virtual reference image.
  9. The image processing apparatus according to any one of claims 1 to 3, wherein, when the image processing apparatus comprises an application processor,
    the application processor is configured to obtain at least two of an augmented reality image, a virtual follow-up image, and a virtual reference image;
    the image fusion unit is configured to fuse at least two of the augmented reality image, the virtual follow-up image, and the virtual reference image to obtain a fused image.
  10. The image processing apparatus according to any one of claims 1 to 9, wherein the image processing apparatus is capable of communicating with a display screen, and the display screen is configured to obtain the fused image and display the fused image.
  11. The image processing apparatus according to claim 10, wherein the image fusion unit is provided in the display screen.
  12. A terminal, comprising the image processing apparatus according to any one of claims 1 to 11.
PCT/CN2021/108960 2020-08-11 2021-07-28 Image processing apparatus and terminal WO2022033312A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010799324.1A CN114078074A (zh) 2020-08-11 2020-08-11 Image processing apparatus and terminal
CN202010799324.1 2020-08-11

Publications (1)

Publication Number Publication Date
WO2022033312A1 true WO2022033312A1 (zh) 2022-02-17

Family

ID=80247650

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/108960 WO2022033312A1 (zh) 2020-08-11 2021-07-28 Image processing apparatus and terminal

Country Status (3)

Country Link
CN (1) CN114078074A (zh)
TW (1) TWI827960B (zh)
WO (1) WO2022033312A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090175609A1 (en) * 2008-01-08 2009-07-09 Sony Ericsson Mobile Communications Ab Using a captured background image for taking a photograph
CN101777180A * 2009-12-23 2010-07-14 Institute of Automation, Chinese Academy of Sciences Real-time complex background replacement method based on background modeling and energy minimization
CN102157011A * 2010-12-10 2011-08-17 Peking University Method for dynamic texture acquisition and virtual-real fusion using a mobile capture device
WO2016010721A1 (en) * 2014-07-15 2016-01-21 Qualcomm Incorporated Multispectral eye analysis for identity authentication
CN107610077A * 2017-09-11 2018-01-19 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, electronic apparatus, and computer-readable storage medium
CN110099217A * 2019-05-31 2019-08-06 Nubia Technology Co., Ltd. TOF-based image capturing method, mobile terminal, and computer-readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106504284B * 2016-10-24 2019-04-12 成都通甲优博科技有限责任公司 Depth map acquisition method combining stereo matching and structured light


Also Published As

Publication number Publication date
TW202209247A (zh) 2022-03-01
CN114078074A (zh) 2022-02-22
TWI827960B (zh) 2024-01-01

Similar Documents

Publication Publication Date Title
WO2017113718A1 (zh) Virtual reality-based multi-interface unified display system and method
WO2014055487A1 (en) Video conferencing enhanced with 3-d perspective control
CN101742348A (zh) Rendering method and system
US20120044241A1 (en) Three-dimensional on-screen display imaging system and method
CN109639954A (zh) Frame synchronization in a dual-aperture camera system
WO2016072927A1 (en) Situation awareness system and method for situation awareness in a combat vehicle
CN104731338B (zh) Enclosed augmented virtual reality system and method
CN109361912A (zh) Multi-layer camera device for stereoscopic image capture
US8947512B1 (en) User wearable viewing devices
Chapdelaine-Couture et al. The omnipolar camera: A new approach to stereo immersive capture
CN107005689B (zh) Digital video rendering
US11190757B2 (en) Camera projection technique system and method
WO2022033312A1 (zh) Image processing apparatus and terminal
JP2011135202A (ja) Video signal processing apparatus and video signal processing method
Andorko et al. Hardware implementation of a real-time 3D video acquisition system
WO2022033310A1 (zh) Image processing apparatus and virtual reality device
WO2022033311A1 (zh) Image processing apparatus and virtual reality device
Feldmann et al. Immersive multi-user 3D video communication
CN109727315B (zh) One-to-many cluster rendering method, apparatus, device, and storage medium
TW202129363A (zh) Method and apparatus for realizing 3D display, and 3D display system
US20200103669A1 (en) Mirror-based scene cameras
CN112738399A (zh) Image processing method and apparatus, and electronic device
WO2012035927A1 (ja) Remote video monitoring system
WO2022033313A1 (zh) Image processing apparatus and terminal
Kumar et al. Penetra3D: A penetrable, interactive, 360-degree viewable display

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21855365

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21855365

Country of ref document: EP

Kind code of ref document: A1