WO2022179362A1 - Image fusion method and device - Google Patents

Image fusion method and device

Info

Publication number
WO2022179362A1
WO2022179362A1 · PCT/CN2022/073044
Authority
WO
WIPO (PCT)
Prior art keywords
layer
information
target image
block
image
Prior art date
Application number
PCT/CN2022/073044
Other languages
English (en)
French (fr)
Inventor
蒲朝飞
张楠赓
Original Assignee
嘉楠明芯(北京)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 嘉楠明芯(北京)科技有限公司 filed Critical 嘉楠明芯(北京)科技有限公司
Publication of WO2022179362A1 publication Critical patent/WO2022179362A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/60Memory management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the present application relates to the field of image processing, in particular to the field of image fusion.
  • the alpha-blending technology can fuse different images to be fused into different regions of the target image (eg, background image) in a semi-transparent or opaque manner, and then display the fused image.
  • Each pixel of a colour image is represented by R, G and B components; when a pixel is represented by 32 bits and R, G and B each use 8 bits, the remaining 8 bits are often called the alpha channel bits.
  • An alpha value is stored in each pixel to indicate how transparent that pixel is. The value of alpha generally ranges from 0 to 255: when alpha is 0, the picture is fully transparent; when alpha is 255, the picture shows the target image; when alpha is between 0 and 255, the picture is semi-transparent.
  • During image fusion, the usual method is to mix the RGB values of the source pixel with the RGB values of the target pixel (such as a pixel of the background image) in proportion, finally obtaining a mixed RGB value.
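  • As an illustration only (not taken from the patent text), this proportional mixing can be sketched in a few lines of Python; the function name and the example values are assumptions.

```python
# Illustrative sketch of per-pixel alpha blending: mix a source RGB value over a
# target (background) RGB value, with alpha in 0..255.
def alpha_blend(src_rgb, dst_rgb, alpha):
    """alpha=0 -> fully transparent (target only), alpha=255 -> source only."""
    a = alpha / 255.0
    return tuple(round(a * s + (1.0 - a) * d) for s, d in zip(src_rgb, dst_rgb))

# Example: a half-transparent red layer pixel over a grey background pixel.
print(alpha_blend((255, 0, 0), (128, 128, 128), 128))  # -> (192, 64, 64)
```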
  • With the image fusion technology in the prior art, however, it takes a long time to process the video or image, which leads to low display efficiency.
  • the embodiments of the present application provide an image fusion method and device to solve the problems existing in the related art, and the technical solutions are as follows:
  • an embodiment of the present application provides an image fusion method, including:
  • acquiring configuration information corresponding to multiple layer blocks to be fused, where the configuration information includes position information of the layer blocks in the target image;
  • determining the position distribution of each layer block in the target image according to the configuration information;
  • acquiring target image information corresponding to the target image, and acquiring layer information corresponding to each layer block according to the position distribution;
  • fusing the target image information with the layer information corresponding to each layer block to obtain a fused image.
  • the configuration information further includes the display priority order of each layer block, and the method further includes:
  • for a plurality of layer blocks containing an overlapping area determined from the position distribution, the layer information of each layer block is acquired in order from low to high according to the display priority order.
  • acquiring target image information corresponding to the target image, and acquiring layer information corresponding to each layer block according to the location distribution including:
  • the arrangement order of each layer block is determined according to the position distribution
  • the location information includes the identification frame corresponding to the layer block, and the start coordinates, end coordinates and storage address of the identification frame in the target image, and the layer information of the current row is obtained according to the arrangement order, including:
  • the layer information of the current row or column is generated in the identification box corresponding to the layer block.
  • it also includes:
  • a bilinear interpolation operation is performed on the layer information corresponding to each layer block, and the layer information obtained after the operation is distributed to the second storage module.
  • the layer information obtained after the operation is distributed to the second storage module, including:
  • the layer information corresponding to the layer block with the highest priority is distributed to the second storage module.
  • an image fusion apparatus including:
  • a configuration information acquisition module configured to acquire configuration information corresponding to multiple layer blocks to be fused, where the configuration information includes location information of the layer blocks in the target map;
  • the layer block distribution determination module is used to determine the position distribution of each layer block in the target map according to the configuration information
  • the layer information obtaining module is used to obtain the target image information corresponding to the target image, and obtain the layer information corresponding to each layer block according to the location distribution;
  • the image fusion module is used to fuse the target image information and the layer information corresponding to each layer block to obtain a fused image.
  • the configuration information further includes the display priority order of each layer block, and further includes:
  • the overlapping area determination module is used to determine the overlapping area between each layer block according to the location distribution;
  • the layer information of each layer block is obtained in order from low to high according to the display priority order.
  • the layer information acquisition module includes:
  • the target image information reading submodule is used to read the target image information of the current row or column in the target image by using the method of reading by row or column;
  • the arrangement order determination sub-module is used to determine the arrangement order of each layer block according to the position distribution when there are layer blocks distributed in the current row or column;
  • the layer information reading submodule is used to read the layer information of the current row or column according to the arrangement order.
  • the location information includes the identification frame corresponding to the layer block, as well as the start coordinates, end coordinates and storage address of the identification frame in the target image, and the layer information reading submodule includes:
  • a read request generation unit used to generate a read request when the read position falls within the range of the start coordinate and the end coordinate;
  • the layer information reading unit is used to read the layer information of the current row or column from the memory according to the read request and storage address in the order of arrangement;
  • the layer information generating unit is used to generate the layer information of the current row or column in the identification box corresponding to the layer block.
  • it also includes:
  • a target image information distribution module for distributing the target image information to the first storage module
  • the layer information distribution module is used to perform bilinear interpolation operation on the layer information corresponding to each layer block, and distribute the layer information obtained after the operation to the second storage module.
  • the layer information distribution module includes:
  • the highest priority information distribution sub-module is configured to distribute the layer information corresponding to the layer block with the highest priority to the second storage module for a plurality of layer blocks including overlapping areas.
  • an electronic device comprising:
  • At least one processor and a memory communicatively coupled to the at least one processor;
  • the memory stores instructions executable by at least one processor, and the instructions are executed by at least one processor, so that the at least one processor can perform any one of the above methods.
  • a non-transitory computer-readable storage medium storing computer instructions, the computer instructions being used to cause a computer to perform any of the above methods.
  • An embodiment in the above application has the following advantage or beneficial effect: with the image fusion method provided in this embodiment, while the image information of the target image is read, the layer information of each layer block can also be read according to the position distribution of each layer block in the target image; the read image information of the target image is fused with the layer information of the multiple layer blocks to obtain a fused image, which is then displayed.
  • Reading the target image and the multiple layer blocks in the manner provided in this embodiment, and then fusing them, solves the technical problem in the prior art that sequentially reading and fusing the background image and the multiple layer blocks results in a low fusion speed and low efficiency; it effectively improves the fusion speed, saves image processing time, and improves display efficiency.
  • FIG. 1 is a schematic diagram of an image fusion method according to an embodiment of the present application.
  • FIG. 2 is a structural diagram of an image display device according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an image fusion method according to another embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an image information acquisition module according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a reading process of reading target image information and layer information of layer blocks according to an embodiment of the present application
  • FIG. 6 is a schematic diagram of a display priority order of a plurality of layer blocks including overlapping regions according to an embodiment of the present application;
  • FIG. 7 is a scene diagram in which layer block m and layer block n of the current row do not overlap, according to an embodiment of the present application;
  • FIG. 8 is a scene diagram in which layer block m and layer block n of the current row being read overlap, according to an embodiment of the present application;
  • FIG. 9 is a schematic diagram of an image fusion apparatus according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of an image fusion apparatus according to another embodiment of the present application.
  • FIG. 11 is a block diagram of an electronic device for implementing an image fusion method according to an embodiment of the present application.
  • In the existing image fusion technology, the background image and multiple layer blocks are read and fused sequentially. Specifically, the background image and the first layer block are read and fused, the image after the first fusion is used as the background image, the second layer block is then read and fused with the first fused image, and so on, until the fusion of the last layer block is completed, after which the final fused image is displayed. This results in a long image fusion time and low display efficiency.
  • the present embodiment provides an image fusion method as shown in FIG. 1 , including the following steps:
  • Step S110 obtaining configuration information corresponding to multiple layer blocks to be fused, where the configuration information includes location information of the layer blocks in the target map;
  • Step S120 Determine the location distribution of each layer block in the target map according to the configuration information
  • Step S130 obtaining target image information corresponding to the target image, and obtaining layer information corresponding to each layer block according to the location distribution;
  • Step S140 Perform fusion processing on the target image information and the layer information corresponding to each layer block to obtain a fused image.
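  • Purely as an illustration of steps S110–S140, the following minimal Python sketch fuses a background with a list of layer blocks; the Block record, its field names and the numpy-based blending are assumptions made for this example, not the patent's implementation.

```python
# Hypothetical sketch of the four-step flow: parse block configuration, place each
# block by its position in the target image, and blend in ascending priority.
from dataclasses import dataclass
import numpy as np

@dataclass
class Block:
    y0: int
    x0: int
    y1: int
    x1: int
    pixels: np.ndarray   # layer information, shape (H, W, 3)
    alpha: float         # configured transparency, 0..1
    priority: int        # display priority (higher is drawn on top)

def fuse(target: np.ndarray, blocks: list[Block]) -> np.ndarray:
    out = target.astype(np.float32).copy()                 # target image information
    for blk in sorted(blocks, key=lambda b: b.priority):   # low -> high priority
        region = out[blk.y0:blk.y1, blk.x0:blk.x1]          # position distribution
        out[blk.y0:blk.y1, blk.x0:blk.x1] = (                # fusion processing
            blk.alpha * blk.pixels + (1 - blk.alpha) * region)
    return out.astype(np.uint8)

bg = np.zeros((64, 64, 3), np.uint8)
blk = Block(8, 8, 24, 24, np.full((16, 16, 3), 255, np.uint8), 0.5, 0)
print(fuse(bg, [blk])[10, 10])   # -> [127 127 127] (roughly half blended)
```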
  • an image display device is provided, which fuses a plurality of input layer blocks with a target image and generates the interface control logic for displaying the fused image.
  • the 32 small rectangular images (layer blocks) configured by the software can be enlarged and pasted to different areas of the background video image (target image) in a translucent or opaque manner, and all the small images are independent.
  • the target image can be a background image.
  • A layer block can be a rectangular area, and the rectangular area can be delimited by a rectangular border.
  • the overall area of the rectangular box can be smaller than the sum of the areas of more than 4 original images.
  • Each layer block can support integer magnification of up to 8 times.
  • The image display device can support fusion of up to 32 layer blocks; the priority between layer blocks is configurable; the transparency of a layer block is configurable; the identification frame used to identify the content in a layer block can be a rectangular frame, and the start position, width and height of the rectangular frame are configurable. The configuration registers and the start address of the image need to be set for each frame of image; MIPI (Mobile Industry Processor Interface) and DPI (Display Pixel Interface, i.e. the RGB interface) output are supported, as is SiI9022 video parallel-port output.
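  • As a rough illustration of the configurable items listed above (rectangle start/end, storage address, priority, transparency, integer magnification), the hypothetical sketch below checks such a configuration; the field names and the way limits are expressed are assumptions based on this description, not a register map from the patent.

```python
# Hypothetical per-block configuration check: at most 32 blocks, integer
# magnification up to 8x, and a rectangle whose end lies after its start.
MAX_BLOCKS, MAX_SCALE = 32, 8

def validate_config(blocks: list[dict]) -> None:
    if len(blocks) > MAX_BLOCKS:
        raise ValueError("the device supports at most 32 layer blocks")
    for b in blocks:
        if not (isinstance(b["scale"], int) and 1 <= b["scale"] <= MAX_SCALE):
            raise ValueError("only integer magnification up to 8x is supported")
        if b["end"] <= b["start"]:
            raise ValueError("rectangle end must lie after its start")

cfg = [{"start": (100, 200), "end": (180, 320), "addr": 0x8000_0000,
        "priority": 3, "alpha": 128, "scale": 2}]
validate_config(cfg)   # raises nothing for a well-formed configuration
```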
  • the configuration information acquisition module (Block info) in the image display device obtains the configuration information of multiple layer blocks (rectangular boxes) through the APB (Advanced Peripheral Bus) peripheral bus.
  • the configuration information includes: the location information of the layer block in the target image. For example, the size of the rectangle corresponding to each layer block, the initial position, end position and storage address of the rectangle in the target image.
  • the configuration information acquisition module (Block info) acquires the configuration information from the memory and sends it to the image information acquisition module (Fetch) and the control module (ctrl).
  • the image information acquisition module is used for determining the position distribution of each layer block in the target map according to the configuration information, acquiring the image information corresponding to the target map, and acquiring the layer information corresponding to each layer block according to the position distribution.
  • the acquired image information of the target image is sent to the first buffer module (src_buf) for storage through the data distribution module (data_distribute), and the acquired image information of multiple layer blocks is sent to the second buffer module (block_buf) for storage.
  • the control module (ctrl) is used to scan the image information of the target image and the layer information of a plurality of layer blocks according to the display order, starting from the starting position, that is, from left to right and from top to bottom.
  • the image fusion module uses a fusion algorithm to fuse the image information of the scanned target image and the layer information of multiple layer blocks to obtain a fused image.
  • the fused image is sent to the display buffer module (display_buf), and the timing control module (dsi_packer) displays the fused image through the protocol and interface.
  • In this embodiment, while the image information of the target image is read, the layer information of the multiple layer blocks can also be read according to the position distribution of each layer block in the target image;
  • the read image information of the target image is fused with the layer information of the multiple layer blocks to obtain a fused image, and the fused image is then displayed, which effectively improves the fusion speed, saves image processing time, and improves display efficiency.
  • the configuration information further includes the display priority order of each layer block, and further includes:
  • Step S150 Determine the overlapping area between each layer block according to the location distribution
  • Step S160 For a plurality of layer blocks including overlapping regions, according to the display priority order, acquire layer information of each layer block in order from low to high.
  • the position distribution relationship between layer block i (block_i) and layer block j (block_j) is determined according to the configuration information, and the arrangement order of block_i and block_j and the overlapping area between block_i and block_j can be determined according to the position distribution relationship. If block_i and block_j need to be obtained in a row, and there is an overlapping area between block_i and block_j, obtain the layer information of these two layer blocks according to the arrangement order and display priority order of block_i and block_j.
  • To simplify control complexity, the layer information of the lowest-priority layer block is read first, then the layer information of the next-lowest-priority layer block, and the layer information read later directly overwrites the layer information read earlier, until the layer information of the highest-priority layer block has been read. This greatly simplifies control and makes it convenient to manage the image in memory for display on the screen.
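  • A minimal sketch of this overwrite rule, assuming a hypothetical fetch_row helper and a plain Python list standing in for the second storage module:

```python
# Illustrative sketch: row segments of overlapping blocks are fetched in ascending
# display priority and written into one shared row buffer, so the highest-priority
# pixels survive in the overlap.
def compose_row(row_width, blocks, fetch_row):
    row_buf = [None] * row_width                               # shared row buffer
    for blk in sorted(blocks, key=lambda b: b["priority"]):    # low -> high priority
        segment = fetch_row(blk)                               # this block's row data
        row_buf[blk["x0"]:blk["x0"] + len(segment)] = segment  # later read overwrites
    return row_buf

blocks = [{"x0": 2, "priority": 1, "value": "B"},
          {"x0": 4, "priority": 2, "value": "A"}]              # A overlaps B and wins
row = compose_row(10, blocks, lambda b: [b["value"]] * 4)
print(row)   # [None, None, 'B', 'B', 'A', 'A', 'A', 'A', None, None]
```

  • Writing every block into the same row buffer in ascending priority means no per-block history has to be kept, which matches the later remark about keeping the stored data to a minimum.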
  • step S130 includes:
  • Step S131 Read the target image information of the current row or column in the target image by using the method of reading by row or column;
  • Step S132 In the case that there are layer blocks distributed in the current row or column, determine the arrangement order of each layer block according to the position distribution;
  • Step S133 Acquire the layer information of the current row or column according to the arrangement order.
  • the image information acquisition module includes: a judgement submodule (block_judge), a coordinate management submodule (coor_manager), an address management submodule (addr_manager), a request management submodule (req_manager), and a layer block arbitration submodule (arb).
  • Coordinate management sub-module (coor_manager): maintains the coordinate calculation of the background image (target image), including the abscissa and ordinate of the background image, and also maintains the coordinate calculation of each layer block relative to the background image.
  • Judgement sub-module (block_judge): each layer block has its own independent judgement sub-module, which judges, according to the ordinate information of the background image, whether to request the layer information of the layer block from the memory, and calculates which row of the layer block's layer information is requested.
  • Address management sub-module (addr_manager): calculates the storage address information in the memory according to the coordinate information of the image.
  • Request management sub-module (req_manager): responsible for generating memory access requests for the background image and memory access requests for the different layer blocks.
  • Layer block arbitration sub-module (arb): when there are multiple memory access requests for layer blocks, the corresponding layer block is selected for memory access according to the position distribution relationship between the layer blocks.
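  • A hypothetical sketch of the judgement sub-module's decision, assuming the block fields shown (start/end ordinates and an integer scale); it is illustrative only, not the hardware logic.

```python
# Given the background image's current ordinate y, decide whether this block's
# layer information must be requested and, if so, which source row of the (scaled)
# layer block that request corresponds to.
def block_judge(y, block):
    y0, y1, scale = block["y_start"], block["y_end"], block["scale"]
    if not (y0 <= y < y1):
        return None                      # no request for this block on this row
    return (y - y0) // scale             # source row of the layer block to fetch

blk = {"y_start": 100, "y_end": 132, "scale": 2}   # 16 source rows shown 2x tall
print(block_judge(99, blk), block_judge(100, blk), block_judge(131, blk))  # None 0 15
```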
  • the target image information and layer information are usually obtained by row-by-row reading. Of course, column-by-column reading is also possible.
  • the judgment sub-module (block_judge) needs to judge whether there is a layer block distribution in the current row. If there is no distributed layer block in the current row, according to the requirements of the image fusion algorithm, one row of target image information (for example, background image data) can be read in order from left to right. If there are layer blocks distributed in the current row, in addition to reading the target image information of the current row, it is also necessary to read the layer information of multiple layer blocks distributed in the current row. The reading process is shown in Figure 5.
  • In the layer block arbitration sub-module (arb): if layer blocks are distributed in the current row, the arrangement order of the layer blocks is determined according to the position distribution. The overlap between the multiple layer blocks crossed by the current row also needs to be judged, and the display priority order of the layer blocks containing an overlapping area is determined. Then, the current row of the layer blocks is read according to the arrangement order and the display priority order. As shown in FIG. 6, for example, if the start coordinate of block_i in the current row is smaller than the start coordinate of block_j in the current row (start_i < start_j), the layer information of block_i in the current row can be acquired first and written into the second storage module, and then the layer information of block_j in the current row is fetched and written into the second storage module.
  • For a layer block row, the read layer information can be stored after one row of layer information has been read, which ensures that the amount of stored data is minimal. If there is an overlapping area between block_i and block_j, and the display priority of block_i is higher than that of block_j, the layer information of block_j in the current row is read first, and then the layer information of block_i in the current row is read; in the overlapping area, the layer information of block_i overwrites the layer information of block_j.
  • the location information includes the identification frame corresponding to the layer block, and the start coordinates, end coordinates and storage address of the identification frame in the target image.
  • Step S133 includes:
  • the layer information of the current row or column is generated in the identification box corresponding to the layer block.
  • In the request management sub-module (req_manager): in the process of reading the current row, as shown in FIG. 6, if the ordinate (y coordinate) being read falls within the range of the start and end coordinates of a layer block, a read request is generated.
  • In the coordinate management sub-module (coor_manager): in the process of reading the current row from left to right, the storage address corresponding to the pixels of the background image and the read request are sent to the memory, and the background image information at this address is read from the memory.
  • Ordinate management: if there is no layer block that can be fused in the current row, then for the y coordinate of the background image it is only necessary to add 1 to the y coordinate at the end of each row, so as to read the next row of the background image.
  • Abscissa management: as shown in FIG. 7, in the current row, the current row of the background image that is read has a length of 1919 and is stored in the first cache module; during the reading process, layer block m (block_m) and layer block n (block_n) need to be crossed. When layer block m (block_m) and layer block n (block_n) do not overlap, the coordinate jumps directly from 0 to the start coordinate _m to read the layer information, reading stops at the end coordinate _m, the coordinate then jumps directly to the start coordinate _n of the next layer block n, reading of the layer information starts again, and reading stops at the end coordinate _n. As shown in FIG. 8, when block_m and block_n overlap, the priorities of block_m and block_n are compared when reading reaches the overlap, and the layer information of the lower-priority block at the overlap is read before that of the higher-priority block.
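  • The horizontal scan described above can be sketched as follows; the coordinates are example values rather than values from the patent, and overlap handling is omitted here for brevity.

```python
# Illustrative read plan for one row: the background row (x = 0..row_width-1) is
# read in full, while layer reads jump from each block's start coordinate to its
# end coordinate and then on to the next block.
def row_read_plan(row_width, blocks_in_row):
    """Return the background span plus layer-read spans, assuming no overlap."""
    spans = [("background", 0, row_width - 1)]
    for b in sorted(blocks_in_row, key=lambda b: b["x_start"]):
        spans.append((b["name"], b["x_start"], b["x_end"]))
    return spans

row_blocks = [{"name": "block_n", "x_start": 1200, "x_end": 1500},
              {"name": "block_m", "x_start": 300,  "x_end": 700}]
print(row_read_plan(1920, row_blocks))
# [('background', 0, 1919), ('block_m', 300, 700), ('block_n', 1200, 1500)]
```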
  • it also includes:
  • a bilinear interpolation operation is performed on the layer information corresponding to each layer block, and the layer information obtained after the operation is distributed to the second storage module.
  • the size of the acquired layer block is scaled, and the image scaling is usually performed by using a bilinear interpolation method. Since the point to be interpolated only depends on the surrounding four pixels, the interpolation operation only depends on two lines of image data at most.
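  • A minimal bilinear-interpolation sketch for one scaled output row, illustrating why at most two source rows are needed at a time; it is an assumption-laden example, not the device's datapath.

```python
# Each interpolated pixel depends only on its four neighbours, so producing one
# output row touches at most two consecutive source rows (y0 and y1 below).
import numpy as np

def bilinear_row(src: np.ndarray, out_y: int, out_w: int, out_h: int) -> np.ndarray:
    src_h, src_w = src.shape[:2]
    fy = out_y * (src_h - 1) / max(out_h - 1, 1)
    y0 = int(fy); y1 = min(y0 + 1, src_h - 1); wy = fy - y0
    row = np.empty((out_w,) + src.shape[2:], np.float32)
    for out_x in range(out_w):
        fx = out_x * (src_w - 1) / max(out_w - 1, 1)
        x0 = int(fx); x1 = min(x0 + 1, src_w - 1); wx = fx - x0
        top = (1 - wx) * src[y0, x0] + wx * src[y0, x1]     # uses source row y0
        bot = (1 - wx) * src[y1, x0] + wx * src[y1, x1]     # uses source row y1
        row[out_x] = (1 - wy) * top + wy * bot
    return row

src = np.arange(16, dtype=np.float32).reshape(4, 4)
print(bilinear_row(src, 3, 8, 8))   # interpolated row lying between source rows 1 and 2
```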
  • step S180 includes:
  • the layer information corresponding to the layer block with the highest priority is distributed to the second storage module.
  • Each layer block would otherwise use a different space in the cache. When layer blocks overlap, only the layer information of the higher-priority layer block is retained, and the historical layer information of the overlapped layer block is not saved, in order to avoid opening a separate cache for each layer block, which would occupy too much cache space.
  • Only the layer information of the layer blocks that currently need to be fused is stored, so the amount of storage is small.
  • an image fusion apparatus including:
  • the configuration information obtaining module 110 is configured to obtain configuration information corresponding to a plurality of layer blocks to be fused, where the configuration information includes position information of the layer blocks in the target map;
  • the layer block distribution determining module 120 is configured to determine the position distribution of each layer block in the target map according to the configuration information
  • the layer information acquisition module 130 is configured to acquire the target image information corresponding to the target image, and acquire the layer information corresponding to each layer block according to the location distribution;
  • the image fusion module 140 is configured to perform fusion processing on the target image information and the layer information corresponding to each layer block to obtain a fused image.
  • the configuration information further includes the display priority order of each layer block, and further includes:
  • the overlapping area determination module 150 is used for determining the overlapping area between each layer block according to the position distribution;
  • the priority information acquisition module 160 is configured to, for a plurality of layer blocks including overlapping regions, acquire layer information of each layer block in order from low to high according to the display priority order.
  • the layer information acquisition module 130 includes:
  • the target image information reading submodule is used to read the target image information of the current row or column in the target image by using the method of reading by row or column;
  • the arrangement order determination sub-module is used to determine the arrangement order of each layer block according to the position distribution when there are layer blocks distributed in the current row or column;
  • the layer information reading submodule is used to read the layer information of the current row or column according to the arrangement order.
  • the location information includes the identification frame corresponding to the layer block, as well as the start coordinates, end coordinates and storage address of the identification frame in the target image, and the layer information reading submodule includes:
  • a read request generation unit used to generate a read request when the read position falls within the range of the start coordinate and the end coordinate;
  • the layer information reading unit is used to read the layer information of the current row or column from the memory according to the read request and storage address in the order of arrangement;
  • the layer information generating unit is used to generate the layer information of the current row or column in the identification box corresponding to the layer block.
  • it also includes:
  • a target image information distribution module for distributing the target image information to the first storage module
  • the layer information distribution module is used to perform bilinear interpolation operation on the layer information corresponding to each layer block, and distribute the layer information obtained after the operation to the second storage module.
  • the layer information distribution module includes:
  • the highest priority information distribution sub-module is configured to distribute the layer information corresponding to the layer block with the highest priority to the second storage module for a plurality of layer blocks including overlapping areas.
  • the present application further provides an electronic device and a readable storage medium.
  • FIG. 11 it is a block diagram of an electronic device of an image fusion method according to an embodiment of the present application.
  • Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular phones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions are by way of example only, and are not intended to limit implementations of the application described and/or claimed herein.
  • the electronic device includes: one or more processors 1101, a memory 1102, and interfaces for connecting various components, including a high-speed interface and a low-speed interface.
  • the various components are interconnected using different buses and may be mounted on a common motherboard or otherwise as desired.
  • The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory for displaying graphical information of a Graphical User Interface (GUI) on an external input/output device (such as a display device coupled to the interface).
  • multiple processors and/or multiple buses may be used together with multiple memories, if desired.
  • multiple electronic devices may be connected, each providing some of the necessary operations (eg, as a server array, a group of blade servers, or a multiprocessor system).
  • a processor 1101 is used as an example.
  • the memory 1102 is the non-transitory computer-readable storage medium provided by the present application.
  • the memory stores instructions executable by at least one processor, so that the at least one processor executes an image fusion method provided by the present application.
  • the non-transitory computer-readable storage medium of the present application stores computer instructions, and the computer instructions are used to cause a computer to execute an image fusion method provided by the present application.
  • the memory 1102 can be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as program instructions/modules corresponding to an image fusion method in the embodiments of the present application.
  • the processor 1101 executes various functional applications and data processing of the server by running the non-transitory software programs, instructions and modules stored in the memory 1102, ie, implements an image fusion method in the above method embodiments.
  • the memory 1102 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by using an electronic device according to an image fusion method. data etc. Additionally, memory 1102 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 1102 may optionally include memory located remotely from the processor 1101, and these remote memories may be connected to the aforementioned electronic device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the above electronic device may further include: an input device 1103 and an output device 1104 .
  • the processor 1101 , the memory 1102 , the input device 1103 and the output device 1104 may be connected by a bus or in other ways, and the connection by a bus is taken as an example in FIG. 11 .
  • the input device 1103 can receive input numeric or character information, and generate key signal input related to user settings and function control of the above electronic device, for example a touch screen, keypad, mouse, track pad, touch pad, pointing stick, one or more mouse buttons, trackball, joystick and other input devices.
  • Output devices 1104 may include display devices, auxiliary lighting devices (eg, LEDs), haptic feedback devices (eg, vibration motors), and the like.
  • the display device may include, but is not limited to, a liquid crystal display (Liquid Crystal Display, LCD), a light emitting diode (Light Emitting Diode, LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
  • Various implementations of the systems and techniques described herein can be implemented in digital electronic circuitry, integrated circuit systems, application specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof .
  • These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
  • The terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, device, and/or apparatus (for example, magnetic disks, optical disks, memories, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including machine-readable media that receive machine instructions as machine-readable signals.
  • The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
  • the systems and techniques described herein may be implemented on a computer having: a display device (e.g., a CRT (Cathode Ray Tube) or an LCD (Liquid Crystal Display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which the user can provide input to the computer.
  • Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual, auditory, or tactile feedback), and input from the user can be received in any form (including acoustic, voice, or tactile input).
  • the systems and techniques described herein may be implemented in a computing system that includes back-end components (e.g., as a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user computer having a graphical user interface or web browser through which a user may interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
  • the components of the system may be interconnected by any form or medium of digital data communication (eg, a communication network). Examples of communication networks include: Local Area Network (LAN), Wide Area Network (WAN), and the Internet.
  • a computer system can include clients and servers.
  • Clients and servers are generally remote from each other and usually interact through a communication network.
  • the relationship of client and server arises by computer programs running on the respective computers and having a client-server relationship to each other.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present application discloses an image fusion method and device. A specific implementation is as follows. The method includes: acquiring configuration information corresponding to a plurality of layer blocks to be fused, the configuration information including position information of the layer blocks in a target image; determining the position distribution of each layer block in the target image according to the configuration information; acquiring target image information corresponding to the target image, and acquiring layer information corresponding to each layer block according to the position distribution; and fusing the target image information with the layer information corresponding to each layer block to obtain a fused image. This effectively improves the fusion speed, saves image processing time, and improves display efficiency.

Description

Image fusion method and device
This application claims priority to Chinese patent application No. 202110209219.2, filed on February 24, 2021 and entitled "Image fusion method and device", the disclosure of which is incorporated herein by reference.
Technical Field
The present application relates to the field of image processing, and in particular to the field of image fusion.
Background
When an artificial intelligence system displays a video or image on a display screen, it needs to mark the targets detected in the video or image, or to present other information related to a detected target. To this end, image fusion (alpha-blending) technology is usually adopted. Alpha-blending can fuse different images to be fused into different regions of a target image (e.g. a background image) in a semi-transparent or opaque manner, and the fused image is then displayed. Each pixel of a colour image is represented by three components R, G and B; if each component uses 8 bits, a pixel is represented by 3*8=24 bits in total. When a pixel is represented by 32 bits and R, G and B each use 8 bits, the remaining 8 bits are usually called the alpha channel bits. An alpha value stored in each pixel indicates how transparent that pixel is. Alpha generally ranges from 0 to 255: when alpha is 0, the picture is fully transparent; when alpha is 255, the picture shows the target image; when alpha is between 0 and 255, the picture is semi-transparent. During image fusion, the usual method is to mix the RGB values of the source pixel with the RGB values of the target pixel (e.g. a pixel of the background image) in proportion, finally obtaining a mixed RGB value. However, with the image fusion technology in the prior art, processing the video or image takes a long time, which leads to low display efficiency.
Summary
Embodiments of the present application provide an image fusion method and device to solve the problems existing in the related art. The technical solution is as follows.
In a first aspect, an embodiment of the present application provides an image fusion method, including:
acquiring configuration information corresponding to a plurality of layer blocks to be fused, the configuration information including position information of the layer blocks in a target image;
determining the position distribution of each layer block in the target image according to the configuration information;
acquiring target image information corresponding to the target image, and acquiring layer information corresponding to each layer block according to the position distribution;
fusing the target image information with the layer information corresponding to each layer block to obtain a fused image.
In one implementation, the configuration information further includes a display priority order of each layer block, and the method further includes:
determining an overlapping area between the layer blocks according to the position distribution;
for a plurality of layer blocks containing the overlapping area, acquiring the layer information of each layer block in order from low to high according to the display priority order.
In one implementation, acquiring the target image information corresponding to the target image and acquiring the layer information corresponding to each layer block according to the position distribution includes:
reading the target image information of the current row or column in the target image in a row-by-row or column-by-column manner;
in the case that layer blocks are distributed in the current row or column, determining the arrangement order of the layer blocks according to the position distribution;
acquiring the layer information of the current row according to the arrangement order.
In one implementation, the position information includes an identification frame corresponding to the layer block, and the start coordinates, end coordinates and storage address of the identification frame in the target image, and acquiring the layer information of the current row according to the arrangement order includes:
generating a read request when the read position falls within the range of the start coordinates and the end coordinates;
reading the layer information of the current row or column from the memory according to the read request and the storage address and in the arrangement order, or
generating the layer information of the current row or column in the identification frame corresponding to the layer block.
In one implementation, the method further includes:
distributing the target image information to a first storage module;
performing a bilinear interpolation operation on the layer information corresponding to each layer block, and distributing the layer information obtained after the operation to a second storage module.
In one implementation, distributing the layer information obtained after the operation to the second storage module includes:
for a plurality of layer blocks containing an overlapping area, distributing the layer information corresponding to the layer block with the highest priority to the second storage module.
In a second aspect, an embodiment of the present application provides an image fusion device, including:
a configuration information acquisition module, configured to acquire configuration information corresponding to a plurality of layer blocks to be fused, the configuration information including position information of the layer blocks in a target image;
a layer block distribution determination module, configured to determine the position distribution of each layer block in the target image according to the configuration information;
a layer information acquisition module, configured to acquire target image information corresponding to the target image and acquire layer information corresponding to each layer block according to the position distribution;
an image fusion module, configured to fuse the target image information with the layer information corresponding to each layer block to obtain a fused image.
In one implementation, the configuration information further includes a display priority order of each layer block, and the device further includes:
an overlapping area determination module, configured to determine an overlapping area between the layer blocks according to the position distribution;
wherein, for a plurality of layer blocks containing the overlapping area, the layer information of each layer block is acquired in order from low to high according to the display priority order.
In one implementation, the layer information acquisition module includes:
a target image information reading sub-module, configured to read the target image information of the current row or column in the target image in a row-by-row or column-by-column manner;
an arrangement order determination sub-module, configured to determine the arrangement order of the layer blocks according to the position distribution in the case that layer blocks are distributed in the current row or column;
a layer information reading sub-module, configured to read the layer information of the current row or column according to the arrangement order.
In one implementation, the position information includes an identification frame corresponding to the layer block, and the start coordinates, end coordinates and storage address of the identification frame in the target image, and the layer information reading sub-module includes:
a read request generation unit, configured to generate a read request when the read position falls within the range of the start coordinates and the end coordinates;
a layer information reading unit, configured to read the layer information of the current row or column from the memory according to the read request and the storage address and in the arrangement order;
a layer information generation unit, configured to generate the layer information of the current row or column in the identification frame corresponding to the layer block.
In one implementation, the device further includes:
a target image information distribution module, configured to distribute the target image information to a first storage module;
a layer information distribution module, configured to perform a bilinear interpolation operation on the layer information corresponding to each layer block and distribute the layer information obtained after the operation to a second storage module.
In one implementation, the layer information distribution module includes:
a highest-priority information distribution sub-module, configured to, for a plurality of layer blocks containing an overlapping area, distribute the layer information corresponding to the layer block with the highest priority to the second storage module.
In a third aspect, an electronic device is provided, including:
at least one processor; and a memory communicatively connected to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform any one of the above methods.
In a fourth aspect, a non-transitory computer-readable storage medium storing computer instructions is provided, the computer instructions being used to cause a computer to perform any one of the above methods.
An embodiment of the above application has the following advantage or beneficial effect: with the image fusion method provided in this embodiment, while the image information of the target image is read, the layer information of the plurality of layer blocks can also be read according to the position distribution of each layer block in the target image; the read image information of the target image is fused with the layer information of the plurality of layer blocks to obtain a fused image, and the fused image is then displayed. Reading the target image and the plurality of layer blocks in the manner provided in this embodiment, and then fusing them, solves the technical problem in the prior art that sequentially reading and fusing the background image and the plurality of layer blocks results in low fusion speed and low efficiency; it effectively improves the fusion speed, saves image processing time, and improves display efficiency.
Other effects of the above optional implementations will be described below in combination with specific embodiments.
Brief Description of the Drawings
The drawings are used for a better understanding of the solution and do not constitute a limitation of the present application. In the drawings:
FIG. 1 is a schematic diagram of an image fusion method according to an embodiment of the present application;
FIG. 2 is a structural diagram of an image display device according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an image fusion method according to another embodiment of the present application;
FIG. 4 is a schematic structural diagram of an image information acquisition module according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the process of reading target image information and layer information of layer blocks according to an embodiment of the present application;
FIG. 6 is a schematic diagram of the display priority order of a plurality of layer blocks containing an overlapping area according to an embodiment of the present application;
FIG. 7 is a scene diagram in which layer block m and layer block n of the current row do not overlap, according to an embodiment of the present application;
FIG. 8 is a scene diagram in which layer block m and layer block n of the current row being read overlap, according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an image fusion device according to an embodiment of the present application;
FIG. 10 is a schematic diagram of an image fusion device according to another embodiment of the present application;
FIG. 11 is a block diagram of an electronic device for implementing an image fusion method according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below with reference to the drawings, including various details of the embodiments of the present application to facilitate understanding, which should be regarded as merely exemplary. Therefore, those of ordinary skill in the art should recognize that various changes and modifications can be made to the embodiments described here without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and structures are omitted below for clarity and conciseness.
In the existing image fusion technology, the background image and a plurality of layer blocks are read and fused sequentially. Specifically, the background image and the first layer block are read and fused, the image after the first fusion is used as the background image, the second layer block is then read and fused with the image after the first fusion, and so on, until the fusion of the last layer block is completed, after which the finally fused image is displayed. This results in a long image fusion time and low display efficiency. To solve the above technical problem, this embodiment provides an image fusion method as shown in FIG. 1, including the following steps:
Step S110: acquire configuration information corresponding to a plurality of layer blocks to be fused, the configuration information including position information of the layer blocks in a target image;
Step S120: determine the position distribution of each layer block in the target image according to the configuration information;
Step S130: acquire target image information corresponding to the target image, and acquire layer information corresponding to each layer block according to the position distribution;
Step S140: fuse the target image information with the layer information corresponding to each layer block to obtain a fused image.
In one example, an image display device is provided, which fuses a plurality of input layer blocks with a target image and generates the interface control logic for displaying the fused image. For example, 32 small rectangular images (layer blocks) configured by software can be enlarged and pasted, in a semi-transparent or opaque manner, onto different regions of a background video image (target image); all the small images are independent of each other, and the target image may be a background image. A layer block may be a rectangular area, and the rectangular area may be delimited by a rectangular border. The total area of the rectangular frames may be smaller than the sum of the areas of more than 4 original images. Each layer block can support integer magnification of up to 8 times. The image display device can support fusion of up to 32 layer blocks; the priority between layer blocks is configurable; the transparency of a layer block is configurable; the identification frame used to identify the content in a layer block may be a rectangular frame, and the start position, width and height of the rectangular frame are configurable. The configuration registers and the start address of the image need to be configured for each frame of image; MIPI (Mobile Industry Processor Interface) and DPI (Display Pixel Interface, i.e. the RGB interface) output are supported, and SiI9022 video parallel-port output is supported.
The hardware block diagram of the image display device is shown in FIG. 2. The configuration information acquisition module (Block info) in the image display device acquires the configuration information of the plurality of layer blocks (rectangular frames) through the APB (Advanced Peripheral Bus) peripheral bus. The configuration information includes the position information of the layer blocks in the target image, for example the size of the rectangular frame corresponding to each layer block, and the initial position, end position and storage address of the rectangular frame in the target image. The configuration information acquisition module (Block info) acquires the configuration information from the memory and sends it to the image information acquisition module (Fetch) and the control module (ctrl). The image information acquisition module is configured to determine the position distribution of each layer block in the target image according to the configuration information, acquire the image information corresponding to the target image, and acquire the layer information corresponding to each layer block according to the position distribution. The acquired image information of the target image is sent, through the data distribution module (data_distribute), to the first buffer module (src_buf) for storage, and the acquired image information of the plurality of layer blocks is sent to the second buffer module (block_buf) for storage. The control module (ctrl) is configured to scan, starting from the start position and in display order, i.e. from left to right and from top to bottom, the image information of the target image and the layer information of the plurality of layer blocks. The image fusion module (blending alu) uses a fusion algorithm to fuse the scanned image information of the target image with the layer information of the plurality of layer blocks to obtain a fused image. The fused image is sent to the display buffer module (display_buf), and the timing control module (dsi_packer) displays the fused image through the protocol and interface.
In this implementation, since the layer information of the plurality of layer blocks can be read according to the position distribution of each layer block in the target image while the image information of the target image is being read, and the read image information of the target image is fused with the layer information of the plurality of layer blocks to obtain a fused image which is then displayed, the fusion speed is effectively improved, image processing time is saved, and display efficiency is improved.
In one embodiment, as shown in FIG. 3, the configuration information further includes a display priority order of each layer block, and the method further includes:
Step S150: determine the overlapping area between the layer blocks according to the position distribution;
Step S160: for a plurality of layer blocks containing an overlapping area, acquire the layer information of each layer block in order from low to high according to the display priority order.
In one example, when reading layer blocks, the order and overlap of the plurality of layer blocks need to be considered. For example, the position distribution relationship between layer block i (block_i) and layer block j (block_j) is determined according to the configuration information; from the position distribution relationship, the arrangement order of block_i and block_j and the overlapping area between block_i and block_j can be determined. If block_i and block_j both need to be fetched in one row and there is an overlapping area between block_i and block_j, the layer information of the two layer blocks is acquired according to the arrangement order and the display priority order of block_i and block_j.
To simplify control complexity, the layer information of the lowest-priority layer block is read first, then the layer information of the next-lowest-priority layer block is read, and the layer information read later directly overwrites the layer information read earlier, until the layer information of the highest-priority layer block has been read. This greatly simplifies control and makes it convenient to manage the image in memory and display it on the screen.
In one embodiment, step S130 includes:
Step S131: read the target image information of the current row or column in the target image in a row-by-row or column-by-column manner;
Step S132: in the case that layer blocks are distributed in the current row or column, determine the arrangement order of the layer blocks according to the position distribution;
Step S133: acquire the layer information of the current row or column according to the arrangement order.
In one example, as shown in FIG. 4, the image information acquisition module includes a judgement sub-module (block_judge), a coordinate management sub-module (coor_manager), an address management sub-module (addr_manager), a request management sub-module (req_manager) and a layer block arbitration sub-module (arb). Coordinate management sub-module: maintains the coordinate calculation of the background image (target image), including the abscissa and ordinate of the background image, and also maintains the coordinate calculation of a layer block relative to the background image. Judgement sub-module: each layer block has its own independent judgement sub-module; the judgement sub-module judges, according to the ordinate information of the background image, whether to request the layer information of the layer block from the memory, and calculates which row of layer information of the layer block is requested. Address management sub-module: calculates the storage address information in the memory according to the coordinate information of the image. Request management sub-module: responsible for generating memory access requests for the background image and memory access requests for the different layer blocks. Layer block arbitration sub-module: when there are multiple memory access requests for layer blocks, the corresponding layer block is selected for memory access according to the position distribution relationship between the layer blocks.
The target image information and the layer information are usually acquired by row-by-row reading; of course, column-by-column reading is also possible. Before reading, the judgement sub-module (block_judge) needs to judge whether layer blocks are distributed in the current row. If no layer block is distributed in the current row, according to the requirements of the image fusion algorithm, one row of target image information (for example, background image data) can simply be read in order from left to right. If layer blocks are distributed in the current row, in addition to reading the target image information of the current row, the layer information of the plurality of layer blocks distributed in the current row also needs to be read. The reading process is shown in FIG. 5.
In the layer block arbitration sub-module (arb): if layer blocks are distributed in the current row, the arrangement order of the layer blocks is determined according to the position distribution. Of course, the overlap between the multiple layer blocks crossed by the current row also needs to be judged, and the display priority order of the plurality of layer blocks containing an overlapping area is determined. Then, the current row of the layer blocks is read according to the arrangement order and the display priority order. As shown in FIG. 6, for example, if the start coordinate of block_i in the current row is smaller than the start coordinate of block_j in the current row (start_i < start_j), the layer information of block_i in the current row can be acquired first and written into the second storage module, and then the layer information of block_j in the current row is fetched and written into the second storage module. For a layer block row, the read layer information can be stored after one row of layer information has been read, which ensures that the amount of stored data is minimal. If there is an overlapping area between block_i and block_j and the display priority of block_i is higher than that of block_j, the layer information of block_j in the current row is read first, and then the layer information of block_i in the current row is read; in the overlapping area, the layer information of block_i overwrites the layer information of block_j.
In one implementation, the position information includes the identification frame corresponding to the layer block, and the start coordinates, end coordinates and storage address of the identification frame in the target image, and step S133 includes:
generating a read request when the read position falls within the range of the start coordinates and the end coordinates;
acquiring the layer information of the current row or column from the memory according to the read request and the storage address and in the arrangement order, or
generating the layer information of the current row or column in the identification frame corresponding to the layer block.
In one example, in the request management sub-module (req_manager): in the process of reading the current row, as shown in FIG. 6, if the ordinate (y coordinate) being read falls within the range of the start and end coordinates of a layer block, a read request is generated. In the coordinate management sub-module (coor_manager): in the process of reading the current row from left to right, the storage address corresponding to the pixels of the background image and the read request are sent to the memory, and the background image information at this address is read from the memory. Ordinate management: if there is no layer block that can be fused in the current row, then for the y coordinate of the background image it is only necessary to add 1 to the y coordinate at the end of each row, so as to read the next row of the background image. Abscissa management: as shown in FIG. 7, in the current row, the current row of the background image that is read has a length of 1919 and is stored in the first cache module; during the reading process, layer block m (block_m) and layer block n (block_n) need to be crossed. When layer block m (block_m) and layer block n (block_n) do not overlap, the coordinate jumps directly from 0 to start coordinate _m to read the layer information, reading stops at end coordinate _m, the coordinate then jumps directly to start coordinate _n of the next layer block n, reading of the layer information starts again, and reading stops at end coordinate _n. As shown in FIG. 8, when layer block m (block_m) and layer block n (block_n) overlap, the priorities of layer block m and layer block n are compared when reading reaches the overlap: if the priority of layer block m is higher than that of layer block n, the layer information of the lower-priority layer block n at the overlap is read first, and then the layer information of the higher-priority layer block m at the overlap is read; if the priority of layer block m is lower than that of layer block n, the layer information of the lower-priority layer block m at the overlap is read first, and then the layer information of the higher-priority layer block n at the overlap is read.
In one implementation, the method further includes:
distributing the target image information to a first storage module;
performing a bilinear interpolation operation on the layer information corresponding to each layer block, and distributing the layer information obtained after the operation to a second storage module.
In one example, considering the real-time requirements of the image display device and the complexity of the design, the size of the acquired layer block is scaled, and bilinear interpolation is usually used to perform the image scaling. Since a point to be interpolated depends only on its four surrounding pixels, the interpolation operation depends on at most two rows of image data.
In one implementation, step S180 includes:
for a plurality of layer blocks containing an overlapping area, distributing the layer information corresponding to the layer block with the highest priority to the second storage module.
In one example, each layer block would use a different space in the cache; when layer blocks overlap, the layer information of the higher-priority layer block is retained and the historical layer information of the overlapped layer block is not saved, so as to avoid opening a cache for each layer block, which would occupy too much cache space. Only the layer information of the layer blocks that currently need to be fused is stored, so the amount of storage is small.
In another specific implementation, as shown in FIG. 9, an image fusion device is provided, including:
a configuration information acquisition module 110, configured to acquire configuration information corresponding to a plurality of layer blocks to be fused, the configuration information including position information of the layer blocks in a target image;
a layer block distribution determination module 120, configured to determine the position distribution of each layer block in the target image according to the configuration information;
a layer information acquisition module 130, configured to acquire target image information corresponding to the target image and acquire layer information corresponding to each layer block according to the position distribution;
an image fusion module 140, configured to fuse the target image information with the layer information corresponding to each layer block to obtain a fused image.
In one implementation, as shown in FIG. 10, the configuration information further includes a display priority order of each layer block, and the device further includes:
an overlapping area determination module 150, configured to determine the overlapping area between the layer blocks according to the position distribution;
a priority information acquisition module 160, configured to, for a plurality of layer blocks containing an overlapping area, acquire the layer information of each layer block in order from low to high according to the display priority order.
In one implementation, the layer information acquisition module 130 includes:
a target image information reading sub-module, configured to read the target image information of the current row or column in the target image in a row-by-row or column-by-column manner;
an arrangement order determination sub-module, configured to determine the arrangement order of the layer blocks according to the position distribution in the case that layer blocks are distributed in the current row or column;
a layer information reading sub-module, configured to read the layer information of the current row or column according to the arrangement order.
In one implementation, the position information includes the identification frame corresponding to the layer block, and the start coordinates, end coordinates and storage address of the identification frame in the target image, and the layer information reading sub-module includes:
a read request generation unit, configured to generate a read request when the read position falls within the range of the start coordinates and the end coordinates;
a layer information reading unit, configured to read the layer information of the current row or column from the memory according to the read request and the storage address and in the arrangement order;
a layer information generation unit, configured to generate the layer information of the current row or column in the identification frame corresponding to the layer block.
In one implementation, the device further includes:
a target image information distribution module, configured to distribute the target image information to a first storage module;
a layer information distribution module, configured to perform a bilinear interpolation operation on the layer information corresponding to each layer block and distribute the layer information obtained after the operation to a second storage module.
In one implementation, the layer information distribution module includes:
a highest-priority information distribution sub-module, configured to, for a plurality of layer blocks containing an overlapping area, distribute the layer information corresponding to the layer block with the highest priority to the second storage module.
For the functions of the modules in the devices of the embodiments of the present application, reference may be made to the corresponding description in the above method, which will not be repeated here.
According to the embodiments of the present application, the present application further provides an electronic device and a readable storage medium.
FIG. 11 is a block diagram of an electronic device for an image fusion method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices and other similar computing devices. The components shown here, their connections and relationships, and their functions are merely examples and are not intended to limit the implementations of the present application described and/or claimed herein.
As shown in FIG. 11, the electronic device includes: one or more processors 1101, a memory 1102, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The components are interconnected by different buses and may be mounted on a common motherboard or in other ways as required. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory for displaying graphical information of a Graphical User Interface (GUI) on an external input/output device (such as a display device coupled to the interface). In other implementations, multiple processors and/or multiple buses may be used together with multiple memories, if desired. Likewise, multiple electronic devices may be connected, each providing part of the necessary operations (for example, as a server array, a group of blade servers, or a multiprocessor system). In FIG. 11, one processor 1101 is taken as an example.
The memory 1102 is the non-transitory computer-readable storage medium provided by the present application. The memory stores instructions executable by at least one processor, so that the at least one processor executes the image fusion method provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions, and the computer instructions are used to cause a computer to execute the image fusion method provided by the present application.
As a non-transitory computer-readable storage medium, the memory 1102 can be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the image fusion method in the embodiments of the present application. By running the non-transitory software programs, instructions and modules stored in the memory 1102, the processor 1101 executes various functional applications and data processing of the server, that is, implements the image fusion method in the above method embodiments.
The memory 1102 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the electronic device of the image fusion method, and the like. In addition, the memory 1102 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 1102 may optionally include memories disposed remotely from the processor 1101, and these remote memories may be connected to the above electronic device via a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The above electronic device may further include an input device 1103 and an output device 1104. The processor 1101, the memory 1102, the input device 1103 and the output device 1104 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 11.
The input device 1103 can receive input numeric or character information and generate key signal input related to the user settings and function control of the above electronic device, for example a touch screen, keypad, mouse, track pad, touch pad, pointing stick, one or more mouse buttons, trackball, joystick and other input devices. The output device 1104 may include a display device, auxiliary lighting devices (for example, LEDs), tactile feedback devices (for example, vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuit systems, integrated circuit systems, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which can receive data and instructions from a storage system, at least one input device and at least one output device, and transmit data and instructions to the storage system, the at least one input device and the at least one output device.
These computing programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor, and can be implemented using high-level procedural and/or object-oriented programming languages and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, device and/or apparatus (for example, magnetic disks, optical disks, memories, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including machine-readable media that receive machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (for example, a CRT (Cathode Ray Tube) or an LCD (Liquid Crystal Display) monitor) for displaying information to the user; and a keyboard and a pointing device (for example, a mouse or a trackball) through which the user can provide input to the computer. Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, voice input, or tactile input).
The systems and techniques described here can be implemented in a computing system that includes back-end components (for example, as a data server), or a computing system that includes middleware components (for example, an application server), or a computing system that includes front-end components (for example, a user computer with a graphical user interface or a web browser through which the user can interact with implementations of the systems and techniques described here), or a computing system that includes any combination of such back-end components, middleware components, or front-end components. The components of the system can be interconnected by digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.
A computer system can include clients and servers. A client and a server are generally remote from each other and usually interact through a communication network. The relationship between client and server arises by computer programs running on the respective computers and having a client-server relationship with each other.
It should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, as long as the desired result of the technical solution disclosed in the present application can be achieved; no limitation is imposed herein.
The above specific implementations do not constitute a limitation on the protection scope of the present application. Those skilled in the art should understand that various modifications, combinations, sub-combinations and substitutions can be made according to design requirements and other factors. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included within the protection scope of the present application.

Claims (14)

  1. An image fusion method, comprising:
    acquiring configuration information corresponding to a plurality of layer blocks to be fused, the configuration information comprising position information of the layer blocks in a target image;
    determining the position distribution of each of the layer blocks in the target image according to the configuration information;
    acquiring target image information corresponding to the target image, and acquiring layer information corresponding to each of the layer blocks according to the position distribution;
    fusing the target image information with the layer information corresponding to each of the layer blocks to obtain a fused image.
  2. The method according to claim 1, wherein the configuration information further comprises a display priority order of each of the layer blocks, and the method further comprises:
    determining an overlapping area between the layer blocks according to the position distribution;
    for a plurality of layer blocks containing the overlapping area, acquiring the layer information of each of the layer blocks in order from low to high according to the display priority order.
  3. The method according to claim 1 or 2, wherein the acquiring target image information corresponding to the target image and acquiring layer information corresponding to each of the layer blocks according to the position distribution comprises:
    reading the target image information of the current row or column in the target image in a row-by-row or column-by-column manner;
    in the case that the layer blocks are distributed in the current row or column, determining the arrangement order of each of the layer blocks according to the position distribution;
    acquiring the layer information of the current row or column according to the arrangement order.
  4. The method according to claim 3, wherein the position information comprises an identification frame corresponding to the layer block, and the start coordinates, end coordinates and storage address of the identification frame in the target image, and the acquiring the layer information of the current row or column according to the arrangement order comprises:
    generating a read request when the read position falls within the range of the start coordinates and the end coordinates;
    reading the layer information of the current row or column from the memory according to the read request and the storage address and in the arrangement order, or
    generating the layer information of the current row or column in the identification frame corresponding to the layer block.
  5. The method according to any one of claims 2 to 4, further comprising:
    distributing the target image information to a first storage module;
    performing a bilinear interpolation operation on the layer information corresponding to each of the layer blocks, and distributing the layer information obtained after the operation to a second storage module.
  6. The method according to claim 5, wherein the distributing the layer information obtained after the operation to the second storage module comprises:
    for a plurality of layer blocks containing the overlapping area, distributing the layer information corresponding to the layer block with the highest priority to the second storage module.
  7. An image fusion device, comprising:
    a configuration information acquisition module, configured to acquire configuration information corresponding to a plurality of layer blocks to be fused, the configuration information comprising position information of the layer blocks in a target image;
    a layer block distribution determination module, configured to determine the position distribution of each of the layer blocks in the target image according to the configuration information;
    a layer information acquisition module, configured to acquire target image information corresponding to the target image and acquire layer information corresponding to each of the layer blocks according to the position distribution;
    an image fusion module, configured to fuse the target image information with the layer information corresponding to each of the layer blocks to obtain a fused image.
  8. The device according to claim 7, wherein the configuration information further comprises a display priority order of each of the layer blocks, and the device further comprises:
    an overlapping area determination module, configured to determine an overlapping area between the layer blocks according to the position distribution;
    wherein, for a plurality of layer blocks containing the overlapping area, the layer information of each of the layer blocks is acquired in order from low to high according to the display priority order.
  9. The device according to claim 7 or 8, wherein the layer information acquisition module comprises:
    a target image information reading sub-module, configured to read the target image information of the current row or column in the target image in a row-by-row or column-by-column manner;
    an arrangement order determination sub-module, configured to determine the arrangement order of each of the layer blocks according to the position distribution in the case that the layer blocks are distributed in the current row or column;
    a layer information reading sub-module, configured to read the layer information of the current row or column according to the arrangement order.
  10. The device according to claim 9, wherein the position information comprises an identification frame corresponding to the layer block, and the start coordinates, end coordinates and storage address of the identification frame in the target image, and the layer information reading sub-module comprises:
    a read request generation unit, configured to generate a read request when the read position falls within the range of the start coordinates and the end coordinates;
    a layer information reading unit, configured to read the layer information of the current row or column from the memory according to the read request and the storage address and in the arrangement order;
    a layer information generation unit, configured to generate the layer information of the current row or column in the identification frame corresponding to the layer block.
  11. The device according to any one of claims 8 to 10, further comprising:
    a target image information distribution module, configured to distribute the target image information to a first storage module;
    a layer information distribution module, configured to perform a bilinear interpolation operation on the layer information corresponding to each of the layer blocks and distribute the layer information obtained after the operation to a second storage module.
  12. The device according to claim 11, wherein the layer information distribution module comprises:
    a highest-priority information distribution sub-module, configured to, for a plurality of layer blocks containing the overlapping area, distribute the layer information corresponding to the layer block with the highest priority to the second storage module.
  13. An electronic device, comprising:
    at least one processor; and a memory communicatively connected to the at least one processor;
    wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1 to 6.
  14. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause the computer to perform the method according to any one of claims 1 to 6.
PCT/CN2022/073044 2021-02-24 2022-01-20 一种图像融合方法以及装置 WO2022179362A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110209219.2A CN112907496A (zh) 2021-02-24 2021-02-24 一种图像融合方法以及装置
CN202110209219.2 2021-02-24

Publications (1)

Publication Number Publication Date
WO2022179362A1 true WO2022179362A1 (zh) 2022-09-01

Family

ID=76107137

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/073044 WO2022179362A1 (zh) 2021-02-24 2022-01-20 一种图像融合方法以及装置

Country Status (2)

Country Link
CN (1) CN112907496A (zh)
WO (1) WO2022179362A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115480726A (zh) * 2022-11-15 2022-12-16 泽景(西安)汽车电子有限责任公司 一种显示方法、装置、电子设备及存储介质
CN115880156A (zh) * 2022-12-30 2023-03-31 芯动微电子科技(武汉)有限公司 一种多图层拼接显示控制方法和装置

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907496A (zh) * 2021-02-24 2021-06-04 嘉楠明芯(北京)科技有限公司 一种图像融合方法以及装置
CN116932193A (zh) * 2022-04-07 2023-10-24 华为技术有限公司 一种显示子系统的通道分配方法、装置及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090185721A1 (en) * 2007-12-29 2009-07-23 Masaki Hiraga Image data processing method and image processing apparatus
CN102184720A (zh) * 2010-06-22 2011-09-14 上海盈方微电子有限公司 一种支持多层多格式输入的图像合成显示的方法及装置
CN109448077A (zh) * 2018-11-08 2019-03-08 郑州云海信息技术有限公司 一种多图层合并的方法、装置、设备以及存储介质
CN111476066A (zh) * 2019-01-23 2020-07-31 北京奇虎科技有限公司 图像效果的处理方法、装置、计算机设备及存储介质
CN112907496A (zh) * 2021-02-24 2021-06-04 嘉楠明芯(北京)科技有限公司 一种图像融合方法以及装置

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102708280B (zh) * 2012-04-12 2015-09-23 深圳开立生物医疗科技股份有限公司 一种图像显示方法及设备
CN107945112B (zh) * 2017-11-17 2020-12-08 浙江大华技术股份有限公司 一种全景图像拼接方法及装置
CN109729274B (zh) * 2019-01-30 2021-03-09 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备及存储介质
CN111402181A (zh) * 2020-03-13 2020-07-10 北京奇艺世纪科技有限公司 图像融合方法、装置及计算机可读存储介质
CN111598796B (zh) * 2020-04-27 2023-09-05 Oppo广东移动通信有限公司 图像处理方法及装置、电子设备、存储介质
CN111768356A (zh) * 2020-06-28 2020-10-13 北京百度网讯科技有限公司 一种人脸图像融合方法、装置、电子设备及存储介质
CN112099645A (zh) * 2020-09-04 2020-12-18 北京百度网讯科技有限公司 一种输入图像的生成方法、装置、电子设备及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090185721A1 (en) * 2007-12-29 2009-07-23 Masaki Hiraga Image data processing method and image processing apparatus
CN102184720A (zh) * 2010-06-22 2011-09-14 上海盈方微电子有限公司 一种支持多层多格式输入的图像合成显示的方法及装置
CN109448077A (zh) * 2018-11-08 2019-03-08 郑州云海信息技术有限公司 一种多图层合并的方法、装置、设备以及存储介质
CN111476066A (zh) * 2019-01-23 2020-07-31 北京奇虎科技有限公司 图像效果的处理方法、装置、计算机设备及存储介质
CN112907496A (zh) * 2021-02-24 2021-06-04 嘉楠明芯(北京)科技有限公司 一种图像融合方法以及装置

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115480726A (zh) * 2022-11-15 2022-12-16 泽景(西安)汽车电子有限责任公司 一种显示方法、装置、电子设备及存储介质
CN115480726B (zh) * 2022-11-15 2023-02-28 泽景(西安)汽车电子有限责任公司 一种显示方法、装置、电子设备及存储介质
CN115880156A (zh) * 2022-12-30 2023-03-31 芯动微电子科技(武汉)有限公司 一种多图层拼接显示控制方法和装置
CN115880156B (zh) * 2022-12-30 2023-07-25 芯动微电子科技(武汉)有限公司 一种多图层拼接显示控制方法和装置

Also Published As

Publication number Publication date
CN112907496A (zh) 2021-06-04

Similar Documents

Publication Publication Date Title
WO2022179362A1 (zh) 一种图像融合方法以及装置
US7710429B2 (en) Stationary semantic zooming
US20110292060A1 (en) Frame buffer sizing to optimize the performance of on screen graphics in a digital electronic device
US10089957B2 (en) Page display method and terminal
CN110989878B (zh) 小程序中的动画展示方法、装置、电子设备及存储介质
US11403121B2 (en) Streaming per-pixel transparency information using transparency-agnostic video codecs
US9235925B2 (en) Virtual surface rendering
US9286122B2 (en) Display techniques using virtual surface allocation
US10043489B2 (en) Virtual surface blending and BLT operations
KR20130138143A (ko) 디스플레이 미러링을 위한 시스템 및 방법
US9959668B2 (en) Virtual surface compaction
CN112740278B (zh) 用于图形处理的方法及设备
WO2023160282A1 (zh) 一种显示渲染方法、装置、电子设备和可读存储介质
WO2022252675A1 (zh) 道路标注生成方法、装置、设备以及存储介质
JP2021113990A (ja) 電子地図表示方法、装置、機器及び読み取り可能な記憶媒体
US9471956B2 (en) Graphic remoting system with masked DMA and graphic processing method
US9251557B2 (en) System, method, and computer program product for recovering from a memory underflow condition associated with generating video signals
US20220189027A1 (en) Panorama Rendering Method, Electronic Device and Storage Medium
CN114201251A (zh) 降低书写痕迹显示延时的方法、装置、设备及介质
CN112419145A (zh) 一种图像数据处理方法、装置、设备及存储介质
CN116302277A (zh) 一种数据处理方法、装置、电子设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22758728

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22758728

Country of ref document: EP

Kind code of ref document: A1