CN116579965A - Multi-image fusion method and device - Google Patents
Multi-image fusion method and device
- Publication number
- CN116579965A (application number CN202310577177.7A)
- Authority
- CN
- China
- Prior art keywords
- image data
- image
- data
- mapping
- mapping matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The application discloses a multi-image fusion method and device. The method comprises the following steps: acquiring first image data and second image data; segmenting the first image data to obtain segmented image data; matching the segmented image data with the second image data according to an image mapping matrix to obtain image extraction data; and combining and displaying the image extraction data and the first image data to obtain a display result. The application addresses the technical problem that, in the prior art, a plurality of image data are fused only by superimposing or stitching them after each image has been processed separately, so that specific target points cannot be fused or processed in application scenes of high diversity and complexity, which reduces the precision and efficiency of image data fusion.
Description
Technical Field
The application relates to the field of multi-image processing, in particular to a multi-image fusion method and device.
Background
With the continuous development of intelligent science and technology, intelligent devices are used more and more widely in people's daily life, work and study, and the use of intelligent technical means improves people's quality of life and increases their learning and working efficiency.
At present, when a camera-array monitoring task is performed, light-field camera probes or camera-array equipment are generally deployed at the positions to be monitored in an application scene, and the image data of different points acquired by the multiple cameras are combined and processed, so that the user can conveniently analyze and observe the scene. However, in the prior art, a plurality of image data are fused only by superimposing or stitching them after each image has been processed separately; in application scenes of high diversity and complexity, specific target points cannot be fused or processed, which reduces the precision and efficiency of image data fusion.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiment of the application provides a multi-image fusion method and device, which at least solve the technical problem that, in the prior art, a plurality of image data are fused only by superimposing or stitching them after each image has been processed separately, so that specific target points cannot be fused or processed in application scenes of high diversity and complexity, reducing the precision and efficiency of image data fusion.
According to an aspect of an embodiment of the present application, there is provided a multi-image fusion method including: acquiring first image data and second image data; dividing the first image data to obtain divided image data; matching the segmented image data with the second image data according to an image mapping matrix to obtain image extraction data; and combining and displaying the image extraction data and the first image data to obtain a display result.
Optionally, the first image data comprises at least one piece of image data, and the second image data comprises a separate piece of image data.
Optionally, matching the segmented image data with the second image data according to an image mapping matrix to obtain the image extraction data includes: obtaining the image mapping matrix, wherein the image mapping matrix comprises:
wherein L1 to L3 are image extraction data, F1 to F3 are divided image data, and P1 to P3 are regional image data of the mapped second image data; inputting the segmented image data into the image mapping matrix to obtain a mapping region for the second image data; and extracting the second image data according to the mapping area to obtain the image extraction data.
Optionally, inputting the segmented image data into the image mapping matrix to obtain the mapping region for the second image data includes: generating an image mapping coordinate set according to the image mapping matrix and the segmented image data; and marking the second image data according to the image mapping coordinate set to obtain the mapping region.
According to another aspect of the embodiment of the present application, there is also provided a multi-image fusion apparatus including: the acquisition module is used for acquiring the first image data and the second image data; the segmentation module is used for carrying out segmentation processing on the first image data to obtain segmented image data; the matching module is used for matching the segmented image data with the second image data according to an image mapping matrix to obtain image extraction data; and the merging module is used for merging and displaying the image extraction data and the first image data to obtain a display result.
Optionally, the first image data comprises at least one piece of image data, and the second image data comprises a separate piece of image data.
Optionally, the segmentation module includes: an obtaining unit, configured to obtain the image mapping matrix, where the image mapping matrix includes:
wherein L1 to L3 are image extraction data, F1 to F3 are divided image data, and P1 to P3 are regional image data of the mapped second image data; an input unit configured to input the divided image data to the image mapping matrix, to obtain a mapping region for the second image data; and the extraction unit is used for extracting the second image data according to the mapping area to obtain the image extraction data.
Optionally, the input unit includes: a generating subunit, configured to generate an image mapping coordinate set according to the image mapping matrix and the segmented image data; and the marking subunit is used for marking the second image data according to the image mapping coordinate set to obtain the mapping region.
According to another aspect of the embodiment of the present application, there is further provided a nonvolatile storage medium, where the nonvolatile storage medium includes a stored program, and when the program runs, the program controls a device in which the nonvolatile storage medium is located to execute a multi-image fusion method.
According to another aspect of the embodiment of the present application, there is also provided an electronic device including a processor and a memory; the memory stores computer readable instructions, and the processor is configured to execute the computer readable instructions, where the computer readable instructions execute a multi-image fusion method when executed.
In the embodiment of the application, first image data and second image data are acquired; the first image data is segmented to obtain segmented image data; the segmented image data is matched with the second image data according to an image mapping matrix to obtain image extraction data; and the image extraction data and the first image data are combined and displayed to obtain a display result. This method solves the technical problem that, in the prior art, a plurality of image data are fused only by superimposing or stitching them after each image has been processed separately, so that specific target points cannot be fused or processed in application scenes of high diversity and complexity, reducing the precision and efficiency of image data fusion.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flow chart of a multiple image fusion method according to an embodiment of the application;
fig. 2 is a block diagram of a multi-image fusion apparatus according to an embodiment of the present application;
fig. 3 is a block diagram of a terminal device for performing the method according to the application according to an embodiment of the application;
fig. 4 is a memory unit for holding or carrying program code for implementing a method according to the application, according to an embodiment of the application.
Detailed Description
In order that those skilled in the art may better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without making any inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the present application, there is provided a method embodiment of a multi-image fusion method, it being noted that the steps shown in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that shown or described herein.
Example 1
Fig. 1 is a flowchart of a multi-image fusion method according to an embodiment of the present application, as shown in fig. 1, the method includes the steps of:
step S102, acquiring first image data and second image data.
Specifically, in the prior art, a plurality of image data are fused only by superimposing or stitching them after each image has been processed separately, so that specific target points cannot be fused or processed in application scenes of high diversity and complexity, which reduces the precision and efficiency of image data fusion. To solve this technical problem, a plurality of image data first need to be acquired and collected through a camera array. The plurality of image data may comprise first image data and second image data; optionally, the first image data comprises at least one piece of image data, and the second image data comprises a separate piece of image data.
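For illustration only, a minimal acquisition sketch is given below. It assumes the camera array and the separate camera are exposed through OpenCV device indices; the indices, the number of cameras and the frame format are assumptions of this sketch, not part of the disclosure.

```python
# Illustrative sketch: read the first image data from several cameras of an array
# and the second image data from one separate camera. Device indices are assumed.
import cv2

def acquire_images(array_indices=(0, 1, 2), single_index=3):
    first_image_data = []
    for idx in array_indices:
        cap = cv2.VideoCapture(idx)          # open one camera of the array
        ok, frame = cap.read()
        cap.release()
        if ok:
            first_image_data.append(frame)   # H x W x 3 BGR ndarray
    cap = cv2.VideoCapture(single_index)     # the separate camera for the second image
    ok, second_image_data = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("failed to read the second image data")
    return first_image_data, second_image_data
```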
Step S104, the first image data is subjected to segmentation processing, and segmented image data are obtained.
Specifically, in order to perform fusion optimization on the first image data and the second image data in subsequent analysis and processing according to the embodiment of the present application, the first image data needs to be segmented; that is, an image in the first image data is divided into a plurality of sub-image data, namely the segmented image data, according to a segmentation rule such as a pixel ratio.
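As a sketch under stated assumptions, the segmentation step below splits an image of the first image data into a regular grid of sub-images; the grid counts stand in for the pixel-ratio segmentation rule, and the 2x2 default is an assumption of the sketch, not a requirement of the disclosure.

```python
# Segmentation sketch (assumed grid-style rule): divide one image of the first
# image data into rows x cols sub-images and keep their positions.
import numpy as np

def segment_image(image: np.ndarray, rows: int = 2, cols: int = 2):
    h, w = image.shape[:2]
    row_edges = np.linspace(0, h, rows + 1, dtype=int)
    col_edges = np.linspace(0, w, cols + 1, dtype=int)
    tiles, boxes = [], []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = int(row_edges[r]), int(row_edges[r + 1])
            x0, x1 = int(col_edges[c]), int(col_edges[c + 1])
            tiles.append(image[y0:y1, x0:x1])   # segmented image data
            boxes.append((y0, y1, x0, x1))      # tile position in the first image
    return tiles, boxes
```

The tile positions are returned together with the tiles so that later steps can place extracted data back into the first image.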
Step S106, matching the segmented image data with the second image data according to an image mapping matrix to obtain image extraction data.
Optionally, matching the segmented image data with the second image data according to an image mapping matrix to obtain image extraction data includes: obtaining the image mapping matrix, wherein the image mapping matrix comprises:
wherein L1 to L3 are image extraction data, F1 to F3 are divided image data, and P1 to P3 are regional image data of the mapped second image data; inputting the segmented image data into the image mapping matrix to obtain a mapping region for the second image data; and extracting the second image data according to the mapping area to obtain the image extraction data.
Specifically, the second image data is determined according to the specific condition of the first image data; that is, the plurality of sub-image data formed by dividing the first image data each represent a different image target or image content, and mapping matching is performed according to these contents, so that how the other image data to be fused should be fused into the first image data can be found more specifically, and a new fused image is obtained. For example, matching the segmented image data with the second image data according to the image mapping matrix to obtain image extraction data includes: obtaining the image mapping matrix, wherein the image mapping matrix comprises:
wherein L1 to L3 are image extraction data, F1 to F3 are divided image data, and P1 to P3 are regional image data of the mapped second image data; inputting the segmented image data into the image mapping matrix to obtain a mapping region for the second image data; and extracting the second image data according to the mapping area to obtain the image extraction data.
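The concrete image mapping matrix of the disclosure is presented as a figure and is not reproduced here. The sketch below therefore models the mapping in a simplified, assumed form: a list of rectangular regions, one per segmented sub-image F1 to Fn, so that the crops taken from the second image play the role of the extraction data L1 to Ln; the rectangular-region form is an assumption of the sketch, not the disclosed matrix itself.

```python
# Matching sketch under an explicit assumption: the mapping is modelled as a
# rectangle (y0, y1, x0, x1) of the second image per segmented sub-image.
import numpy as np

def match_with_mapping(segmented, second_image, mapping_regions):
    """segmented: sub-images F1..Fn; mapping_regions: one rectangle per sub-image."""
    extraction_data = []
    for _tile, (y0, y1, x0, x1) in zip(segmented, mapping_regions):
        region = second_image[y0:y1, x0:x1]    # mapped region P_i of the second image
        extraction_data.append(region.copy())  # extracted data L_i
    return extraction_data
```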
And step S108, combining and displaying the image extraction data and the first image data to obtain a display result.
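A minimal display sketch follows. It assumes that each piece of extraction data is blended back into the tile position of the corresponding segmented sub-image with fixed weights; the blend weights and the window-based display are assumptions of this sketch, since the disclosure only requires that the image extraction data and the first image data be combined and displayed.

```python
# Display sketch (illustrative): blend each extracted crop into its tile of the
# first image and show the fused result. The 0.5/0.5 weights are assumed.
import cv2

def merge_and_display(first_image, extraction_data, tile_boxes, alpha=0.5):
    fused = first_image.copy()
    for crop, (y0, y1, x0, x1) in zip(extraction_data, tile_boxes):
        resized = cv2.resize(crop, (x1 - x0, y1 - y0))   # match the tile size
        fused[y0:y1, x0:x1] = cv2.addWeighted(
            fused[y0:y1, x0:x1], 1 - alpha, resized, alpha, 0)
    cv2.imshow("fusion result", fused)
    cv2.waitKey(0)
    return fused
```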
Optionally, inputting the segmented image data into the image mapping matrix to obtain the mapping region for the second image data includes: generating an image mapping coordinate set according to the image mapping matrix and the segmented image data; and marking the second image data according to the image mapping coordinate set to obtain the mapping region.
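As a sketch of this optional step under an assumed coordinate convention, the mapping coordinate set below is a list of rectangle corner points derived from the mapped regions, and the second image is marked by drawing those rectangles; the marker color and thickness are assumptions of the sketch.

```python
# Coordinate-set sketch (assumed convention): corner points per mapped region,
# drawn on a copy of the second image to mark the mapping region.
import cv2

def mark_mapping_regions(second_image, mapping_regions):
    coord_set = [((x0, y0), (x1, y1)) for (y0, y1, x0, x1) in mapping_regions]
    marked = second_image.copy()
    for top_left, bottom_right in coord_set:
        cv2.rectangle(marked, top_left, bottom_right, (0, 255, 0), 2)  # assumed style
    return coord_set, marked
```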
This embodiment solves the technical problem that, in the prior art, a plurality of image data are fused only by superimposing or stitching them after each image has been processed separately, so that specific target points cannot be fused or processed in application scenes of high diversity and complexity, which reduces the precision and efficiency of image data fusion.
Example two
Fig. 2 is a block diagram of a multi-image fusion apparatus according to an embodiment of the present application, as shown in fig. 2, the apparatus including:
the acquiring module 20 is configured to acquire the first image data and the second image data.
Specifically, in the prior art, a plurality of image data are fused only by superimposing or stitching them after each image has been processed separately, so that specific target points cannot be fused or processed in application scenes of high diversity and complexity, which reduces the precision and efficiency of image data fusion. To solve this technical problem, a plurality of image data first need to be acquired and collected through a camera array. The plurality of image data may comprise first image data and second image data; optionally, the first image data comprises at least one piece of image data, and the second image data comprises a separate piece of image data.
The segmentation module 22 is configured to perform segmentation processing on the first image data to obtain segmented image data.
Specifically, in order to perform fusion optimization on the first image data and the second image data in subsequent analysis and processing according to the embodiment of the present application, the first image data needs to be segmented; that is, an image in the first image data is divided into a plurality of sub-image data, namely the segmented image data, according to a segmentation rule such as a pixel ratio.
And a matching module 24, configured to match the segmented image data with the second image data according to an image mapping matrix, so as to obtain image extraction data.
Optionally, the segmentation module includes: an obtaining unit, configured to obtain the image mapping matrix, where the image mapping matrix includes:
wherein L1 to L3 are image extraction data, F1 to F3 are divided image data, and P1 to P3 are regional image data of the mapped second image data; an input unit configured to input the divided image data to the image mapping matrix, to obtain a mapping region for the second image data; and the extraction unit is used for extracting the second image data according to the mapping area to obtain the image extraction data.
Specifically, the second image data is determined according to the specific condition of the first image data; that is, the plurality of sub-image data formed by dividing the first image data each represent a different image target or image content, and mapping matching is performed according to these contents, so that how the other image data to be fused should be fused into the first image data can be found more specifically, and a new fused image is obtained. For example, matching the segmented image data with the second image data according to the image mapping matrix to obtain image extraction data includes: obtaining the image mapping matrix, wherein the image mapping matrix comprises:
wherein L1 to L3 are image extraction data, F1 to F3 are divided image data, and P1 to P3 are regional image data of the mapped second image data; inputting the segmented image data into the image mapping matrix to obtain a mapping region for the second image data; and extracting the second image data according to the mapping area to obtain the image extraction data.
And the merging module 26 is configured to merge and display the image extraction data and the first image data to obtain a display result.
Optionally, the input unit includes: a generating subunit, configured to generate an image mapping coordinate set according to the image mapping matrix and the segmented image data; and the marking subunit is used for marking the second image data according to the image mapping coordinate set to obtain the mapping region.
This embodiment solves the technical problem that, in the prior art, a plurality of image data are fused only by superimposing or stitching them after each image has been processed separately, so that specific target points cannot be fused or processed in application scenes of high diversity and complexity, which reduces the precision and efficiency of image data fusion.
According to another aspect of the embodiment of the present application, there is further provided a nonvolatile storage medium, where the nonvolatile storage medium includes a stored program, and when the program runs, the program controls a device in which the nonvolatile storage medium is located to execute a multi-image fusion method.
Specifically, the method comprises the following steps: acquiring first image data and second image data; dividing the first image data to obtain divided image data; matching the segmented image data with the second image data according to an image mapping matrix to obtain image extraction data; and combining and displaying the image extraction data and the first image data to obtain a display result. Optionally, the first image data comprises at least one piece of image data, and the second image data comprises a separate piece of image data. Optionally, matching the segmented image data with the second image data according to an image mapping matrix to obtain the image extraction data includes: obtaining the image mapping matrix, wherein the image mapping matrix comprises: wherein L1 to L3 are image extraction data, F1 to F3 are divided image data, and P1 to P3 are regional image data of the mapped second image data; inputting the segmented image data into the image mapping matrix to obtain a mapping region for the second image data; and extracting the second image data according to the mapping region to obtain the image extraction data. Optionally, inputting the segmented image data into the image mapping matrix to obtain the mapping region for the second image data includes: generating an image mapping coordinate set according to the image mapping matrix and the segmented image data; and marking the second image data according to the image mapping coordinate set to obtain the mapping region.
According to another aspect of the embodiment of the present application, there is also provided an electronic device including a processor and a memory; the memory stores computer readable instructions, and the processor is configured to execute the computer readable instructions, where the computer readable instructions execute a multi-image fusion method when executed.
Specifically, the method comprises the following steps: acquiring first image data and second image data; dividing the first image data to obtain divided image data; matching the segmented image data with the second image data according to an image mapping matrix to obtain image extraction data; and combining and displaying the image extraction data and the first image data to obtain a display result. Optionally, the first image data comprises at least one piece of image data, and the second image data comprises a separate piece of image data. Optionally, matching the segmented image data with the second image data according to the image mapping matrix to obtain the image extraction data includes: obtaining the image mapping matrix, wherein the image mapping matrix comprises: wherein L1 to L3 are image extraction data, F1 to F3 are divided image data, and P1 to P3 are regional image data of the mapped second image data; inputting the segmented image data into the image mapping matrix to obtain a mapping region for the second image data; and extracting the second image data according to the mapping region to obtain the image extraction data. Optionally, inputting the segmented image data into the image mapping matrix to obtain the mapping region for the second image data includes: generating an image mapping coordinate set according to the image mapping matrix and the segmented image data; and marking the second image data according to the image mapping coordinate set to obtain the mapping region.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, fig. 3 is a schematic hardware structure of a terminal device according to an embodiment of the present application. As shown in fig. 3, the terminal device may include an input device 30, a processor 31, an output device 32, a memory 33, and at least one communication bus 34. The communication bus 34 is used to enable communication connections between the elements. The memory 33 may comprise a high-speed RAM memory or may further comprise a non-volatile memory NVM, such as at least one magnetic disk memory, in which various programs may be stored for performing various processing functions and implementing the method steps of the present embodiment.
Alternatively, the processor 31 may be implemented as, for example, a central processing unit (Central Processing Unit, abbreviated as CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the processor 31 is coupled to the input device 30 and the output device 32 through wired or wireless connections.
Alternatively, the input device 30 may include a variety of input devices, for example at least one of a user-oriented user interface, a device-oriented device interface, a programmable software interface, a camera, and a sensor. Optionally, the device-oriented device interface may be a wired interface for data transmission between devices, or a hardware plug-in interface (such as a USB interface or a serial port) for data transmission between devices; optionally, the user-oriented user interface may be, for example, user-oriented control keys, a voice input device for receiving voice input, or a touch-sensing device (e.g., a touch screen or touch pad with touch-sensing functionality) for receiving a user's touch input; optionally, the programmable software interface may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip; optionally, a transceiver with a communication function, such as a radio-frequency transceiver chip, a baseband processing chip, or a transceiver antenna, may also be included. An audio input device such as a microphone may receive voice data. The output device 32 may include a display, an audio output device, and the like.
In this embodiment, the processor of the terminal device may include functions for executing each module of the data processing apparatus in each device, and specific functions and technical effects may be referred to the above embodiments and are not described herein again.
Fig. 4 is a schematic hardware structure of a terminal device according to another embodiment of the present application. Fig. 4 is a specific embodiment of the implementation of fig. 3. As shown in fig. 4, the terminal device of the present embodiment includes a processor 41 and a memory 42.
The processor 41 executes the computer program code stored in the memory 42 to implement the methods of the above-described embodiments.
The memory 42 is configured to store various types of data to support operation at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, video, etc. The memory 42 may include a random access memory (random access memory, simply referred to as RAM) and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
Optionally, a processor 41 is provided in the processing assembly 40. The terminal device may further include: a communication component 43, a power supply component 44, a multimedia component 45, an audio component 46, an input/output interface 47 and/or a sensor component 48. The components and the like specifically included in the terminal device are set according to actual requirements, which are not limited in this embodiment.
The processing component 40 generally controls the overall operation of the terminal device. The processing component 40 may include one or more processors 41 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 40 may include one or more modules that facilitate interactions between the processing component 40 and other components. For example, processing component 40 may include a multimedia module to facilitate interaction between multimedia component 45 and processing component 40.
The power supply assembly 44 provides power to the various components of the terminal device. Power supply components 44 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for terminal devices.
The multimedia component 45 comprises a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a liquid crystal display (LCD) and a touch panel (TP). If the display screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation.
The audio component 46 is configured to output and/or input audio signals. For example, the audio component 46 includes a Microphone (MIC) configured to receive external audio signals when the terminal device is in an operational mode, such as a speech recognition mode. The received audio signals may be further stored in the memory 42 or transmitted via the communication component 43. In some embodiments, audio assembly 46 further includes a speaker for outputting audio signals.
The input/output interface 47 provides an interface between the processing assembly 40 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: volume button, start button and lock button.
The sensor assembly 48 includes one or more sensors for providing status assessment of various aspects for the terminal device. For example, the sensor assembly 48 may detect the open/closed state of the terminal device, the relative positioning of the assembly, the presence or absence of user contact with the terminal device. The sensor assembly 48 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor assembly 48 may also include a camera or the like.
The communication component 43 is configured to facilitate wired or wireless communication between the terminal device and other devices. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot for inserting a SIM card, so that the terminal device can log into a GPRS network and establish communication with a server through the Internet.
From the above, it will be appreciated that the communication component 43, the audio component 46, and the input/output interface 47, the sensor component 48 referred to in the embodiment of fig. 4 may be implemented as an input device in the embodiment of fig. 3.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a Read-only Memory (ROM), a random access Memory (RAM, random Access Memory), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely a preferred embodiment of the present application, and it should be noted that modifications and adaptations may be made by those skilled in the art without departing from the principles of the present application; such modifications and adaptations are also intended to fall within the scope of the present application.
Claims (10)
1. A method of multiple image fusion, comprising:
acquiring first image data and second image data;
dividing the first image data to obtain divided image data;
matching the segmented image data with the second image data according to an image mapping matrix to obtain image extraction data;
and combining and displaying the image extraction data and the first image data to obtain a display result.
2. The method of claim 1, wherein the first image data comprises at least one piece of image data and the second image data comprises a separate piece of image data.
3. The method of claim 1, wherein matching the segmented image data with the second image data according to an image mapping matrix to obtain image extraction data comprises:
obtaining the image mapping matrix, wherein the image mapping matrix comprises:
wherein L1 to L3 are image extraction data, F1 to F3 are divided image data, and P1 to P3 are regional image data of the mapped second image data;
inputting the segmented image data into the image mapping matrix to obtain a mapping region for the second image data;
and extracting the second image data according to the mapping area to obtain the image extraction data.
4. A method according to claim 3, wherein said inputting the segmented image data into the image mapping matrix to obtain a mapped region for the second image data comprises:
generating an image mapping coordinate set according to the image mapping matrix and the segmented image data;
and marking the second image data according to the image mapping coordinate set to obtain the mapping region.
5. A multiple image fusion apparatus, comprising:
the acquisition module is used for acquiring the first image data and the second image data;
the segmentation module is used for carrying out segmentation processing on the first image data to obtain segmented image data;
the matching module is used for matching the segmented image data with the second image data according to an image mapping matrix to obtain image extraction data;
and the merging module is used for merging and displaying the image extraction data and the first image data to obtain a display result.
6. The apparatus of claim 5, wherein the first image data comprises at least one piece of image data and the second image data comprises a separate piece of image data.
7. The apparatus of claim 5, wherein the partitioning module comprises:
an obtaining unit, configured to obtain the image mapping matrix, where the image mapping matrix includes:
wherein L1 to L3 are image extraction data, F1 to F3 are divided image data, and P1 to P3 are regional image data of the mapped second image data;
an input unit configured to input the divided image data to the image mapping matrix, to obtain a mapping region for the second image data;
and the extraction unit is used for extracting the second image data according to the mapping area to obtain the image extraction data.
8. The apparatus of claim 7, wherein the input unit comprises:
a generating subunit, configured to generate an image mapping coordinate set according to the image mapping matrix and the segmented image data;
and the marking subunit is used for marking the second image data according to the image mapping coordinate set to obtain the mapping region.
9. A non-volatile storage medium, characterized in that the non-volatile storage medium comprises a stored program, wherein the program, when run, controls a device in which the non-volatile storage medium is located to perform the method of any one of claims 1 to 4.
10. An electronic device comprising a processor and a memory; the memory stores computer readable instructions, and the processor is configured to execute the computer readable instructions, wherein the computer readable instructions, when executed, perform the method of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310577177.7A CN116579965B (en) | 2023-05-22 | 2023-05-22 | Multi-image fusion method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310577177.7A CN116579965B (en) | 2023-05-22 | 2023-05-22 | Multi-image fusion method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116579965A true CN116579965A (en) | 2023-08-11 |
CN116579965B CN116579965B (en) | 2024-01-19 |
Family
ID=87539339
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310577177.7A Active CN116579965B (en) | 2023-05-22 | 2023-05-22 | Multi-image fusion method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116579965B (en) |
- 2023-05-22: application CN202310577177.7A filed in CN; granted as CN116579965B (status: Active)
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106709898A (en) * | 2017-03-13 | 2017-05-24 | 微鲸科技有限公司 | Image fusing method and device |
WO2019184709A1 (en) * | 2018-03-29 | 2019-10-03 | 上海智瞳通科技有限公司 | Data processing method and device based on multi-sensor fusion, and multi-sensor fusion method |
US20200065632A1 (en) * | 2018-08-22 | 2020-02-27 | Alibaba Group Holding Limited | Image processing method and apparatus |
US10818071B1 (en) * | 2019-07-26 | 2020-10-27 | Google Llc | Image-based geometric fusion of multiple depth images using ray casting |
US20210295467A1 (en) * | 2020-03-23 | 2021-09-23 | Ke.Com (Beijing) Technology Co., Ltd. | Method for merging multiple images and post-processing of panorama |
US20210304413A1 (en) * | 2020-12-18 | 2021-09-30 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Image Processing Method and Device, and Electronic Device |
CN113450253A (en) * | 2021-05-20 | 2021-09-28 | 北京城市网邻信息技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN115578290A (en) * | 2022-11-01 | 2023-01-06 | 北京拙河科技有限公司 | Image refining method and device based on high-precision shooting matrix |
CN117036300A (en) * | 2023-08-14 | 2023-11-10 | 东南大学 | Road surface crack identification method based on point cloud-RGB heterogeneous image multistage registration mapping |
Non-Patent Citations (2)
Title |
---|
Nicholas LaHaye et al., "A Quantitative Validation of Multi-Modal Image Fusion and Segmentation for Object Detection and Tracking", Remote Sensing *
Zhao Xuejun et al., "A Remote Sensing Image Fusion Method Based on Deep Learning" (一种基于深度学习的遥感图像融合方法), Information & Communication (信息通信), no. 5 *
Also Published As
Publication number | Publication date |
---|---|
CN116579965B (en) | 2024-01-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115623336B (en) | Image tracking method and device for hundred million-level camera equipment | |
CN115631122A (en) | Image optimization method and device for edge image algorithm | |
CN116614453B (en) | Image transmission bandwidth selection method and device based on cloud interconnection | |
CN116579965B (en) | Multi-image fusion method and device | |
CN115334291A (en) | Tunnel monitoring method and device based on hundred million-level pixel panoramic compensation | |
CN115527045A (en) | Image identification method and device for snow field danger identification | |
CN114866702A (en) | Multi-auxiliary linkage camera shooting technology-based border monitoring and collecting method and device | |
CN116228593B (en) | Image perfecting method and device based on hierarchical antialiasing | |
CN116579964B (en) | Dynamic frame gradual-in gradual-out dynamic fusion method and device | |
CN116468883B (en) | High-precision image data volume fog recognition method and device | |
CN115345808B (en) | Picture generation method and device based on multi-element information acquisition | |
CN116758165B (en) | Image calibration method and device based on array camera | |
CN116757981A (en) | Multi-terminal image fusion method and device | |
CN115984333B (en) | Smooth tracking method and device for airplane target | |
CN115511735B (en) | Snow field gray scale picture optimization method and device | |
CN116309523A (en) | Dynamic frame image dynamic fuzzy recognition method and device | |
CN117896625A (en) | Picture imaging method and device based on low-altitude high-resolution analysis | |
CN116389915B (en) | Method and device for reducing flicker of light field camera | |
CN116452481A (en) | Multi-angle combined shooting method and device | |
CN116402935B (en) | Image synthesis method and device based on ray tracing algorithm | |
CN115858240B (en) | Optical camera data backup method and device | |
CN116664413B (en) | Image volume fog eliminating method and device based on Abbe convergence operator | |
CN116017128A (en) | Edge camera auxiliary image construction method and device | |
CN117351341A (en) | Unmanned aerial vehicle fish school identification method and device based on decomposition optimization | |
CN118777780A (en) | Power transmission line fault positioning data optimization method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |