CN116757983A - Main and auxiliary image fusion method and device

Main and auxiliary image fusion method and device

Info

Publication number
CN116757983A
CN116757983A (application CN202310806625.6A)
Authority
CN
China
Prior art keywords
image data
fused
original image
fusion
preset rule
Prior art date
Legal status
Granted
Application number
CN202310806625.6A
Other languages
Chinese (zh)
Other versions
CN116757983B (en)
Inventor
温建伟
邓迪旻
袁潮
温亚磊
Current Assignee
Beijing Zhuohe Technology Co Ltd
Original Assignee
Beijing Zhuohe Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zhuohe Technology Co Ltd filed Critical Beijing Zhuohe Technology Co Ltd
Priority to CN202310806625.6A
Publication of CN116757983A
Application granted
Publication of CN116757983B
Active legal status
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20112: Image segmentation details
    • G06T 2207/20132: Image cropping
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Abstract

The application discloses a main and auxiliary image fusion method and device. The method comprises the following steps: collecting first original image data of a main device and second original image data of an auxiliary device; cropping the first original image data according to a preset rule to obtain first image data to be fused; calibrating the second original image data according to parameters of the auxiliary device to obtain a marking result; and generating second image data to be fused according to the marking result, and performing an image fusion operation on the first image data to be fused and the second image data to be fused to obtain an image fusion result. The application addresses the technical problem that, in the prior art, fusion of main and auxiliary images merely arranges them in rows and columns for display, or simply stitches them together, and cannot select auxiliary image data according to the application scene of the image, which reduces the efficiency and quality of the image fusion process.

Description

Main and auxiliary image fusion method and device
Technical Field
The application relates to the field of image processing, and in particular to a main and auxiliary image fusion method and device.
Background
With the continuous development of intelligent science and technology, intelligent devices are increasingly used in people's daily life, work and study; these intelligent technical means improve people's quality of life and increase their learning and working efficiency.
Currently, in the image processing performed by the array devices of an image capturing system, a single image captured by a single camera is usually insufficient to meet the requirements of field applications. For this reason, multiple image capturing devices are often used to cover multiple monitoring angles, and a main image is fused with a plurality of auxiliary images. In the prior art, however, this fusion merely arranges the main and auxiliary images in rows and columns for display, or simply stitches them together; the auxiliary image data cannot be selected according to the application scene of the image, which reduces the efficiency and quality of the image fusion process.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the present application provide a main and auxiliary image fusion method and device, which at least solve the technical problems that in the prior art the main and auxiliary images are merely arranged in rows and columns for display or simply stitched together, that auxiliary image data cannot be selected according to the application scene of the image, and that the efficiency and quality of the image fusion process are therefore reduced.
According to one aspect of the embodiments of the present application, a primary and secondary image fusion method is provided, including: collecting first original image data of a main device and second original image data of an auxiliary device; cropping the first original image data according to a preset rule to obtain first image data to be fused; calibrating the second original image data according to parameters of the auxiliary device to obtain a marking result; and generating second image data to be fused according to the marking result, and performing an image fusion operation on the first image data to be fused and the second image data to be fused to obtain an image fusion result.
Optionally, performing the cropping operation on the first original image data according to the preset rule to obtain the first image data to be fused includes: identifying boundary information of the first original image data according to an edge identification model; obtaining the preset rule according to the boundary information, where the preset rule is used to define a core data area of the first original image data; and cropping the first original image data using the preset rule to obtain the first image data to be fused.
Optionally, calibrating the second original image data according to the parameters of the auxiliary device to obtain the marking result includes: collecting the parameters of the auxiliary device, where the parameters include a shooting function, a shooting area and a compensation range; and calibrating the parameters of the auxiliary device on the picture of the second original image data to obtain the marking result, where the marking result includes all of the calibrated second original image data.
Optionally, generating the second image data to be fused according to the marking result and performing the image fusion operation on the first image data to be fused and the second image data to be fused to obtain the image fusion result includes: generating the second image data to be fused according to the marking result and the preset rule; and fusing the second image data to be fused with the first image data to be fused using an image stitching algorithm to obtain the fusion result.
According to another aspect of the embodiments of the present application, a primary and secondary image fusion apparatus is also provided, including: an acquisition module, configured to collect the first original image data of the main device and the second original image data of the auxiliary device; a cropping module, configured to crop the first original image data according to a preset rule to obtain first image data to be fused; a calibration module, configured to calibrate the second original image data according to the parameters of the auxiliary device to obtain a marking result; and a fusion module, configured to generate second image data to be fused according to the marking result and perform an image fusion operation on the first image data to be fused and the second image data to be fused to obtain an image fusion result.
Optionally, the cropping module includes: an identification unit configured to identify boundary information of the first original image data according to an edge identification model; an obtaining unit configured to obtain the preset rule according to the boundary information, where the preset rule is used to define a core data area of the first original image data; and a cropping unit configured to crop the first original image data using the preset rule to obtain the first image data to be fused.
Optionally, the calibration module includes: an acquisition unit configured to collect the parameters of the auxiliary device, where the parameters include a shooting function, a shooting area and a compensation range; and a calibration unit configured to calibrate the parameters of the auxiliary device on the picture of the second original image data to obtain the marking result, where the marking result includes all of the calibrated second original image data.
Optionally, the fusion module includes: a generating unit configured to generate the second image data to be fused according to the marking result and the preset rule; and a fusion unit configured to fuse the second image data to be fused with the first image data to be fused using an image stitching algorithm to obtain the fusion result.
According to another aspect of the embodiments of the present application, a nonvolatile storage medium is further provided. The nonvolatile storage medium includes a stored program, and when the program runs, a device in which the nonvolatile storage medium is located is controlled to execute the primary and secondary image fusion method.
According to another aspect of the embodiments of the present application, an electronic device is also provided, including a processor and a memory. The memory stores computer readable instructions, and the processor is configured to run the computer readable instructions, which, when executed, perform the primary and secondary image fusion method.
In the embodiments of the present application, first original image data of a main device and second original image data of an auxiliary device are collected; the first original image data is cropped according to a preset rule to obtain first image data to be fused; the second original image data is calibrated according to parameters of the auxiliary device to obtain a marking result; and second image data to be fused is generated according to the marking result, after which an image fusion operation is performed on the first image data to be fused and the second image data to be fused to obtain an image fusion result. This solves the technical problems that in the prior art the main and auxiliary images are merely arranged in rows and columns for display or simply stitched together, that auxiliary image data cannot be selected according to the application scene of the image, and that the efficiency and quality of the image fusion process are reduced.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flowchart of a primary and secondary image fusion method according to an embodiment of the application;
FIG. 2 is a block diagram of a main and auxiliary image fusion apparatus according to an embodiment of the present application;
FIG. 3 is a block diagram of a terminal device for performing the method according to the application, according to an embodiment of the application;
FIG. 4 shows a memory unit for holding or carrying program code for implementing a method according to the application, according to an embodiment of the application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, and not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the present application, a method embodiment of a primary and secondary image fusion method is provided. It should be noted that the steps illustrated in the flowchart of the accompanying figures may be performed in a computer system, for example as a set of computer-executable instructions, and, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that given here.
Example 1
FIG. 1 is a flowchart of a primary and secondary image fusion method according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
Step S102, collecting first original image data of a main device and second original image data of an auxiliary device.
Specifically, in order to solve the technical problem that in the prior art the fusion of main and auxiliary images merely arranges them in rows and columns for display, or simply stitches them together, so that auxiliary image data cannot be selected according to the application scene of the image and the efficiency and quality of the image fusion process are reduced, the embodiment of the present application first collects the first original image data from the main device and the second original image data from the auxiliary devices as the raw material for the subsequent fusion.
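For illustration only, the following is a minimal sketch of this collection step in Python with OpenCV. The device indices, the helper name collect_raw_frames, and the use of locally attached cameras are all assumptions made for the example; the patent does not specify how the main and auxiliary devices are accessed.

```python
import cv2

def collect_raw_frames(main_index=0, aux_indices=(1, 2)):
    """Grab one frame of first original image data from the main device
    and one frame of second original image data per auxiliary device."""
    frames = {}
    devices = [("main", main_index)] + [(f"aux{i}", idx)
                                        for i, idx in enumerate(aux_indices)]
    for name, idx in devices:
        cap = cv2.VideoCapture(idx)   # open the capture device by index
        ok, frame = cap.read()        # read a single raw frame
        cap.release()
        if not ok:
            raise RuntimeError(f"failed to read from device {idx}")
        frames[name] = frame
    return frames
```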
Step S104, performing a cropping operation on the first original image data according to a preset rule to obtain first image data to be fused.
Optionally, performing the cropping operation on the first original image data according to the preset rule to obtain the first image data to be fused includes: identifying boundary information of the first original image data according to an edge identification model; obtaining the preset rule according to the boundary information, where the preset rule is used to define a core data area of the first original image data; and cropping the first original image data using the preset rule to obtain the first image data to be fused.
Specifically, in the embodiment of the present application, the image data of the main image to be combined is obtained by collecting the first original image data. However, in order to combine the main image data with diversified auxiliary image data, the main image data needs to be adjusted first: its boundary may be trimmed to obtain fusible image data consisting mainly of the core image data, into which the processed auxiliary image data can be merged in subsequent processing, thereby improving the quality and effect of the combined image. For example, the cropping operation proceeds as described above: boundary information of the first original image data is identified by the edge identification model, the preset rule defining the core data area is obtained from that boundary information, and the first original image data is cropped with the preset rule to obtain the first image data to be fused.
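The patent does not name a concrete edge identification model or preset rule, so the sketch below uses Canny edge detection as a stand-in for the model and takes the preset rule to be the bounding box of the detected edges, treated as the core data area; both choices are assumptions for illustration only.

```python
import cv2
import numpy as np

def crop_core_area(first_raw, low=50, high=150, margin=10):
    """Crop the first original image data down to its core data area."""
    gray = cv2.cvtColor(first_raw, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)      # boundary information
    ys, xs = np.nonzero(edges)
    if xs.size == 0:                        # no edges found: keep the whole frame
        return first_raw
    h, w = gray.shape
    x0 = max(int(xs.min()) - margin, 0)
    x1 = min(int(xs.max()) + margin, w)
    y0 = max(int(ys.min()) - margin, 0)
    y1 = min(int(ys.max()) + margin, h)
    return first_raw[y0:y1, x0:x1]          # first image data to be fused
```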
Step S106, calibrating the second original image data according to the parameters of the auxiliary device to obtain a marking result.
Optionally, calibrating the second original image data according to the parameters of the auxiliary device to obtain the marking result includes: collecting the parameters of the auxiliary device, where the parameters include a shooting function, a shooting area and a compensation range; and calibrating the parameters of the auxiliary device on the picture of the second original image data to obtain the marking result, where the marking result includes all of the calibrated second original image data.
Specifically, the embodiment of the present application involves, in addition to the main image data, a plurality of auxiliary image data whose functions, performances and operation modes differ, that is, auxiliary image data suited to different application scenarios, such as wide-angle images, infrared images and high-speed frame images. After acquiring the main image data, the auxiliary image data therefore need to be calibrated according to the marks of the auxiliary image devices, so as to find the auxiliary image data that is closest to the main image data and can best realize the desired function. In a specific implementation, calibrating the second original image data according to the parameters of the auxiliary device includes: collecting the parameters of the auxiliary device, where the parameters include a shooting function, a shooting area and a compensation range; and calibrating the parameters of the auxiliary device on the picture of the second original image data to obtain the marking result, where the marking result includes all of the calibrated second original image data.
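A minimal sketch of this calibration step follows, assuming each auxiliary device exposes exactly the three parameters named above; the field names, the dictionary layout of the marking result, and the text overlay are illustrative assumptions rather than the patent's prescribed format.

```python
from dataclasses import dataclass

import cv2

@dataclass
class AuxParams:
    shooting_function: str    # e.g. "infrared", "wide_angle", "high_speed"
    shooting_area: tuple      # (x, y, w, h) region covered by the device
    compensation_range: str   # e.g. "-2EV..+2EV"

def calibrate(second_raw, params: AuxParams):
    """Calibrate the device parameters onto the picture of the second
    original image data, yielding one entry of the marking result."""
    marked = second_raw.copy()
    label = f"{params.shooting_function} | {params.shooting_area} | {params.compensation_range}"
    cv2.putText(marked, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                0.6, (0, 255, 0), 1, cv2.LINE_AA)
    return {"frame": marked, "params": params}
```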
Step S108, generating second image data to be fused according to the marking result, and performing an image fusion operation on the first image data to be fused and the second image data to be fused to obtain an image fusion result.
Optionally, generating the second image data to be fused according to the marking result and performing the image fusion operation on the first image data to be fused and the second image data to be fused to obtain the image fusion result includes: generating the second image data to be fused according to the marking result and the preset rule; and fusing the second image data to be fused with the first image data to be fused using an image stitching algorithm to obtain the fusion result.
Specifically, after the second original image data and the marking result are obtained, the marking result needs to be screened by a preset screening rule generated from the requirement parameters of the application scene. That is, if the application scene is at night, an auxiliary device with an infrared night-vision function needs to be selected to cooperate with the main image data for display; if the application scene is a sports field with high-speed motion, an auxiliary device capable of high-speed frame shooting is needed to cooperate with the main image data for display. In a specific embodiment, generating the second image data to be fused according to the marking result and performing the image fusion operation on the first image data to be fused and the second image data to be fused to obtain the image fusion result includes: generating the second image data to be fused according to the marking result and the preset rule; and fusing the second image data to be fused with the first image data to be fused using an image stitching algorithm to obtain the fusion result.
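The sketch below illustrates this screening and fusion step. The scene-to-function mapping is invented for the example, and simple weighted blending stands in for the unspecified image stitching algorithm; none of this is the patent's prescribed implementation.

```python
import cv2

# Hypothetical preset screening rules: application scene -> required shooting function.
SCENE_RULES = {"night": "infrared", "sports": "high_speed"}

def fuse(first_to_fuse, marking_results, scene, alpha=0.7):
    """Screen the marking result by application scene, then fuse the
    selected auxiliary frame into the first image data to be fused."""
    wanted = SCENE_RULES.get(scene)
    selected = [m for m in marking_results
                if m["params"].shooting_function == wanted]
    if not selected:
        return first_to_fuse                 # no matching auxiliary device
    second_to_fuse = cv2.resize(             # align sizes before blending
        selected[0]["frame"],
        (first_to_fuse.shape[1], first_to_fuse.shape[0]))
    return cv2.addWeighted(first_to_fuse, alpha, second_to_fuse, 1 - alpha, 0)
```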
Through this embodiment, the technical problems that in the prior art the main and auxiliary images are merely arranged in rows and columns for display or simply stitched together, that auxiliary image data cannot be selected according to the application scene of the image, and that the efficiency and quality of the image fusion process are reduced, are solved.
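Chaining the hypothetical helpers sketched above gives a rough end-to-end picture of the method; all names here come from the sketches in this example, not from an API defined by the patent.

```python
frames = collect_raw_frames(main_index=0, aux_indices=(1,))
first_to_fuse = crop_core_area(frames["main"])
marking = [calibrate(frames["aux0"],
                     AuxParams("infrared", (0, 0, 1920, 1080), "-2EV..+2EV"))]
result = fuse(first_to_fuse, marking, scene="night")
```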
Example two
FIG. 2 is a block diagram of a primary and secondary image fusion apparatus according to an embodiment of the present application. As shown in FIG. 2, the apparatus includes:
and the acquisition module 20 is used for acquiring the first original image data of the main device and the second original image data of the auxiliary device.
Specifically, in order to solve the technical problem that in the prior art the fusion of main and auxiliary images merely arranges them in rows and columns for display, or simply stitches them together, so that auxiliary image data cannot be selected according to the application scene of the image and the efficiency and quality of the image fusion process are reduced, the acquisition module 20 first collects the first original image data from the main device and the second original image data from the auxiliary devices.
The cropping module 22 is configured to perform a cropping operation on the first original image data according to a preset rule, so as to obtain first image data to be fused.
Optionally, the cropping module includes: an identification unit configured to identify boundary information of the first original image data according to an edge identification model; an obtaining unit configured to obtain the preset rule according to the boundary information, where the preset rule is used to define a core data area of the first original image data; and a cropping unit configured to crop the first original image data using the preset rule to obtain the first image data to be fused.
Specifically, in the embodiment of the present application, the image data of the main image to be combined is obtained by collecting the first original image data. However, in order to combine the main image data with diversified auxiliary image data, the main image data needs to be adjusted first: its boundary may be trimmed to obtain fusible image data consisting mainly of the core image data, into which the processed auxiliary image data can be merged in subsequent processing, thereby improving the quality and effect of the combined image. For example, the cropping operation proceeds as described above: boundary information of the first original image data is identified by the edge identification model, the preset rule defining the core data area is obtained from that boundary information, and the first original image data is cropped with the preset rule to obtain the first image data to be fused.
The calibration module 24 is used for calibrating the second original image data according to the parameters of the auxiliary device to obtain a marking result.
Optionally, the calibration module includes: an acquisition unit configured to collect the parameters of the auxiliary device, where the parameters include a shooting function, a shooting area and a compensation range; and a calibration unit configured to calibrate the parameters of the auxiliary device on the picture of the second original image data to obtain the marking result, where the marking result includes all of the calibrated second original image data.
Specifically, the embodiment of the present application involves, in addition to the main image data, a plurality of auxiliary image data whose functions, performances and operation modes differ, that is, auxiliary image data suited to different application scenarios, such as wide-angle images, infrared images and high-speed frame images. After acquiring the main image data, the auxiliary image data therefore need to be calibrated according to the marks of the auxiliary image devices, so as to find the auxiliary image data that is closest to the main image data and can best realize the desired function. In a specific implementation, calibrating the second original image data according to the parameters of the auxiliary device includes: collecting the parameters of the auxiliary device, where the parameters include a shooting function, a shooting area and a compensation range; and calibrating the parameters of the auxiliary device on the picture of the second original image data to obtain the marking result, where the marking result includes all of the calibrated second original image data.
The fusion module 26 is configured to generate second image data to be fused according to the marking result, and to perform an image fusion operation on the first image data to be fused and the second image data to be fused to obtain an image fusion result.
Optionally, the fusion module includes: a generating unit configured to generate the second image data to be fused according to the marking result and the preset rule; and a fusion unit configured to fuse the second image data to be fused with the first image data to be fused using an image stitching algorithm to obtain the fusion result.
Specifically, after the second original image data and the marking result are obtained, the marking result needs to be screened by a preset screening rule generated from the requirement parameters of the application scene. That is, if the application scene is at night, an auxiliary device with an infrared night-vision function needs to be selected to cooperate with the main image data for display; if the application scene is a sports field with high-speed motion, an auxiliary device capable of high-speed frame shooting is needed to cooperate with the main image data for display. In a specific embodiment, generating the second image data to be fused according to the marking result and performing the image fusion operation on the first image data to be fused and the second image data to be fused to obtain the image fusion result includes: generating the second image data to be fused according to the marking result and the preset rule; and fusing the second image data to be fused with the first image data to be fused using an image stitching algorithm to obtain the fusion result.
Through this embodiment, the technical problems that in the prior art the main and auxiliary images are merely arranged in rows and columns for display or simply stitched together, that auxiliary image data cannot be selected according to the application scene of the image, and that the efficiency and quality of the image fusion process are reduced, are solved.
According to another aspect of the embodiments of the present application, a nonvolatile storage medium is further provided. The nonvolatile storage medium includes a stored program, and when the program runs, a device in which the nonvolatile storage medium is located is controlled to execute the primary and secondary image fusion method.
Specifically, the method comprises the following steps: collecting first original image data of a main device and second original image data of an auxiliary device; cropping the first original image data according to a preset rule to obtain first image data to be fused; calibrating the second original image data according to parameters of the auxiliary device to obtain a marking result; and generating second image data to be fused according to the marking result, and performing an image fusion operation on the first image data to be fused and the second image data to be fused to obtain an image fusion result. Optionally, the cropping operation includes: identifying boundary information of the first original image data according to an edge identification model; obtaining the preset rule according to the boundary information, where the preset rule is used to define a core data area of the first original image data; and cropping the first original image data using the preset rule to obtain the first image data to be fused. Optionally, the calibration includes: collecting the parameters of the auxiliary device, where the parameters include a shooting function, a shooting area and a compensation range; and calibrating the parameters of the auxiliary device on the picture of the second original image data to obtain the marking result, where the marking result includes all of the calibrated second original image data. Optionally, the fusion includes: generating the second image data to be fused according to the marking result and the preset rule; and fusing the second image data to be fused with the first image data to be fused using an image stitching algorithm to obtain the fusion result.
According to another aspect of the embodiments of the present application, an electronic device is also provided, including a processor and a memory. The memory stores computer readable instructions, and the processor is configured to run the computer readable instructions, which, when executed, perform the primary and secondary image fusion method.
Specifically, the method comprises the following steps: collecting first original image data of a main device and second original image data of an auxiliary device; cropping the first original image data according to a preset rule to obtain first image data to be fused; calibrating the second original image data according to parameters of the auxiliary device to obtain a marking result; and generating second image data to be fused according to the marking result, and performing an image fusion operation on the first image data to be fused and the second image data to be fused to obtain an image fusion result. Optionally, the cropping operation includes: identifying boundary information of the first original image data according to an edge identification model; obtaining the preset rule according to the boundary information, where the preset rule is used to define a core data area of the first original image data; and cropping the first original image data using the preset rule to obtain the first image data to be fused. Optionally, the calibration includes: collecting the parameters of the auxiliary device, where the parameters include a shooting function, a shooting area and a compensation range; and calibrating the parameters of the auxiliary device on the picture of the second original image data to obtain the marking result, where the marking result includes all of the calibrated second original image data. Optionally, the fusion includes: generating the second image data to be fused according to the marking result and the preset rule; and fusing the second image data to be fused with the first image data to be fused using an image stitching algorithm to obtain the fusion result.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the description of each embodiment has its own emphasis; for any part not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, FIG. 3 is a schematic diagram of the hardware structure of a terminal device according to an embodiment of the present application. As shown in FIG. 3, the terminal device may include an input device 30, a processor 31, an output device 32, a memory 33, and at least one communication bus 34. The communication bus 34 is used to enable communication connections between the elements. The memory 33 may include a high-speed RAM memory, and may further include a non-volatile memory (NVM), such as at least one magnetic disk memory, in which various programs may be stored for performing various processing functions and implementing the method steps of the present embodiment.
Alternatively, the processor 31 may be implemented as, for example, a central processing unit (CPU), an application specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a controller, a microcontroller, a microprocessor or other electronic components, and the processor 31 is coupled to the input device 30 and the output device 32 through wired or wireless connections.
Alternatively, the input device 30 may include a variety of input devices, for example at least one of a user-oriented user interface, a device-oriented device interface, a programmable software interface, a camera, and a sensor. Optionally, the device-oriented device interface may be a wired interface for data transmission between devices, or a hardware insertion interface (such as a USB interface or a serial port) for data transmission between devices. Optionally, the user-oriented user interface may be, for example, control keys, a voice input device for receiving voice input, and a touch sensing device (e.g., a touch screen or touch pad with touch sensing functionality) for receiving user touch input. Optionally, the programmable software interface may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip. Optionally, a transceiver with a communication function may also be included, such as a radio frequency transceiver chip, a baseband processing chip, or a transceiver antenna. An audio input device such as a microphone may receive voice data. The output device 32 may include a display, an audio output, and the like.
In this embodiment, the processor of the terminal device may include the functions for executing each module of the data processing apparatus in each device described above; the specific functions and technical effects may be found in the above embodiments and are not repeated here.
FIG. 4 is a schematic diagram of the hardware structure of a terminal device according to another embodiment of the present application. FIG. 4 is a specific embodiment of the implementation of FIG. 3. As shown in FIG. 4, the terminal device of this embodiment includes a processor 41 and a memory 42.
The processor 41 executes the computer program code stored in the memory 42 to implement the methods of the above-described embodiments.
The memory 42 is configured to store various types of data to support operation at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures and video. The memory 42 may include a random access memory (RAM) and may also include a non-volatile memory, such as at least one disk memory.
Optionally, a processor 41 is provided in the processing assembly 40. The terminal device may further include: a communication component 43, a power supply component 44, a multimedia component 45, an audio component 46, an input/output interface 47 and/or a sensor component 48. The components and the like specifically included in the terminal device are set according to actual requirements, which are not limited in this embodiment.
The processing component 40 generally controls the overall operation of the terminal device. The processing component 40 may include one or more processors 41 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 40 may include one or more modules that facilitate interactions between the processing component 40 and other components. For example, processing component 40 may include a multimedia module to facilitate interaction between multimedia component 45 and processing component 40.
The power supply assembly 44 provides power to the various components of the terminal device. Power supply components 44 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for terminal devices.
The multimedia component 45 includes a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a liquid crystal display (LCD) and a touch panel (TP). If the display screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation.
The audio component 46 is configured to output and/or input audio signals. For example, the audio component 46 includes a Microphone (MIC) configured to receive external audio signals when the terminal device is in an operational mode, such as a speech recognition mode. The received audio signals may be further stored in the memory 42 or transmitted via the communication component 43. In some embodiments, audio assembly 46 further includes a speaker for outputting audio signals.
The input/output interface 47 provides an interface between the processing assembly 40 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: volume button, start button and lock button.
The sensor assembly 48 includes one or more sensors for providing status assessment of various aspects for the terminal device. For example, the sensor assembly 48 may detect the open/closed state of the terminal device, the relative positioning of the assembly, the presence or absence of user contact with the terminal device. The sensor assembly 48 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor assembly 48 may also include a camera or the like.
The communication component 43 is configured to facilitate communication between the terminal device and other devices in a wired or wireless manner. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot, where the SIM card slot is used to insert a SIM card, so that the terminal device may log into a GPRS network, and establish communication with a server through the internet.
From the above, it will be appreciated that the communication component 43, the audio component 46, the input/output interface 47 and the sensor component 48 referred to in the embodiment of FIG. 4 may be implemented as the input device in the embodiment of FIG. 3.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present application. It should be noted that modifications and adaptations may be made by those skilled in the art without departing from the principles of the present application, and such modifications and adaptations shall also fall within the scope of the present application.

Claims (10)

1. A main and auxiliary image fusion method, characterized by comprising the following steps:
collecting first original image data of a main device and second original image data of an auxiliary device;
cropping the first original image data according to a preset rule to obtain first image data to be fused;
calibrating the second original image data according to parameters of the auxiliary device to obtain a marking result;
and generating second image data to be fused according to the marking result, and performing image fusion operation on the first image data to be fused and the second image data to be fused to obtain an image fusion result.
2. The method according to claim 1, wherein performing the cropping operation on the first original image data according to the preset rule to obtain the first image data to be fused comprises:
identifying boundary information of the first original image data according to an edge identification model;
acquiring the preset rule according to the boundary information, wherein the preset rule is used for defining a core data area of the first original image data;
and cropping the first original image data by using the preset rule to obtain the first image data to be fused.
3. The method according to claim 1, wherein calibrating the second original image data according to the parameters of the auxiliary device to obtain a marking result comprises:
collecting the parameters of the auxiliary device, wherein the parameters of the auxiliary device comprise: a shooting function, a shooting area and a compensation range;
and calibrating the parameters of the auxiliary device on the picture of the second original image data to obtain the marking result, wherein the marking result comprises all of the calibrated second original image data.
4. The method according to claim 1, wherein generating the second image data to be fused according to the marking result and performing the image fusion operation on the first image data to be fused and the second image data to be fused to obtain the image fusion result comprises:
generating the second image data to be fused according to the marking result and the preset rule;
and fusing the second image data to be fused with the first image data to be fused by using an image stitching algorithm to obtain the fusion result.
5. A primary and secondary image fusion apparatus, comprising:
the acquisition module is used for collecting the first original image data of the main device and the second original image data of the auxiliary device;
the cropping module is used for cropping the first original image data according to a preset rule to obtain first image data to be fused;
the calibration module is used for calibrating the second original image data according to the parameters of the auxiliary device to obtain a marking result;
and the fusion module is used for generating second image data to be fused according to the marking result, and carrying out image fusion operation on the first image data to be fused and the second image data to be fused to obtain an image fusion result.
6. The apparatus of claim 5, wherein the cropping module comprises:
an identification unit configured to identify boundary information of the first original image data according to an edge identification model;
an obtaining unit, configured to obtain the preset rule according to the boundary information, where the preset rule is used to define a core data area of the first original image data;
and the cropping unit is used for cropping the first original image data by using the preset rule to obtain the first image data to be fused.
7. The apparatus of claim 5, wherein the calibration module comprises:
the acquisition unit is used for collecting the parameters of the auxiliary device, wherein the parameters of the auxiliary device comprise: a shooting function, a shooting area and a compensation range;
and the calibration unit is used for calibrating the parameters of the auxiliary device on the picture of the second original image data to obtain the marking result, wherein the marking result comprises all of the calibrated second original image data.
8. The apparatus of claim 5, wherein the fusion module comprises:
the generating unit is used for generating the second image data to be fused according to the marking result and the preset rule;
and the fusion unit is used for fusing the second image data to be fused with the first image data to be fused by using an image stitching algorithm to obtain the fusion result.
9. A non-volatile storage medium, characterized in that the non-volatile storage medium comprises a stored program, wherein, when the program runs, a device in which the non-volatile storage medium is located is controlled to perform the method of any one of claims 1 to 4.
10. An electronic device comprising a processor and a memory, wherein the memory stores computer readable instructions to be executed by the processor, and the computer readable instructions, when executed, perform the method of any one of claims 1 to 4.
CN202310806625.6A, priority and filing date 2023-07-03: Main and auxiliary image fusion method and device. Active; granted as CN116757983B.

Priority Applications (1)

Application Number: CN202310806625.6A (granted as CN116757983B); Priority Date: 2023-07-03; Filing Date: 2023-07-03; Title: Main and auxiliary image fusion method and device

Applications Claiming Priority (1)

Application Number: CN202310806625.6A (granted as CN116757983B); Priority Date: 2023-07-03; Filing Date: 2023-07-03; Title: Main and auxiliary image fusion method and device

Publications (2)

CN116757983A: published 2023-09-15
CN116757983B: published 2024-02-06

Family

Family ID: 87956978

Family Applications (1)

CN202310806625.6A (granted as CN116757983B): Main and auxiliary image fusion method and device; priority date 2023-07-03; filing date 2023-07-03; status: Active

Country Status (1)

CN: CN116757983B

Citations (5)

* Cited by examiner, † Cited by third party
CN110517216A * (priority 2019-08-30, published 2019-11-29, 的卢技术有限公司): SLAM fusion method and system based on multiple camera types
CN112770042A * (priority 2019-11-05, published 2021-05-07, RealMe重庆移动通信有限公司): Image processing method and device, computer readable medium, wireless communication terminal
WO2021135601A1 * (priority 2019-12-31, published 2021-07-08, 华为技术有限公司): Auxiliary photographing method and apparatus, terminal device, and storage medium
CN114972023A * (priority 2022-04-21, published 2022-08-30, 合众新能源汽车有限公司): Image stitching processing method, device, equipment and computer storage medium
CN115761600A * (priority 2022-12-22, published 2023-03-07, 北京百度网讯科技有限公司): Video fusion circuit, video fusion method, video fusion device, electronic apparatus, and computer-readable medium

Also Published As

CN116757983B: published 2024-02-06

Similar Documents

Publication Title
CN110087123B (en) Video file production method, device, equipment and readable storage medium
CN107025419B (en) Fingerprint template inputting method and device
CN106970754A (en) The method and device of screenshotss processing
CN105338399A (en) Image acquisition method and device
CN105357449A (en) Shooting method and device, and image processing method and apparatus
CN116261044B (en) Intelligent focusing method and device for hundred million-level cameras
CN116614453B (en) Image transmission bandwidth selection method and device based on cloud interconnection
CN116757983B (en) Main and auxiliary image fusion method and device
CN104780465A (en) Frame parameter adjusting method and device
CN114866702A (en) Multi-auxiliary linkage camera shooting technology-based border monitoring and collecting method and device
CN111064886B (en) Shooting method of terminal equipment, terminal equipment and storage medium
CN115345808B (en) Picture generation method and device based on multi-element information acquisition
CN116579964B (en) Dynamic frame gradual-in gradual-out dynamic fusion method and device
CN116579965B (en) Multi-image fusion method and device
CN116758165B (en) Image calibration method and device based on array camera
CN116228593B (en) Image perfecting method and device based on hierarchical antialiasing
CN115511735B (en) Snow field gray scale picture optimization method and device
CN116485912B (en) Multi-module coordination method and device for light field camera
CN116389915B (en) Method and device for reducing flicker of light field camera
CN116468883B (en) High-precision image data volume fog recognition method and device
CN116302041B (en) Optimization method and device for light field camera interface module
CN117871419A (en) Air quality detection method and device based on optical camera holder control
CN116757981A (en) Multi-terminal image fusion method and device
CN116452481A (en) Multi-angle combined shooting method and device
CN117896625A (en) Picture imaging method and device based on low-altitude high-resolution analysis

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant