Summary of the invention
Embodiments of the present disclosure propose an image processing method and apparatus.
In a first aspect, embodiments of the present disclosure provide an image processing method, the method comprising: obtaining at least two frames of images, wherein the at least two frames of images present a same scene; selecting a target image from the at least two frames of images; mapping the image coordinates of the non-target images among the at least two frames of images into the target image to obtain coordinate-converted images; fusing the coordinate-converted images with the target image; and generating a fused image based on the fusion result.
In some embodiments, mapping the image coordinates of the non-target images among the at least two frames of images into the target image comprises: extracting key points of each frame of the at least two frames of images; for each frame among the non-target images, matching the key points of that image with the key points of the target image; generating, based on the matching result, a homography matrix for converting the image coordinates of the key points in that image into the target image; and mapping, based on the homography matrix, the image coordinates of that image into the target image.
In some embodiments, fusing the coordinate-converted images with the target image comprises: performing mean-value calculation on the pixel values at identical coordinate positions in the coordinate-converted images and the target image; and generating the fused image based on the fusion result comprises: generating the fused image based on the result of the mean-value calculation.
In some embodiments, generating the fused image based on the result of the mean-value calculation comprises: determining a processed image obtained from the mean-value calculation; for each frame among the coordinate-converted images and the target image, determining the difference between that image and the processed image, and setting a weight for that image based on the difference result; fusing the coordinate-converted images with the target image based on the obtained weight of each frame and the pixel values of each frame among the coordinate-converted images and the target image; and generating the fused image based on the fusion result.
In some embodiments, determining the difference between the image and the processed image comprises: comparing the pixel value of each pixel of the image with the pixel value of the pixel at the identical image coordinate position in the processed image, and generating, based on the comparison results, the differences between the pixels of the image and the pixels of the processed image; and setting a weight for the image based on the difference result comprises: setting a weight for the image based on the differences.
In a second aspect, embodiments of the present disclosure provide an image processing apparatus, the apparatus comprising: an acquiring unit configured to obtain at least two frames of images, wherein the at least two frames of images present a same scene; a selecting unit configured to select a target image from the at least two frames of images; a mapping unit configured to map the image coordinates of the non-target images among the at least two frames of images into the target image to obtain coordinate-converted images; a fusing unit configured to fuse the coordinate-converted images with the target image; and a generating unit configured to generate a fused image based on the fusion result.
In some embodiments, the mapping unit is further configured to: extract key points of each frame of the at least two frames of images; for each frame among the non-target images, match the key points of that image with the key points of the target image; generate, based on the matching result, a homography matrix for converting the image coordinates of the key points in that image into the target image; and map, based on the homography matrix, the image coordinates of that image into the target image.
In some embodiments, the fusing unit comprises: a calculating subunit configured to perform mean-value calculation on the pixel values at identical coordinate positions in the coordinate-converted images and the target image; and the generating unit comprises: a generating subunit configured to generate the fused image based on the result of the mean-value calculation.
In some embodiments, the generating subunit comprises: a determining module configured to determine a processed image obtained from the mean-value calculation; a setting module configured to, for each frame among the coordinate-converted images and the target image, determine the difference between that image and the processed image and set a weight for that image based on the difference result; and a generating module configured to fuse the coordinate-converted images with the target image based on the obtained weight of each frame and the pixel values of each frame among the coordinate-converted images and the target image, and to generate the fused image based on the fusion result.
In some embodiments, the setting module is further configured to: compare the pixel value of each pixel of the image with the pixel value of the pixel at the identical image coordinate position in the processed image; generate, based on the comparison results, the differences between the pixels of the image and the pixels of the processed image; and set a weight for the image based on the differences.
In a third aspect, embodiments of the present disclosure provide a terminal device, the terminal device comprising: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, the computer program, when executed by a processor, implementing the method described in any implementation of the first aspect.
The image processing method and apparatus provided by embodiments of the present disclosure obtain at least two frames of images shot of a same scene, fuse the at least two frames of images, and generate a fused image based on the fusion result, which can reduce the noise generated during image shooting and improve the imaging effect. Image coordinate mapping is performed on the at least two frames of images so that the images correspond to one another at identical positions, which can improve the image fusion speed and the image fusion effect.
Detailed description of embodiments
The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the relevant invention, not to limit it. It should also be noted that, for ease of description, only the parts relevant to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present disclosure and the features therein may be combined with one another. The present disclosure is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary architecture 100 to which embodiments of the image processing method or the image processing apparatus of the present disclosure may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
Various client applications may be installed on the terminal devices 101, 102 and 103, for example image capturing applications, image processing applications, search applications, photo retouching applications, instant messaging applications, and the like. The terminal devices 101, 102 and 103 may interact with the server 105 via the network 104 to receive or send messages.
The terminal devices 101, 102 and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having an image capturing function, or various electronic devices capable of receiving user operations, including but not limited to cameras, smartphones, tablet computers, e-book readers, laptop computers, desktop computers and the like. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a background server supporting the client applications installed on the terminal devices 101, 102 and 103. The server 105 may provide, for the client applications installed on the terminal devices 101, 102 and 103, the downloading and use of various functions. By downloading image processing functions (such as image deduplication) from the server that provides support for them, the client applications installed on the terminal devices 101, 102 and 103 can use the corresponding image processing functions.
It should be noted that the server 105 may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be noted that the image processing method provided by embodiments of the present disclosure is executed by the terminal devices 101, 102 and 103. Correspondingly, the image processing apparatus may be provided in the terminal devices 101, 102 and 103.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative; any number of terminal devices, networks and servers may be provided according to implementation needs. In the case where the data used during image processing (such as certain image processing functions) does not need to be obtained remotely, the above system architecture may include no network and comprise only the terminal devices.
With continued reference to Fig. 2, a process 200 of an embodiment of the image processing method according to the present disclosure is shown. The image processing method includes the following steps.
Step 201: at least two frames of images are obtained.
In this embodiment, the execution body of the image processing method (for example, the terminal devices 101, 102, 103 or the server 105 shown in Fig. 1) may have a capture device installed, or may be connected to a capture device. The at least two frames of images may be sent to the execution body after being shot by the capture device. Alternatively, the at least two frames of images may be stored locally in advance, and the execution body may obtain them via path information indicating the positions where the images are stored.
Here, the scenes presented by the at least two frames of images are identical. In other words, the presented objects, the shooting background and other such information of the at least two frames of images are all the same. As an example, the at least two frames of images are images shot of a building A.
Step 202: a target image is selected from the at least two frames of images.
In this embodiment, based on the at least two frames of images obtained in step 201, the execution body may select one frame from the at least two frames of images as the target image.
Specifically, quality detection may be performed on the at least two frames of images, and the image with the highest quality may be selected as the target image. The quality detection may include, but is not limited to, detection of the color saturation of an image, detection of the position at which the target object is presented in an image, and the like. Here, the execution body may first calculate the pixel values of each image and determine the color saturation of each image based on the calculated pixel values. Then, the calculated color saturation value of each image is compared with a preset optimal saturation value, and based on the comparison results, the image whose color saturation is closest to the preset optimal saturation value is taken as the target image. Alternatively, the execution body may detect the objects presented in the images and determine the position of the target object in each image. Specifically, the distance of the presented target object from the image center point, and the proportion of the image occupied by the presented target object, may be calculated. Then, the images in which the proportion occupied by the presented target object is greater than a preset threshold are selected, and from among those images, the image in which the target object is nearest to the image center point is selected as the target image.
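The saturation-based selection described above can be sketched as follows. This is a minimal illustration, not the disclosure's implementation: the per-pixel saturation measure `(max - min) / max` over the RGB channels and the default `ideal_saturation` value are assumptions introduced here.

```python
import numpy as np

def select_target(frames, ideal_saturation=0.6):
    """Return the index of the frame whose mean color saturation is
    closest to a preset optimal saturation value."""
    def mean_saturation(img):
        rgb = img.astype(np.float64)
        mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
        # per-pixel saturation, approximated as (max - min) / max
        sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-9), 0.0)
        return sat.mean()
    deviations = [abs(mean_saturation(f) - ideal_saturation) for f in frames]
    return int(np.argmin(deviations))
```

A frame of uniform gray (saturation 0) would thus lose to a vividly colored frame when the preset optimal value is high, and win when it is near zero.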
Step 203: the image coordinates of the non-target images among the at least two frames of images are mapped into the target image to obtain coordinate-converted images.
In this embodiment, when a same scene is shot, situations such as unstable shooting caused by the influence of the surrounding environment, or rotation of the capture device, cause the positions of a same object contained in the multiple images to differ. Therefore, coordinate mapping needs to be performed on each image. In this way, it can be avoided that, during image fusion, a same object located at different image coordinate positions in different images prevents the pixel-value calculation from being carried out accurately and degrades the imaging effect of the fused image.
Specifically, the focus point of the target image and the focus points of the non-target images may be determined respectively. In general, when shooting a scene, a capture device focuses based on the shot scene: an object to be focused on may be set, and the capture device then shoots with focus on that object. After the capture device has shot an image, the focus point is usually presented, and the coordinates of the focus point are usually camera coordinates. Since the focus points are obtained by focusing on a specified object, the focus point of each image indicates a same object. The execution body may then compare the focus point of each remaining image with the focus point of the target image, and determine the deviation between the focus point of each non-target image and the focus point of the target image. Based on the deviation, the camera-coordinate mapping relation between the non-target image and the target image is determined. For example, a transfer matrix for converting the camera coordinates of the non-target image into the camera coordinates of the target image may be determined based on the deviation. Then, based on the transfer matrix and the conversion relation between camera coordinates and image coordinates, the image coordinates of the non-target image can be mapped into the image coordinates of the target image, obtaining the coordinate-converted image.
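Under the simplifying assumption that the focus-point deviation reduces to a pure integer pixel translation (the general transfer matrix of the disclosure is not reconstructed here), the mapping can be sketched as:

```python
import numpy as np

def align_by_focus_point(image, focus_xy, target_focus_xy):
    """Translate `image` so that its focus point lands on the focus
    point of the target image; uncovered regions are left as zeros."""
    dx = target_focus_xy[0] - focus_xy[0]
    dy = target_focus_xy[1] - focus_xy[1]
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    # destination window for a shift by (dx, dy), clipped to the frame
    x0, x1 = max(0, dx), min(w, w + dx)
    y0, y1 = max(0, dy), min(h, h + dy)
    out[y0:y1, x0:x1] = image[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
    return out
```

In the general case of rotation or perspective change, the transfer matrix would not be a translation, which is why the key-point approach below is given as an alternative.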
In some optional implementations of this embodiment, mapping the image coordinates of the non-target images among the at least two frames of images into the target image may specifically include: extracting key points of each frame of the at least two frames of images; for each frame among the non-target images, matching the key points of that image with the key points of the target image; generating, based on the matching result, a homography matrix for converting the image coordinates of the key points in that image into the target image; and mapping, based on the homography matrix, the image coordinates of that image into the target image.
Specifically, the key points may be extracted, for example, by SIFT-based key point extraction. SIFT is a local image feature descriptor based on scale space that remains invariant to image scaling, rotation and even affine transformation. First, points of interest that are local extrema both in scale space and in the two-dimensional image space may be extracted; the low-energy, unstable and mistaken points of interest are then filtered out, finally yielding stable feature points. The feature points are then described; the description of a feature point may include the distribution of the feature point's orientations and a 128-dimensional vector description. The key points of the target image are thereby obtained based on the determined feature points and their descriptions. The key points of each remaining non-target frame can be determined in the same way. Then, for each frame among the non-target images, the determined key points of that image are matched against the key points of the target image. Here, key point matching may specifically be realized by calculating the Euclidean distance between the 128-dimensional vectors of the key points of that image and those of the target image: the smaller the Euclidean distance, the higher the matching degree, and when the Euclidean distance is less than a set threshold, a successful match may be determined. Then, based on the matching result between the key points of that image and the target image, the homography matrix that maps the image coordinates of the key points in that image into the target image is determined. Finally, according to the calculated homography matrix, the image coordinates of each pixel in that image may be multiplied by the homography matrix, so that the image coordinates of that image are mapped into the target image.
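As a sketch of the coordinate-mapping step, assuming the SIFT matching above has already produced at least four key-point correspondences, the homography can be estimated by the direct linear transform (DLT) and applied in homogeneous coordinates. Production code would more likely use a robust estimator such as OpenCV's `findHomography` with RANSAC; the DLT below is a minimal, outlier-free illustration.

```python
import numpy as np

def homography_from_matches(src_pts, dst_pts):
    """Estimate the 3x3 homography H (dst ~ H @ src) from (N, 2) arrays
    of matched key-point coordinates, N >= 4, via the DLT."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the right singular vector of the smallest
    # singular value of the stacked constraint matrix
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=np.float64))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def map_coords(h, pts):
    """Apply homography `h` to (N, 2) image coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = (h @ pts_h.T).T
    return mapped[:, :2] / mapped[:, 2:3]
```

Multiplying each pixel's homogeneous image coordinates by the matrix, as the text describes, is exactly what `map_coords` does.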
Step 204: the coordinate-converted images and the target image are fused.
In this embodiment, based on the coordinate-converted images obtained in step 203, the execution body may perform image fusion on the target image and the coordinate-converted images.
Specifically, for each non-target image, based on the transition matrix calculated between that non-target image and the target image, the image coordinates of the key points shared with the target image are mapped into the image coordinates of the target image to obtain the coordinate-converted image. Then, mean-value calculation is performed on the pixel values at identical image coordinate positions in the target image and the images obtained after coordinate conversion.
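The mean-value calculation over pixel values at identical coordinate positions amounts to a per-pixel average across the target image and the coordinate-converted images, which can be sketched as:

```python
import numpy as np

def fuse_mean(aligned_frames):
    """Average the pixel values at each shared coordinate position of
    the target image and the coordinate-converted images."""
    stack = np.stack([f.astype(np.float64) for f in aligned_frames])
    return stack.mean(axis=0)
```

Averaging in floating point before any conversion back to 8-bit avoids accumulating rounding error across frames.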
Step 205: a fused image is generated based on the fusion result.
In this embodiment, according to the fusion of the coordinate-converted images and the target image in step 204, the fused image can be generated based on the fusion result.
Specifically, based on the result of the mean-value calculation on the pixel values, the post-mean-calculation image can be generated and taken as the fusion image of the at least two frames of images. Compared with processing a single frame to remove noise, performing image processing in this way can make the processed image closer to the real scene and improve scene restoration.
With further reference to Fig. 3, an application scenario diagram of the image processing method of the present disclosure is shown.
In the application scenario shown in Fig. 3, an electronic device obtains two frames of images, image A and image B, which are shot of a same scene. As can be seen from the figure, the picture presented by image A contains an image noise region a, and the picture presented by image B contains an image noise region b. Then, with image A as the target image, the image coordinates of image B are mapped into image A to obtain a coordinate-converted image C. Finally, image A and image C are fused to obtain a fused image D. As can be seen from the figure, in the image D obtained after fusing image A with image C, the image noise present in image A and image B is reduced, improving the image quality.
The image processing method provided by embodiments of the present disclosure obtains at least two frames of images shot of a same scene, fuses the at least two frames of images, and generates a fused image based on the fusion result, which can reduce the noise generated during image shooting and improve the imaging effect. Image coordinate mapping is performed on the at least two frames of images so that the images correspond to one another at identical positions, which can improve the image fusion speed and the image fusion effect.
With further reference to Fig. 4, a process 400 of another embodiment of the image processing method according to the present disclosure is shown. The image processing method includes the following steps.
Step 401: at least two frames of images are obtained.
In this embodiment, the execution body of the image processing method (for example, the terminal devices 101, 102, 103 or the server 105 shown in Fig. 1) may have a capture device installed, or may be connected to a capture device. The at least two frames of images may be sent to the execution body after being shot by the capture device. Alternatively, the at least two frames of images may be stored locally in advance, and the execution body may obtain them via path information indicating the positions where the images are stored.
Here, the scenes presented by the at least two frames of images are identical. In other words, the presented objects, the shooting background and other such information of the at least two frames of images are all the same. As an example, the at least two frames of images are images shot of a building A.
Step 402: a target image is selected from the at least two frames of images.
Step 403: the image coordinates of the non-target images among the at least two frames of images are mapped into the target image to obtain coordinate-converted images.
The specific implementations of step 401, step 402 and step 403, and the beneficial effects they bring, may refer to the related descriptions of step 201, step 202 and step 203 shown in Fig. 2, and are not repeated here.
Step 404: mean-value calculation is performed on the pixel values at identical coordinate positions in the coordinate-converted images and the target image.
In this embodiment, according to the coordinate-converted image determined in step 403 for each non-target image, the execution body may perform mean-value calculation on the pixel values of the pixels at identical image coordinate positions in the target image and the coordinate-converted images, to obtain a new pixel value corresponding to each image coordinate position.
Step 405: a processed image obtained from the mean-value calculation is determined.
In this embodiment, a new processed image can be generated according to the new pixel value obtained in step 404 for each image coordinate position.
Step 406: for each frame among the coordinate-converted images and the target image, the difference between that image and the processed image is determined, and a weight is set for that image based on the difference result.
In this embodiment, determining the difference between the image and the processed image may specifically include the following. For the target image, the image is compared directly with the processed image, and the difference between that frame and the processed image is determined based on the comparison result. For a non-target image, the coordinate-converted image corresponding to that frame is compared with the processed image, and the difference between that frame and the processed image is determined based on the comparison result. Since the image coordinates of the feature points in an image that has not undergone coordinate conversion deviate from the image coordinates of the same feature points in the processed image, a direct comparison would lead to a large error in the comparison result and reduce the accuracy of the determined difference. Determining the difference between a non-target image and the processed image only after converting the coordinates of the non-target image improves the accuracy of the determined difference.
Here, for each frame participating in image fusion, the differences between the pixel values of that image and of the processed image at identical coordinate positions may first be determined, and the sum of all the differences obtained for that image may then be determined. Next, the exponential of this sum of differences is determined as a first value. The exponentials of the difference sums obtained for each frame participating in image fusion are added together as a second value. Finally, the ratio of the first value to the second value is taken as the weight corresponding to the coordinate-converted image of that frame. Here, the images participating in image fusion specifically include the target image and the coordinate-converted image corresponding to each non-target frame.
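Read literally, this weighting is a softmax over the per-frame difference sums. The sketch below follows that reading; the use of absolute pixel differences is an assumption, since the disclosure does not state how signed differences are accumulated.

```python
import numpy as np

def frame_weights(frames, processed):
    """First value: exp(sum of a frame's pixel differences from the
    processed image); second value: the sum of those exponentials over
    all frames; weight = first value / second value."""
    diff_sums = np.array([np.abs(f.astype(np.float64) - processed).sum()
                          for f in frames])
    e = np.exp(diff_sums - diff_sums.max())  # shift for numerical stability
    return e / e.sum()
```

Subtracting the maximum before exponentiating leaves the ratios unchanged while preventing overflow when the difference sums are large.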
Step 407: the coordinate-converted images and the target image are fused based on the obtained weight of each frame and the pixel values of each frame among the coordinate-converted images and the target image.
In this embodiment, based on the weight corresponding to each frame, the pixel values of that frame may be multiplied by the weight to obtain a pre-processed image. Then, mean-value calculation is performed on the pixel values at identical image coordinate positions of all the obtained pre-processed images.
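A minimal sketch of this step, multiplying each frame by its weight and then averaging the pre-processed images at identical coordinates:

```python
import numpy as np

def fuse_weighted(frames, weights):
    """Scale each frame by its weight to obtain pre-processed images,
    then take the per-pixel mean of the pre-processed images."""
    pre = [w * f.astype(np.float64) for f, w in zip(frames, weights)]
    return np.stack(pre).mean(axis=0)
```

Note that, as described, the weighted frames are averaged rather than summed; with weights that sum to 1, this scales the output by the reciprocal of the frame count relative to a plain weighted sum.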
Step 408: a fused image is generated based on the fusion result.
In this embodiment, the mean-value calculation result can be obtained by performing mean-value calculation on the pixel values at identical image coordinate positions in step 407. Then, a new image is generated according to each pixel value obtained from the mean-value calculation, and the generated new image is taken as the fusion image.
As can be seen from Fig. 4, unlike the embodiment shown in Fig. 2, this embodiment highlights the step of setting a weight for each frame. The proportion in which an image participates in the fusion can be determined based on its weight, avoiding the situation in which an image with excessive noise or a large color difference participates in the fusion in too large a proportion, leaving large noise in a certain region of the resulting fusion image. The fusion image obtained by the method shown in this embodiment can effectively reduce noise and thereby improve the imaging effect.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an image processing apparatus. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied in various electronic devices.
As shown in Fig. 5, the image processing apparatus 500 provided in this embodiment includes an acquiring unit 501, a selecting unit 502, a mapping unit 503, a fusing unit 504 and a generating unit 505. The acquiring unit 501 is configured to obtain at least two frames of images, wherein the at least two frames of images present a same scene; the selecting unit 502 is configured to select a target image from the at least two frames of images; the mapping unit 503 is configured to map the image coordinates of the non-target images among the at least two frames of images into the target image to obtain coordinate-converted images; the fusing unit 504 is configured to fuse the coordinate-converted images with the target image; and the generating unit 505 is configured to generate a fused image based on the fusion result.
In this embodiment, the specific processing of the acquiring unit 501, the selecting unit 502, the mapping unit 503, the fusing unit 504 and the generating unit 505 of the image processing apparatus 500, and the technical effects brought thereby, may refer respectively to the related descriptions of step 201, step 202, step 203, step 204 and step 205 in the corresponding embodiment of Fig. 2, and are not repeated here.
In some optional implementations of this embodiment, the mapping unit 503 is further configured to: extract key points of each frame of the at least two frames of images; for each frame among the non-target images, match the key points of that image with the key points of the target image; generate, based on the matching result, a homography matrix for converting the image coordinates of the key points in that image into the target image; and map, based on the homography matrix, the image coordinates of that image into the target image.
In some optional implementations of this embodiment, the fusing unit 504 comprises: a calculating subunit (not shown) configured to perform mean-value calculation on the pixel values at identical coordinate positions in the coordinate-converted images and the target image; and the generating unit 505 comprises: a generating subunit (not shown) configured to generate the fused image based on the result of the mean-value calculation.
In some optional implementations of this embodiment, the generating subunit (not shown) comprises: a determining module (not shown) configured to determine a processed image obtained from the mean-value calculation; a setting module (not shown) configured to, for each frame among the coordinate-converted images and the target image, determine the difference between that image and the processed image and set a weight for that image based on the difference result; and a generating module (not shown) configured to fuse the coordinate-converted images with the target image based on the obtained weight of each frame and the pixel values of each frame among the coordinate-converted images and the target image, and to generate the fused image based on the fusion result.
In some optional implementations of this embodiment, the setting module (not shown) is further configured to: compare the pixel value of each pixel of the image with the pixel value of the pixel at the identical image coordinate position in the processed image; generate, based on the comparison results, the differences between the pixels of the image and the pixels of the processed image; and set a weight for the image based on the differences.
The image processing apparatus provided by embodiments of the present disclosure obtains at least two frames of images shot of a same scene, fuses the at least two frames of images, and generates a fused image based on the fusion result, which can reduce the noise generated during image shooting and improve the imaging effect. Image coordinate mapping is performed on the at least two frames of images so that the images correspond to one another at identical positions, which can improve the image fusion speed and the image fusion effect.
Referring now to Fig. 6, a structural schematic diagram of an electronic device (for example, the terminal device in Fig. 1) 600 suitable for implementing embodiments of the present disclosure is shown. Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and vehicle-mounted terminals (such as in-vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers. The terminal device shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of embodiments of the present disclosure.
As shown in Fig. 6, the electronic device 600 may include a processing device (for example, a central processing unit, a graphics processor, etc.) 601, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing device 601, the ROM 602 and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 608 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 6 shows the electronic device 600 having various devices, it should be understood that it is not required to implement or possess all of the devices shown. More or fewer devices may alternatively be implemented or provided. Each block shown in Fig. 6 may represent one device, or may represent multiple devices as needed.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, the embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing device 601, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium described in the embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the embodiments of the present disclosure, the computer-readable storage medium may be any tangible medium containing or storing a program, and the program may be used by, or used in combination with, an instruction execution system, apparatus or device. In the embodiments of the present disclosure, the computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and the computer-readable medium can send, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The above-mentioned computer-readable medium may be included in the above-mentioned terminal device, or may exist alone without being assembled into the terminal device. The above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: obtain at least two frames of images, the at least two frames of images presenting the same scene; select a target image from the at least two frames of images; map the image coordinates of the non-target images among the at least two frames of images into the target image, to obtain coordinate-converted images; fuse the coordinate-converted images and the target image; and generate a fused image based on the fusion result.
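The coordinate mapping step above relies on a homography between each non-target image and the target image. The following NumPy sketch illustrates only the application of such a homography to image coordinates; in the disclosure the 3x3 matrix would be estimated from matched key points, whereas here a known translation-only matrix is assumed for illustration.

```python
import numpy as np

def warp_points(h_matrix, points):
    """Map image coordinates into the target image's coordinate frame
    using a 3x3 homography matrix."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous coordinates
    mapped = pts @ h_matrix.T                             # apply the homography
    return mapped[:, :2] / mapped[:, 2:3]                 # back to Cartesian coordinates

# Illustrative homography: a pure translation by (5, -3). A matrix
# estimated from key-point matches would generally be a full projective map.
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0,  1.0]])
corners = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 50.0]])
mapped = warp_points(H, corners)
```

Once every non-target frame's coordinates have been mapped this way, pixels at the same position in all frames depict the same scene point, which is what makes the subsequent per-position fusion meaningful.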
The computer program code for executing the operations of the embodiments of the present disclosure may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to the various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment or a part of code, and the module, the program segment or the part of code contains one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the accompanying drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, or may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that executes the specified functions or operations, or may be implemented by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including an acquiring unit, a selecting unit, a mapping unit, a fusion unit and a generation unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the acquiring unit may also be described as "a unit for obtaining at least two frames of images".
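A software realization of this unit decomposition can be sketched as below. The class and parameter names, the choice of the first frame as the target, and the trivial stand-in alignment and fusion functions are all illustrative assumptions, not the disclosure's required structure.

```python
class ImageProcessingApparatus:
    """Illustrative decomposition into the acquiring, selecting,
    mapping, fusion and generation units described above."""

    def __init__(self, align_fn, fuse_fn):
        self.align_fn = align_fn  # mapping unit: warps a frame into the target's coordinates
        self.fuse_fn = fuse_fn    # fusion + generation units: merges aligned frames

    def process(self, frames):
        # acquiring unit: at least two frames of the same scene are required
        if len(frames) < 2:
            raise ValueError("at least two frames of images are required")
        target = frames[0]  # selecting unit (illustrative choice: the first frame)
        aligned = [self.align_fn(f, target) for f in frames[1:]]
        return self.fuse_fn([target] + aligned)

# Usage with trivial stand-ins: identity alignment and per-pixel mean fusion.
apparatus = ImageProcessingApparatus(
    align_fn=lambda img, target: img,
    fuse_fn=lambda imgs: [sum(px) / len(imgs) for px in zip(*imgs)],
)
result = apparatus.process([[10, 20], [12, 18]])
```

Keeping the alignment and fusion behaviors as injected functions mirrors the patent's point that the unit names do not limit the units themselves: any conforming implementation can be substituted per unit.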
The above description is only the preferred embodiments of the present disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of invention involved in the embodiments of the present disclosure is not limited to the technical solutions formed by the specific combinations of the above technical features, and should also cover, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure.