CN114519753A - Image generation method, system, electronic device, storage medium and product - Google Patents

Info

Publication number
CN114519753A
CN114519753A (application CN202210132413.XA)
Authority
CN
China
Prior art keywords: image frame, image, foreground, rotation, translation matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210132413.XA
Other languages
Chinese (zh)
Inventor
蒋海峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Wingtech Information Technology Co Ltd
Original Assignee
Shanghai Wingtech Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Wingtech Information Technology Co Ltd filed Critical Shanghai Wingtech Information Technology Co Ltd
Priority to CN202210132413.XA priority Critical patent/CN114519753A/en
Publication of CN114519753A publication Critical patent/CN114519753A/en
Priority to PCT/CN2022/100540 priority patent/WO2023151214A1/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/60 - Rotation of whole images or parts thereof
    • G06T3/608 - Rotation of whole images or parts thereof by skew deformation, e.g. two-pass or three-pass rotation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/90 - Dynamic range modification of images or parts thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The present application relates to the field of image processing and provides an image generation method, system, electronic device, storage medium, and product. The method comprises: obtaining a first image frame and a second image frame, and performing image segmentation on each to obtain the foreground regions of the first and second image frames; determining whether the foreground region of the first image frame and the foreground region of the second image frame contain matched feature points; if matched feature points exist, solving a rotation and translation matrix based on the matched feature points, and otherwise solving the rotation and translation matrix by a default method; and correcting the foreground region of the second image frame based on the rotation and translation matrix, then generating a composite image from the first image frame and the corrected second image frame. By separating foreground from background and compositing them in a targeted manner, a high-definition image can be synthesized and image synthesis quality is improved.

Description

Image generation method, system, electronic device, storage medium and product
Technical Field
The present application relates to the field of image processing, and in particular, to an image generation method, system, electronic device, storage medium, and product.
Background
With the development of imaging technology in electronic devices, users place ever higher demands on imaging quality, which poses new challenges to manufacturers in how electronic devices process images.
Existing image generation methods on electronic devices synthesize the features of all pictures indiscriminately. Images synthesized this way can look unnatural or exhibit edge blurring, artifacts, and ghosting, which clearly degrades the viewing experience, inconveniences users, and harms the overall user experience.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide an image generation method, system, electronic device, storage medium, and product that can improve imaging quality.
The embodiment of the application provides an image generation method, which comprises the following steps:
obtaining a first image frame and a second image frame, and performing image segmentation on each to obtain the foreground regions of the first image frame and the second image frame;
determining whether the foreground region of the first image frame and the foreground region of the second image frame contain matched feature points;
if matched feature points exist, solving a rotation and translation matrix based on the matched feature points, and otherwise solving the rotation and translation matrix by a default method; and
correcting the foreground region of the second image frame based on the rotation and translation matrix, and generating a composite image from the first image frame and the corrected second image frame.
In one embodiment, solving the rotation and translation matrix based on the matched feature points if they exist, and otherwise solving it by a default method, comprises:
if matched feature points exist, solving over the matched feature points with the RANSAC method to obtain the rotation and translation matrix; otherwise, iteratively minimizing the error between the foreground regions of the first and second image frames with the ECC method to obtain the rotation and translation matrix.
In one embodiment, the foreground regions of the first image frame and the second image frame each include at least one block, and the rotation and translation matrix corresponding to each block is solved according to the matched feature points in each block in the foreground regions of the first image frame and the second image frame.
In one embodiment, correcting the foreground region of the second image frame based on the rotation and translation matrix comprises:
correcting each block of the second image frame based on that block's corresponding rotation and translation matrix.
In one embodiment, before determining whether there are matched feature points in the foreground region of the first image frame and the foreground region of the second image frame, the method further includes:
acquiring feature points of the first image frame and feature points of the second image frame;
and performing feature matching on the feature points of the first image frame and the feature points of the second image frame.
In one embodiment, performing feature matching on the feature points of the first image frame and the feature points of the second image frame includes:
applying a first method to perform an initial matching of the feature points of the first image frame with those of the second image frame;
on the basis of the initial matching, applying a second method to perform a secondary, refined matching of the feature points of the first image frame with those of the second image frame.
An embodiment of the present application provides an image generation system, including:
a segmentation module, configured to obtain a first image frame and a second image frame and perform image segmentation on each to obtain the foreground regions of the first image frame and the second image frame;
a judging module, configured to determine whether the foreground region of the first image frame and the foreground region of the second image frame contain matched feature points;
a solving module, configured to solve the rotation and translation matrix based on the matched feature points if they exist, and otherwise to solve it by a default method; and
a synthesis module, configured to correct the foreground region of the second image frame based on the rotation and translation matrix and to generate a composite image from the first image frame and the corrected second image frame.
An embodiment of the present application provides an electronic device comprising a processor and a memory, the memory storing at least one instruction, program, code set, or instruction set that is loaded and executed by the processor to implement the steps of the image generation method provided in any embodiment of the present application.
Embodiments of the present application provide a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal can perform the steps of the image generation method provided in any embodiment of the present application.
An embodiment of the present application provides a computer program product; when the instructions in the computer program product are executed by a processor of a mobile terminal, the mobile terminal can perform the steps of the image generation method provided in any embodiment of the present application.
According to the image generation method, system, electronic device, storage medium, and product described above, a high-definition image can be synthesized by separating the foreground from the background and compositing them in a targeted manner. Compared with traditional synthesis approaches, this avoids synthesis failing or producing low-definition results when the difference in sharpness between foreground and background is too large, improving the user experience.
Drawings
FIG. 1 is a schematic flow chart diagram of an image generation method in one embodiment;
FIG. 2 is a block diagram showing the configuration of an image generating system according to an embodiment;
FIG. 3 is a diagram of the internal structure of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in FIG. 1, an image generation method is provided. This embodiment is illustrated by applying the method to a mobile terminal; it should be understood that the method can also be applied to a server, or to a system comprising a terminal and a server, implemented through interaction between the terminal and the server.
Step S101: obtaining a first image frame and a second image frame, and performing image segmentation on each to obtain the foreground regions of the first image frame and the second image frame.
Specifically, a first frame and a second frame of an image are acquired, and each is segmented into foreground and background, where the foreground includes but is not limited to people, still objects, and articles, and the background includes but is not limited to scenery and buildings.
In a specific implementation, the first and second frames may be captured by an electronic device such as a smartphone or a tablet computer.
In the above embodiment, the foreground regions of the first image frame and the second image frame each include at least one block, and the rotation and translation matrix corresponding to each block is solved according to the matched feature points in each block in the foreground regions of the first image frame and the second image frame.
Specifically, the first image frame and the second image frame are segmented using a deep-learning semantic segmentation model. Each segmented image comprises at least one block, and the number of pixels in a block depends on the foreground areas of the first and second image frames. Matching is performed on the feature points of the foreground regions of the two frames, and the blocks are rotated, translated, and otherwise transformed so that the first and second image frames become aligned.
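As a minimal sketch of the segmentation step, assuming a binary foreground mask has already been produced by some segmentation model (the patent uses a deep-learning semantic model but does not specify one), the split into foreground and background regions might look like:

```python
import numpy as np

def split_foreground(frame, mask):
    """Split a frame into foreground and background using a binary mask.

    The mask would come from a semantic-segmentation model in the patent's
    pipeline; here it is simply a boolean array (True = foreground).
    """
    fg = np.where(mask[..., None], frame, 0)   # keep only foreground pixels
    bg = np.where(mask[..., None], 0, frame)   # keep only background pixels
    return fg, bg

frame = np.arange(2 * 2 * 3).reshape(2, 2, 3)          # tiny 2x2 RGB "image"
mask = np.array([[True, False], [False, True]])        # hypothetical model output
fg, bg = split_foreground(frame, mask)
```

The two arrays partition the frame exactly, so `fg + bg` reconstructs the original; downstream steps operate only on `fg`.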
In the above step, correcting the foreground region of the second image frame based on the rotation and translation matrix includes: correcting each block of the second image frame based on that block's corresponding rotation and translation matrix.
Specifically, the first image frame serves as the reference frame, and the second image frame is corrected towards it. The blocks of the second image frame are corrected based on the feature points so as to achieve a better visual result.
Step S102: determining whether the foreground region of the first image frame and the foreground region of the second image frame contain matched feature points.
Specifically, it is determined whether the foregrounds of the first and second image frames contain feature points. Feature points can be loosely understood as the more salient points in an image frame, such as contour points, bright points in darker areas, dark points in lighter areas, and Harris corners.
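The Harris corner response mentioned above can be sketched in plain NumPy. This is the textbook formulation (image gradients, smoothed structure tensor, det − k·trace²), not code from the patent:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response for a grayscale float image (toy sketch)."""
    # Image gradients (np.gradient returns derivatives along rows, then cols).
    Iy, Ix = np.gradient(img)

    # 3x3 box filter to accumulate the structure tensor in a neighbourhood.
    def box(a):
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    # Harris response: det(M) - k * trace(M)^2.
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

img = np.zeros((9, 9))
img[4:, 4:] = 1.0                 # bright quadrant whose corner sits at (4, 4)
r = harris_response(img)
corner = np.unravel_index(np.argmax(r), r.shape)
```

Edges score near zero or negative (only one gradient direction has energy), while the true corner, where both directions carry energy, gets the maximal response.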
Step S103: if matched feature points exist, solving a rotation and translation matrix based on them; otherwise, solving the rotation and translation matrix by a default method.
Specifically, if matchable feature points exist, the second image frame is adjusted based on RANSAC. If no such feature points exist, iterative computation is performed using image registration based on Enhanced Correlation Coefficient (ECC) maximization.
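A hedged sketch of the feature-point branch: estimating a 2D rotation and translation from matched point pairs with RANSAC. The two-point minimal sample and the Kabsch-style least-squares fit are standard techniques chosen for illustration; the patent does not detail its solver, and all names here are illustrative:

```python
import numpy as np

def estimate_rigid(src, dst):
    """Least-squares 2D rotation + translation mapping src -> dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # correct an accidental reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def ransac_rigid(src, dst, iters=200, thresh=0.5, seed=0):
    """RANSAC: fit on random 2-point samples, keep the model with most inliers."""
    rng = np.random.default_rng(seed)
    best = None, None, -1
    for _ in range(iters):
        idx = rng.choice(len(src), 2, replace=False)
        R, t = estimate_rigid(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = (err < thresh).sum()
        if inliers > best[2]:
            best = R, t, inliers
    return best

theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([2.0, -1.0])
src = np.random.default_rng(1).uniform(0, 10, (30, 2))
dst = src @ R_true.T + t_true
dst[0] += 50.0                        # one gross mismatch (outlier)
R, t, inliers = ransac_rigid(src, dst)
```

Despite the deliberately corrupted correspondence, the recovered matrix matches the true rotation and translation, which is exactly the robustness the patent relies on RANSAC for.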
Step S104: correcting the foreground region of the second image frame based on the rotation and translation matrix, and generating a composite image from the first image frame and the corrected second image frame.
Specifically, the second image frame is corrected according to the corresponding rule, and the corrected second image frame is merged with the first image frame into a new image frame.
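The correction-and-composite step might be sketched as follows, using a nearest-neighbour inverse warp and treating nonzero pixels as foreground. This convention and the blending rule are toy assumptions; the patent does not specify warping or blending details:

```python
import numpy as np

def warp_rigid(img, R, t):
    """Nearest-neighbour inverse warp: output(x) = img(R^-1 (x - t))."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)  # (x, y)
    src = (pts - t) @ np.linalg.inv(R).T        # pull-back coordinates
    sx = np.round(src[:, 0]).astype(int)
    sy = np.round(src[:, 1]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img)
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out

def composite(first, second_fg, R, t):
    """Correct the second frame's foreground, then paste it over the first."""
    corrected = warp_rigid(second_fg, R, t)
    return np.where(corrected > 0, corrected, first)   # toy foreground test

first = np.full((4, 4), 9)               # reference frame
second_fg = np.zeros((4, 4), dtype=int)
second_fg[1, 1] = 5                      # a single foreground pixel
out = composite(first, second_fg, np.eye(2), np.array([1.0, 1.0]))
```

With an identity rotation and a (1, 1) translation, the foreground pixel lands at (2, 2) of the composite while the rest of the reference frame shows through.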
In the above steps, solving a rotation and translation matrix based on the matched feature points if they exist, and otherwise solving it by a default method, includes:
if matched feature points exist, solving over the matched feature points with the RANSAC method to obtain the rotation and translation matrix; otherwise, iteratively minimizing the error between the foreground regions of the first and second image frames with the ECC method to obtain the rotation and translation matrix.
Specifically, RANSAC stands for RANdom SAmple Consensus. It iteratively estimates the parameters of a mathematical model from a set of observations that contains outliers. For example, to fit a suitable 2D line to a set of observations: assuming the data consist of inliers, which approximately lie on the line, and outliers, which lie far from it, RANSAC can, with sufficiently high probability, produce a model computed from the inliers only. ECC refers to the Enhanced Correlation Coefficient; its advantage is invariance to photometric distortions in contrast and brightness.
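As a toy stand-in for the ECC-based default method, the sketch below aligns two foregrounds by exhaustively searching small integer translations for the minimum squared error. Real ECC maximizes an enhanced correlation coefficient over a parametric warp (e.g. full rigid or affine motion); this simplification only illustrates the "iteratively compute the minimum error" idea:

```python
import numpy as np

def align_translation(ref, mov, radius=3):
    """Toy ECC stand-in: try small shifts of `mov`, keep the one that
    minimizes squared error against `ref`."""
    best, best_err = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            err = ((shifted - ref) ** 2).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

ref = np.zeros((8, 8))
ref[3:5, 3:5] = 1.0                                  # small bright foreground
mov = np.roll(np.roll(ref, -2, axis=0), 1, axis=1)   # shifted copy of it
shift = align_translation(ref, mov)
```

The search recovers the (dy, dx) shift that undoes the displacement, after which the two foregrounds coincide exactly.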
In the above step, before determining whether there is a matching feature point in the foreground region of the first image frame and the foreground region of the second image frame, the method further includes:
acquiring feature points of the first image frame and feature points of the second image frame;
and performing feature matching on the feature points of the first image frame and the feature points of the second image frame.
Specifically, the feature points of the first and second image frames may be, for example, points whose brightness difference exceeds a preset threshold, points whose color change exceeds a preset threshold, or corner points. The feature points of the two image frames are matched to facilitate subsequent operations on the images.
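The matching procedure itself is not specified in the patent; a common choice is nearest-neighbour descriptor matching with Lowe's ratio test, sketched here with tiny made-up descriptors:

```python
import numpy as np

def match_features(desc1, desc2, ratio=0.8):
    """Nearest-neighbour matching with a ratio test: accept a match only when
    the best candidate clearly beats the second best (a standard heuristic,
    not one named by the patent)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j, k = np.argsort(dists)[:2]          # two closest descriptors
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches

desc1 = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 0.0]])
desc2 = np.array([[0.1, 0.0], [5.0, 5.1], [4.9, 5.0]])
matches = match_features(desc1, desc2)
```

Only the unambiguous pair survives: the second descriptor has two nearly identical candidates and the third has no close match, so both are rejected, which is precisely what keeps spurious correspondences out of the later matrix solve.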
In the above step, performing feature matching on the feature points of the first image frame and the feature points of the second image frame includes:
applying a first method to perform an initial matching of the feature points of the first image frame with those of the second image frame; and, on the basis of that initial matching, applying a second method to perform a secondary, refined matching of those feature points.
Specifically, a second, more precise matching is performed on the already matched feature points so as to achieve a better visual result.
In summary, the image generation method provided by the present application can synthesize a high-definition image by separating the foreground from the background and compositing them in a targeted manner. Compared with traditional synthesis approaches, this avoids synthesis failing or producing low-definition results when the difference in sharpness between foreground and background is too large, improving the user experience.
It should be understood that although the steps in the flowchart of FIG. 1 are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in FIG. 1 may comprise sub-steps or stages that are not necessarily completed at the same moment but may execute at different times; their execution order is likewise not necessarily sequential, and they may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 2, there is provided an image generation system comprising: a segmentation module 210, a judgment module 220, a solution module 230, and a synthesis module 240.
A segmentation module 210, configured to obtain a first image frame and a second image frame, and perform image segmentation on the first image frame and the second image frame respectively to obtain foreground regions of the first image frame and the second image frame;
a determining module 220, configured to determine whether there are matched feature points in the foreground region of the first image frame and the foreground region of the second image frame;
a solving module 230, configured to solve the rotational-translational matrix based on the matched feature points if the matched feature points exist, or else solve the rotational-translational matrix based on a default method;
a synthesizing module 240, configured to correct a foreground region of the second image frame based on the rotational-translational matrix, and generate a synthesized image according to the first image frame and the corrected second image frame.
In one embodiment, the solving module 230 is further configured to:
and if the matched characteristic points exist, solving the matched characteristic points by using a RANSAC method to obtain the rotation and translation matrix, otherwise, iteratively calculating the minimum error of the foreground regions of the first image frame and the second image frame by using an ECC method to obtain the rotation and translation matrix.
In summary, the image generation system provided by the present application can synthesize a high-definition image by separating the foreground from the background and compositing them in a targeted manner. Compared with traditional synthesis approaches, this avoids synthesis failing or producing low-definition results when the difference in sharpness between foreground and background is too large, improving the user experience.
For specific limitations of the image generation system, reference may be made to the above limitations of the image generation method, which are not repeated here. All or part of the modules in the above image generation system may be implemented in software, hardware, or a combination thereof. The modules may be embedded in hardware in, or independent of, a processor in a computer device, or stored in software in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, an electronic device is provided, which may be a terminal device whose internal structure may be as shown in FIG. 3. The terminal device comprises a processor, a memory, a communication interface, a display screen, and an input device connected through a system bus. The processor of the terminal device provides computing and control capabilities. The memory of the terminal device comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running them. The communication interface of the terminal device is used to communicate with external terminals in a wired or wireless manner, the wireless manner being realized through WIFI, a carrier network, Near Field Communication (NFC), or other technologies. The computer program, when executed by the processor, implements an image generation method. The display screen of the terminal device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a key, trackball, or touchpad on the housing of the terminal device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will understand that the configuration shown in FIG. 3 is merely a block diagram of a partial structure related to the present application and does not limit the electronic device to which the present application is applied; a particular electronic device may include more or fewer components than shown, combine certain components, or arrange components differently.
In one embodiment, the image generation system provided herein may be implemented as a computer program executable on a computer device such as that shown in FIG. 3. The memory of the computer device may store the program modules constituting the image generation system, such as the segmentation module, judging module, solving module, and synthesis module shown in FIG. 2. The computer program constituted by these modules causes the processor to execute the steps of the image generation methods of the embodiments described in this specification.
For example, the electronic device shown in FIG. 3 may execute, through the segmentation module of the image generation system shown in FIG. 2, the steps of obtaining a first image frame and a second image frame and performing image segmentation on each to obtain their foreground regions. Through the judging module, it determines whether the foreground regions of the first and second image frames contain matched feature points. Through the solving module, it solves the rotation and translation matrix based on the matched feature points if they exist, and otherwise by a default method. Through the synthesis module, it corrects the foreground region of the second image frame based on the rotation and translation matrix and generates a composite image from the first image frame and the corrected second image frame.
In one embodiment, an electronic device is provided, comprising a memory storing a computer program and a processor that, when executing the computer program, implements the following steps: obtaining a first image frame and a second image frame, and performing image segmentation on each to obtain the foreground regions of the first and second image frames; determining whether the foreground region of the first image frame and the foreground region of the second image frame contain matched feature points; if matched feature points exist, solving a rotation and translation matrix based on the matched feature points, and otherwise solving the rotation and translation matrix by a default method; and correcting the foreground region of the second image frame based on the rotation and translation matrix, and generating a composite image from the first image frame and the corrected second image frame.
In one embodiment, the processor, when executing the computer program, further performs the following: if matched feature points exist, solving over the matched feature points with the RANSAC method to obtain the rotation and translation matrix; otherwise, iteratively minimizing the error between the foreground regions of the first and second image frames with the ECC method to obtain the rotation and translation matrix.
In conclusion, the electronic device provided by the present application can synthesize a high-definition image by separating the foreground from the background and compositing them in a targeted manner. Compared with traditional synthesis approaches, this avoids synthesis failing or producing low-definition results when the difference in sharpness between foreground and background is too large, improving the user experience.
In one embodiment, a non-transitory computer-readable storage medium is provided, storing a computer program that, when executed by a processor, implements the following steps: obtaining a first image frame and a second image frame, and performing image segmentation on each to obtain the foreground regions of the first and second image frames; determining whether the foreground region of the first image frame and the foreground region of the second image frame contain matched feature points; if matched feature points exist, solving a rotation and translation matrix based on the matched feature points, and otherwise solving the rotation and translation matrix by a default method; and correcting the foreground region of the second image frame based on the rotation and translation matrix, and generating a composite image from the first image frame and the corrected second image frame.
In one embodiment, the computer program, when executed by the processor, further performs the following: if matched feature points exist, solving over the matched feature points with the RANSAC method to obtain the rotation and translation matrix; otherwise, iteratively minimizing the error between the foreground regions of the first and second image frames with the ECC method to obtain the rotation and translation matrix.
In summary, the non-transitory computer-readable storage medium provided by the present application can synthesize a high-definition image by separating the foreground from the background and compositing them in a targeted manner. Compared with traditional synthesis approaches, this avoids synthesis failing or producing low-definition results when the difference in sharpness between foreground and background is too large, improving the user experience.
In one embodiment, a computer program product is provided; when the instructions in it are executed by a processor of a mobile terminal, the mobile terminal performs the following steps: obtaining a first image frame and a second image frame, and performing image segmentation on each to obtain the foreground regions of the first and second image frames; determining whether the foreground region of the first image frame and the foreground region of the second image frame contain matched feature points; if matched feature points exist, solving a rotation and translation matrix based on the matched feature points, and otherwise solving the rotation and translation matrix by a default method; and correcting the foreground region of the second image frame based on the rotation and translation matrix, and generating a composite image from the first image frame and the corrected second image frame.
In one embodiment, if matched feature points exist, the matched feature points are solved by a RANSAC method to obtain the rotation and translation matrix; otherwise, the minimum error between the foreground regions of the first image frame and the second image frame is iteratively calculated by an ECC method to obtain the rotation and translation matrix.
In conclusion, by separating the foreground from the background and synthesizing each in a targeted manner, an image with high definition can be synthesized. Compared with the traditional synthesis method, this avoids the problem that synthesis fails or the synthesized image has low definition when the difference in definition between the foreground and the background is too large, thereby improving the user experience.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program, which may be stored in a non-volatile computer-readable storage medium and which, when executed, may include the processes of the above method embodiments. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as Static Random Access Memory (SRAM) and Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination is not contradictory, it should be considered within the scope of this specification. The above embodiments express only several implementations of the present application, and their description, while specific and detailed, should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art may make several variations and improvements without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An image generation method, comprising:
obtaining a first image frame and a second image frame, and performing image segmentation on the first image frame and the second image frame respectively to obtain foreground regions of the first image frame and the second image frame;
judging whether matched feature points exist between the foreground region of the first image frame and the foreground region of the second image frame;
if matched feature points exist, solving a rotation and translation matrix based on the matched feature points, otherwise solving the rotation and translation matrix based on a default method; and
correcting the foreground region of the second image frame based on the rotation and translation matrix, and generating a composite image according to the first image frame and the corrected second image frame.
2. The image generation method according to claim 1, wherein solving the rotation and translation matrix based on the matched feature points if matched feature points exist, and otherwise solving the rotation and translation matrix based on a default method, comprises:
if matched feature points exist, solving for the matched feature points by a RANSAC method to obtain the rotation and translation matrix; otherwise, iteratively calculating the minimum error between the foreground regions of the first image frame and the second image frame by an ECC method to obtain the rotation and translation matrix.
3. The image generation method according to claim 1, wherein the foreground regions of the first image frame and the second image frame each comprise at least one block, and the rotation and translation matrix corresponding to each block is solved according to the matched feature points in that block of the foreground regions of the first image frame and the second image frame.
4. The image generation method according to claim 3, wherein the correcting the foreground region of the second image frame based on the rotation and translation matrix comprises:
correcting each block of the second image frame based on the rotation and translation matrix corresponding to that block.
5. The image generation method according to any one of claims 1 to 4, further comprising, before judging whether matched feature points exist between the foreground region of the first image frame and the foreground region of the second image frame:
acquiring feature points of the first image frame and feature points of the second image frame; and
performing feature matching on the feature points of the first image frame and the feature points of the second image frame.
6. The image generation method according to claim 5, wherein the performing feature matching on the feature points of the first image frame and the feature points of the second image frame comprises:
performing primary matching on the feature points of the first image frame and the feature points of the second image frame by a first method; and
performing secondary matching on the feature points of the first image frame and the feature points of the second image frame by a second method on the basis of the primary matching.
7. An image generation system, comprising:
the image segmentation device comprises a segmentation module, a foreground region acquisition module and a foreground region acquisition module, wherein the segmentation module is used for acquiring a first image frame and a second image frame and respectively carrying out image segmentation on the first image frame and the second image frame to obtain foreground regions of the first image frame and the second image frame;
the judging module is used for judging whether the foreground area of the first image frame and the foreground area of the second image frame have matched feature points or not;
the solving module is used for solving the rotation and translation matrix based on the matched characteristic points if the matched characteristic points exist, or else, solving the rotation and translation matrix based on a default method;
and the synthesis module is used for correcting the foreground area of the second image frame based on the rotation and translation matrix and generating a synthesized image according to the first image frame and the corrected second image frame.
8. An electronic device comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the image generation method according to any one of claims 1 to 6.
9. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the image generation method according to any one of claims 1 to 6.
10. A computer program product, characterized in that instructions in the computer program product, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the image generation method according to any one of claims 1 to 6.
CN202210132413.XA 2022-02-14 2022-02-14 Image generation method, system, electronic device, storage medium and product Pending CN114519753A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210132413.XA CN114519753A (en) 2022-02-14 2022-02-14 Image generation method, system, electronic device, storage medium and product
PCT/CN2022/100540 WO2023151214A1 (en) 2022-02-14 2022-06-22 Image generation method and system, electronic device, storage medium, and product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210132413.XA CN114519753A (en) 2022-02-14 2022-02-14 Image generation method, system, electronic device, storage medium and product

Publications (1)

Publication Number Publication Date
CN114519753A true CN114519753A (en) 2022-05-20

Family

ID=81597033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210132413.XA Pending CN114519753A (en) 2022-02-14 2022-02-14 Image generation method, system, electronic device, storage medium and product

Country Status (2)

Country Link
CN (1) CN114519753A (en)
WO (1) WO2023151214A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023151214A1 (en) * 2022-02-14 2023-08-17 上海闻泰信息技术有限公司 Image generation method and system, electronic device, storage medium, and product

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100128971A1 (en) * 2008-11-25 2010-05-27 Nec System Technologies, Ltd. Image processing apparatus, image processing method and computer-readable recording medium
CN110689554B (en) * 2019-09-25 2022-04-12 深圳大学 Background motion estimation method and device for infrared image sequence and storage medium
US11830208B2 (en) * 2020-03-25 2023-11-28 Intel Corporation Robust surface registration based on parameterized perspective of image templates
CN113837936B (en) * 2020-06-24 2024-08-02 上海汽车集团股份有限公司 Panoramic image generation method and device
CN114519753A (en) * 2022-02-14 2022-05-20 上海闻泰信息技术有限公司 Image generation method, system, electronic device, storage medium and product


Also Published As

Publication number Publication date
WO2023151214A1 (en) 2023-08-17

Similar Documents

Publication Publication Date Title
US11373275B2 (en) Method for generating high-resolution picture, computer device, and storage medium
CN109829930B (en) Face image processing method and device, computer equipment and readable storage medium
WO2018176925A1 (en) Hdr image generation method and apparatus
US10929648B2 (en) Apparatus and method for data processing
CN110378846B (en) Method, device, medium and electronic equipment for processing image buffing
CN109584179A (en) A kind of convolutional neural networks model generating method and image quality optimization method
US20200380639A1 (en) Enhanced Image Processing Techniques for Deep Neural Networks
WO2021052028A1 (en) Image color migration method, apparatus, computer device and storage medium
CN107209925A (en) Method, device and computer program product for generating super-resolution image
CN107564085B (en) Image warping processing method and device, computing equipment and computer storage medium
CN112767281A (en) Image ghost eliminating method, device, electronic equipment and storage medium
CN110580693A (en) Image processing method, image processing device, computer equipment and storage medium
US20220398704A1 (en) Intelligent Portrait Photography Enhancement System
CN114519753A (en) Image generation method, system, electronic device, storage medium and product
CN117830077A (en) Image processing method and device and electronic equipment
CN111540060B (en) Display calibration method and device of augmented reality equipment and electronic equipment
WO2021035979A1 (en) Image filling method and apparatus based on edge learning, terminal, and readable storage medium
CN116506732A (en) Image snapshot anti-shake method, device and system and computer equipment
US10354125B2 (en) Photograph processing method and system
CN115049572A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN115082345A (en) Image shadow removing method and device, computer equipment and storage medium
CN110913193B (en) Image processing method, device, apparatus and computer readable storage medium
CN113422967A (en) Screen projection display control method and device, terminal equipment and storage medium
CN113674169A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN118317144B (en) Video processing method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination