CN113902657A - Image splicing method and device and electronic equipment - Google Patents

Image splicing method and device and electronic equipment

Info

Publication number
CN113902657A
Authority
CN
China
Prior art keywords
image
overlapping area
fusion
sub
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110990427.0A
Other languages
Chinese (zh)
Inventor
刘伟舟
胡晨
周舒畅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd, Beijing Megvii Technology Co Ltd filed Critical Beijing Kuangshi Technology Co Ltd
Priority to CN202110990427.0A priority Critical patent/CN113902657A/en
Publication of CN113902657A publication Critical patent/CN113902657A/en
Priority to PCT/CN2022/102342 priority patent/WO2023024697A1/en
Pending legal-status Critical Current

Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06N 3/045: Combinations of networks (neural network architecture, e.g. interconnection topology)
    • G06N 3/08: Learning methods (neural networks)
    • G06T 2207/10024: Color image (image acquisition modality)
    • G06T 2207/20081: Training; Learning (special algorithmic details)
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20221: Image fusion; Image merging (image combination)


Abstract

The invention provides an image splicing method, an image splicing device and electronic equipment, wherein a first image and a second image to be spliced are obtained; determining an initial spliced image of the first image and the second image; performing fusion processing on a target overlapping area in the initial splicing image by using a first neural network model to obtain a fusion overlapping area corresponding to the target overlapping area; and determining target spliced images corresponding to the first image and the second image based on the fusion overlapping region and the initial spliced image. In the method, after the initial splicing image of the first image and the second image is determined, the first neural network model is adopted to perform fusion processing on the target overlapping area in the initial splicing image to obtain the corresponding fusion overlapping area, and fusion-related calculation of each pixel in the initial splicing image based on a CPU is not needed, so that the time for performing fusion calculation on all pixels in the initial splicing image is saved, the fusion processing efficiency is improved, and the image splicing processing efficiency is further improved.

Description

Image splicing method and device and electronic equipment
Technical Field
The invention relates to the technical field of image processing, in particular to an image stitching method, an image stitching device and electronic equipment.
Background
Image stitching is the process of combining multiple images with overlapping fields of view to produce an image with a larger field angle and higher resolution. In the related art, a Central Processing Unit (CPU) is generally used to perform fusion processing on a picture to be stitched, and since the CPU needs a certain processing time to calculate each pixel, the time for performing fusion calculation on all pixels in the initially stitched picture is long, which reduces the efficiency of fusion processing, and further reduces the efficiency of image stitching.
Disclosure of Invention
The invention aims to provide an image splicing method, an image splicing device and electronic equipment, so as to improve the processing efficiency of image splicing.
The invention provides an image splicing method, which comprises the following steps: acquiring a first image and a second image to be spliced; determining an initial spliced image of the first image and the second image; performing fusion processing on a target overlapping area in the initial splicing image by using a first neural network model to obtain a fusion overlapping area corresponding to the target overlapping area; and determining target spliced images corresponding to the first image and the second image based on the fusion overlapping region and the initial spliced image.
Further, before determining an initial stitched image of the first image and the second image, the method further comprises: performing illumination compensation on the second image; accordingly, determining an initial stitched image of the first image and the second image comprises: and determining an initial spliced image based on the first image and the second image after illumination compensation.
Further, the step of performing illumination compensation on the second image comprises: and performing illumination compensation on the second image based on the second neural network model and the first image.
Further, performing illumination compensation on the second image based on the second neural network model and the first image, including: determining a projective transformation matrix based on the first image and the second image; determining an initial overlapping region of the first image and the second image based on the projective transformation matrix; wherein the initial overlap region comprises: a first sub-overlapping area corresponding to the first image and a second sub-overlapping area corresponding to the second image; inputting the first sub-overlapping area and the second sub-overlapping area into a second neural network model, and determining the mapping relation between the first pixel value of each pixel in the first sub-overlapping area and the second pixel value of the pixel at the same position in the second sub-overlapping area through the second neural network model; obtaining a mapping relation output by a second neural network model; and matching the pixel value of each pixel point in the second image in the color channel with the pixel value of each pixel point in the first image in the color channel based on the mapping relation aiming at each color channel so as to perform illumination compensation on the second image.
Further, the step of determining an initial overlapping area of the first image and the second image based on the projective transformation matrix comprises: acquiring boundary coordinates of a second image; wherein the boundary coordinates are used to indicate an image area of the second image; determining boundary coordinates after projective transformation based on the projective transformation matrix and the boundary coordinates of the second image; determining a projectively transformed second image based on the projectively transformed boundary coordinates; and determining a superposed image area of the second image after projective transformation and the first image as an initial superposed area.
Further, the step of determining a projective transformation matrix based on the first image and the second image comprises: extracting at least one first characteristic point in the first image and at least one second characteristic point in the second image; determining at least one matching feature point pair based on the at least one first feature point and the at least one second feature point; determining a projective transformation matrix based on the at least one matching characteristic point pair.
Further, the step of performing illumination compensation on the second image based on the second neural network model and the first image comprises: and inputting the first image and the second image into a second neural network model, and performing illumination compensation on the second image through a second neural network based on the first image to obtain a second image after the illumination compensation.
Further, the target overlap region includes: the third sub-overlapping area corresponding to the first image and the fourth sub-overlapping area corresponding to the second image after illumination compensation; the first neural network model comprises a splicing model and a fusion model; the step of performing fusion processing on the target overlapping area in the initial spliced image by using the first neural network model to obtain a fusion overlapping area corresponding to the target overlapping area comprises the following steps: inputting the third sub-overlapping area and the fourth sub-overlapping area into a splicing model, and searching for a splicing seam between the third sub-overlapping area and the fourth sub-overlapping area through the splicing model to obtain a splicing seam area corresponding to the third sub-overlapping area and the fourth sub-overlapping area; fusing the third sub-overlapping area and the fourth sub-overlapping area based on the splicing seam area to obtain an initial fusion overlapping area; and inputting the initial fusion overlapping area, the third sub-overlapping area and the fourth sub-overlapping area into a fusion model, and performing fusion processing on the initial fusion overlapping area, the third sub-overlapping area and the fourth sub-overlapping area through the fusion model to obtain a fusion overlapping area.
Further, the target overlap region includes: the third sub-overlapping area corresponding to the first image and the fourth sub-overlapping area corresponding to the second image after illumination compensation; the first neural network model comprises a stitching model; the step of performing fusion processing on the target overlapping area in the initial spliced image by using the first neural network model to obtain a fusion overlapping area corresponding to the target overlapping area comprises the following steps: inputting the third sub-overlapping area and the fourth sub-overlapping area into a splicing model to obtain a splicing area corresponding to the third sub-overlapping area and the fourth sub-overlapping area; and performing feathering treatment on the seam splicing region to obtain an overlapped region after feathering, and determining the overlapped region after feathering as a fusion overlapped region.
Further, the fusion model is obtained by training in the following way: acquiring a first picture; carrying out translation and/or rotation processing on the first picture to obtain a second picture; performing fusion processing on the first picture and the second picture to obtain an initial fusion picture; and training a fusion model based on the first picture, the second picture and the initial fusion picture.
Further, the step of determining a target stitched image corresponding to the first image and the second image based on the fused overlapping region and the initial stitched image includes: and replacing the target overlapping area in the initial spliced image with the fusion overlapping area to obtain a target spliced image corresponding to the first image and the second image.
Further, the color channels of the first image and the second image are arranged in an RGGB manner.
The invention provides an image splicing device, which comprises: the acquisition module is used for acquiring a first image and a second image to be spliced; the first determining module is used for determining an initial splicing image of the first image and the second image; the fusion module is used for carrying out fusion processing on the target overlapping area in the initial splicing image by utilizing the first neural network model to obtain a fusion overlapping area corresponding to the target overlapping area; and the second determining module is used for determining the target spliced image corresponding to the first image and the second image based on the fusion overlapping area and the initial spliced image.
The invention provides an electronic device which comprises a processing device and a storage device, wherein the storage device stores a computer program, and the computer program executes the image splicing method according to any one of the above methods when the computer program is run by the processing device.
The invention provides a machine-readable storage medium, which stores a computer program, and the computer program is executed by a processing device to execute any one of the image stitching methods.
The invention provides an image splicing method, an image splicing device and electronic equipment, wherein a first image and a second image to be spliced are obtained; determining an initial spliced image of the first image and the second image; then, carrying out fusion processing on the target overlapping area in the initial splicing image by using a first neural network model to obtain a fusion overlapping area corresponding to the target overlapping area; and finally, determining target spliced images corresponding to the first image and the second image based on the fusion overlapping area and the initial spliced image. In the method, after the initial splicing image of the first image and the second image is determined, the first neural network model is adopted to perform fusion processing on the target overlapping area in the initial splicing image to obtain the corresponding fusion overlapping area, and fusion-related calculation of each pixel in the initial splicing image based on a CPU is not needed, so that the time for performing fusion calculation on all pixels in the initial splicing image is saved, the fusion processing efficiency is improved, and the image splicing processing efficiency is further improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic structural diagram of an electronic system according to an embodiment of the present invention;
FIG. 2 is a flowchart of an image stitching method according to an embodiment of the present invention;
FIG. 3 is a flowchart of another image stitching method according to an embodiment of the present invention;
FIG. 4 is a flowchart of another image stitching method according to an embodiment of the present invention;
FIG. 5 is a flowchart of another image stitching method according to an embodiment of the present invention;
fig. 6 is a flowchart of a RAW domain image stitching method according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an image stitching apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In recent years, technical research based on artificial intelligence, such as computer vision, deep learning, machine learning, image processing, and image recognition, has been actively developed. Artificial Intelligence (AI) is an emerging scientific technology for studying and developing theories, methods, techniques and application systems for simulating and extending human intelligence. The artificial intelligence subject is a comprehensive subject and relates to various technical categories such as chips, big data, cloud computing, internet of things, distributed storage, deep learning, machine learning and neural networks. Computer vision is used as an important branch of artificial intelligence, particularly a machine is used for identifying the world, and the computer vision technology generally comprises the technologies of face identification, living body detection, fingerprint identification and anti-counterfeiting verification, biological feature identification, face detection, pedestrian detection, target detection, pedestrian identification, image processing, image identification, image semantic understanding, image retrieval, character identification, video processing, video content identification, behavior identification, three-dimensional reconstruction, virtual reality, augmented reality, synchronous positioning and map construction (SLAM), computational photography, robot navigation and positioning and the like. With the research and progress of artificial intelligence technology, the technology is applied to various fields, such as security, city management, traffic management, building management, park management, face passage, face attendance, logistics management, warehouse management, robots, intelligent marketing, computational photography, mobile phone images, cloud services, smart homes, wearable equipment, unmanned driving, automatic driving, smart medical treatment, face payment, face unlocking, fingerprint unlocking, testimony verification, smart screens, smart televisions, cameras, mobile internet, live webcasts, beauty treatment, medical beauty treatment, intelligent temperature measurement and the like.
Image stitching is the process of combining multiple images with overlapping fields of view to produce an image with a larger field of view and higher resolution. The field of view of a single human eye is about 120 degrees, and that of both eyes together is generally about 160-220 degrees, while the FOV (Field Of View) of a common camera is generally only about 40-60 degrees; it is difficult for high-definition imaging of a detailed object to also guarantee a large imaging field of view. Through image stitching, several cameras with small fields of view can be combined into a multi-camera system with a large field of view, which has important application value in fields such as security, teleconferencing and sports event broadcasting. In the related art, image stitching is usually performed on RGB (R: Red; G: Green; B: Blue) domain images, whereas newer image processing and analysis pipelines are usually built on RAW domain images; for example, scene problems such as dim light and backlight that cannot be handled well on RGB domain images are addressed by detecting, segmenting or recognizing RAW domain images. When image processing and analysis are mainly concentrated on RAW domain images but stitching is performed on the corresponding RGB domain images, the RGB domain images lack part of the image detail of the RAW domain images, so the stitched RGB domain images lack the corresponding image analysis results, which is unfavorable for later processing such as image detection and segmentation. In addition, in the related art a Central Processing Unit (CPU) is usually used to perform the fusion-related calculation for each pixel in the initially stitched picture, and since the CPU needs a certain processing time to calculate each pixel, the time for performing the fusion calculation on all pixels in the initially stitched picture is long, which reduces the fusion processing efficiency and further reduces the processing efficiency of image stitching. Based on this, embodiments of the present invention provide an image stitching method and device and an electronic device; the technique may be applied to any application that needs to perform image stitching. Embodiments of the present invention are described in detail below.
First, an example electronic system 100 for implementing the image stitching method, apparatus, and electronic device of embodiments of the present invention is described with reference to fig. 1.
As shown in FIG. 1, an electronic system 100 includes one or more processing devices 102, one or more memory devices 104, an input device 106, an output device 108, and one or more image capture devices 110, which are interconnected via a bus system 112 and/or other type of connection mechanism (not shown). It should be noted that the components and structure of the electronic system 100 shown in fig. 1 are exemplary only, and not limiting, and that the electronic system may have other components and structures as desired.
The processing device 102 may be a gateway or an intelligent terminal, or a device including a Central Processing Unit (CPU) or other form of processing unit having data processing capability and/or instruction execution capability, and may process data of other components in the electronic system 100 and may control other components in the electronic system 100 to perform desired functions.
The storage 104 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. On which one or more computer program instructions may be stored that may be executed by processing device 102 to implement client functionality (implemented by the processing device) and/or other desired functionality in embodiments of the present invention described below. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may capture preview video frames or image data and store the captured preview video frames or image data in the storage 104 for use by other components.
For example, the devices in the electronic system for implementing the image stitching method, apparatus and electronic device according to the embodiments of the present invention may be integrally disposed, or may be disposed in a decentralized manner, such as integrally disposing the processing device 102, the storage device 104, the input device 106 and the output device 108, and disposing the image capturing device 110 at a designated position where a target image can be captured. When the above-mentioned devices in the electronic system are integrally provided, the electronic system may be implemented as an intelligent terminal such as a camera, a smart phone, a tablet computer, a vehicle-mounted terminal, or may be a server.
The embodiment provides an image stitching method, which is executed by a processing device in the electronic system; the processing device may be any device or chip having data processing capabilities. As shown in fig. 2, the method comprises the steps of:
step S202, a first image and a second image to be spliced are obtained.
In the first image and the second image, either image may serve as the reference image with the other as the image to be registered, which may be selected according to actual requirements; for convenience of description, the first image is taken as the reference image and the second image as the image to be registered. The first image is usually located in the reference image coordinate system and may be denoted as the tar (target) image; the second image is usually located in the coordinate system of the image to be registered and may be denoted as the src (source) image. In actual implementation, when images need to be stitched, the first image and the second image to be stitched generally need to be acquired first, and they are generally images whose fields of view partially overlap.
Step S204, determining an initial splicing image of the first image and the second image.
The obtained first image and the second image are initially stitched, for example, if the first image is used as a reference image and the second image is used as an image to be registered, the second image may be projected to the first image for initial stitching, so as to obtain an initial stitched image obtained by stitching the first image and the second image.
And S206, performing fusion processing on the target overlapping area in the initial spliced image by using the first neural network model to obtain a fusion overlapping area corresponding to the target overlapping area.
The first neural network model can be realized by various convolutional neural networks, such as a residual error network, a VGG network and the like; since the first image and the second image are images with a part of overlapped view fields, the target overlapped area can be understood as an overlapped area after the initial splicing processing is carried out on the first image and the second image; in practical implementation, after the initial stitched image is determined, a target overlapping region where the first image and the second image are overlapped may be determined from the initial stitched image, and the target overlapping region is subjected to fusion processing by using the first neural network, so as to obtain a corresponding fusion overlapping region.
And S208, determining target spliced images corresponding to the first image and the second image based on the fusion overlapping area and the initial spliced image.
In actual implementation, after the fusion overlapping region is determined, a target spliced image of the first image and the second image can be obtained based on the fusion overlapping region and the initial spliced image; the target mosaic image can be a seamless panoramic image or a high-resolution image formed by splicing two or more images with overlapped parts; wherein two or more images with overlapping portions may be images taken at different times, different viewing angles, or different sensors.
The image splicing method comprises the steps of firstly, obtaining a first image and a second image to be spliced; determining an initial spliced image of the first image and the second image; then, carrying out fusion processing on the target overlapping area in the initial splicing image by using a first neural network model to obtain a fusion overlapping area corresponding to the target overlapping area; and finally, determining target spliced images corresponding to the first image and the second image based on the fusion overlapping area and the initial spliced image. In the method, after the initial splicing image of the first image and the second image is determined, the first neural network model is adopted to perform fusion processing on the target overlapping area in the initial splicing image to obtain the corresponding fusion overlapping area, and fusion-related calculation of each pixel in the initial splicing image based on a CPU is not needed, so that the time for performing fusion calculation on all pixels in the initial splicing image is saved, the fusion processing efficiency is improved, and the image splicing processing efficiency is further improved.
The embodiment of the invention also provides another image splicing method which is realized on the basis of the method of the embodiment; as shown in fig. 3, the method comprises the steps of:
step S302, a first image and a second image to be spliced are obtained.
And step S304, performing illumination compensation on the second image.
The step S304 can be specifically realized by the following step one:
and step one, performing illumination compensation on the second image based on the second neural network model and the first image.
The second neural network model can be realized by various convolutional neural networks, such as a residual error network, a VGG network and the like; the second neural network model may be a network model different from the first neural network model, or may be the first neural network model, and the process of performing illumination compensation is only executed by a certain sub-model or sub-module in the first neural network model.
The first step can be specifically realized by the following steps A to D:
and step A, determining a projective transformation matrix based on the first image and the second image.
In practical implementation, a projective transformation matrix may be determined according to a first image and a second image obtained, where the first image and the second image generally have the same number of color channels, for example, the first image has R, G, G and B color channels, and the second image also has R, G, G and B color channels; specifically, the step a can be specifically realized by steps a to c:
step a, extracting at least one first characteristic point in the first image and at least one second characteristic point in the second image.
In order to better perform image matching, it is generally required to select representative areas in an image, such as corner points, edge points, bright points of a dark area or dark points of a bright area in the image, and the like, where the first feature points may be corner points, edge points, bright points of a dark area or dark points of a bright area extracted from a first image, and the number of the first feature points may be one or more; the second feature points may be corner points, edge points, bright points of dark areas or dark points of bright areas, and the like extracted from the second image, and the number of the second feature points may be one or more.
In this embodiment, in the color channel of each pixel of the first image, each color channel is preset with a corresponding first weight value; in the color channels of the second image, each color channel is preset with a corresponding second weight value; in actual implementation, the first weight value and the second weight value are usually preset fixed values, and may be specifically set according to actual needs, which is not limited herein, for example, the first image and the second image both have R, G, G and B color channels, in the first image, the first weight value corresponding to the R channel is 0.3, the first weight values corresponding to the two G channels are both 0.2, and the first weight value corresponding to the B channel is 0.3; in the second image, the second weight value corresponding to the R channel is 0.3, the second weight values corresponding to the two G channels are both 0.2, and the second weight value corresponding to the B channel is 0.3.
In the step a, the step of extracting at least one first feature point in the first image may include steps a0 to a 3:
step a0, for each pixel in the first image, multiplying the component of each color channel in the pixel by the first weight value corresponding to the color channel to obtain the first calculation result of each color channel.
A color channel may be understood as a channel that stores image color information; for example, an RGB image has three color channels: an R channel, a G channel and a B channel. The component of a color channel may be understood as the luminance value of that color channel. In practical implementation, the first image typically comprises a plurality of pixels, and each pixel typically has a plurality of color channels; for each pixel, the component of each color channel in the pixel may be multiplied by the corresponding first weight value to obtain the first calculation result of that color channel. For example, suppose the first image has R, G, G and B color channels, the first weight value corresponding to the R channel is 0.3, the first weight values corresponding to the two G channels are both 0.2, and the first weight value corresponding to the B channel is 0.3; and suppose that for one designated pixel the component of the R channel is 150, the components of the two G channels are both 100, and the component of the B channel is 80. Then for that designated pixel the first calculation result for the R channel is 150 × 0.3 = 45, the first calculation result for each of the two G channels is 100 × 0.2 = 20, and the first calculation result for the B channel is 80 × 0.3 = 24.
Step a1, summing the first calculation results of each color channel in the pixel to obtain a first gray value corresponding to the pixel.
After the first calculation result of each color channel in each pixel of the first image is obtained through step a0, the first calculation results of all color channels in the pixel may be added to obtain the first gray value of the pixel. Still taking the designated pixel in step a0 as an example, the first calculation results of its color channels are added to obtain the first gray value corresponding to the designated pixel, that is, 45 + 20 + 20 + 24 = 109.
Step a2, determining a gray-scale map of the first image based on the first gray-scale value corresponding to each pixel in the first image.
After the first gray scale value corresponding to each pixel in the first image is obtained through the above steps a0 and a1, a gray scale map of the first image can be obtained according to the obtained plurality of first gray scale values.
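As a non-limiting illustration of steps a0 to a2, the weighted gray-scale computation can be sketched as follows; the function name, the use of NumPy and the availability of the four RGGB channels as separate planes are assumptions made for the example, and the weights are the example values given above.

```python
import numpy as np

def weighted_grayscale(r, g1, g2, b, weights=(0.3, 0.2, 0.2, 0.3)):
    """Combine per-channel planes of one image into a gray-scale map.

    r, g1, g2, b: 2-D arrays holding the components of each color channel.
    weights: preset per-channel weight values (example values from the text).
    """
    w_r, w_g1, w_g2, w_b = weights
    # Step a0: multiply each channel component by its weight.
    # Step a1: sum the per-channel results to get the gray value per pixel.
    # Step a2: the resulting 2-D array is the gray-scale map.
    return w_r * r + w_g1 * g1 + w_g2 * g2 + w_b * b
```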
Step a3, extracting at least one first feature point from the gray-scale map of the first image.
In actual implementation, one or more first feature points may be extracted from the obtained gray-scale map of the first image with an algorithm such as SIFT (Scale-Invariant Feature Transform) or SuperPoint; for details, reference may be made to the feature point extraction processes of SIFT or SuperPoint in the related art, which are not described herein again. SIFT is a local feature descriptor with scale invariance that can detect key points in an image; SuperPoint is a feature point detection and descriptor extraction method based on self-supervised training.
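Step a3 could, for instance, be realized with OpenCV's SIFT detector as sketched below; SuperPoint is omitted here, and the 16-bit normalization value and function name are assumptions made for the example.

```python
import cv2
import numpy as np

def extract_feature_points(gray_map, max_value=65535.0):
    """Detect SIFT keypoints and descriptors on a gray-scale map."""
    # SIFT in OpenCV expects 8-bit input, so rescale the gray-scale map.
    gray_u8 = np.clip(gray_map / max_value * 255.0, 0, 255).astype(np.uint8)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray_u8, None)
    return keypoints, descriptors
```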
In the step a, the step of extracting at least one second feature point in the second image may include steps a4 to a 7:
step a4, for each pixel in the second image, multiplying the component of each color channel in the pixel by the second weight value corresponding to the color channel to obtain a second calculation result of each color channel.
In practical implementation, the second image also typically includes a plurality of pixels, and each pixel typically has a plurality of color channels; for each pixel, the component of each color channel in the pixel may be multiplied by the corresponding second weight value to obtain the second calculation result of that color channel. For example, suppose the second image also has R, G, G and B color channels, the second weight value corresponding to the R channel is 0.3, the second weight values corresponding to the two G channels are both 0.2, and the second weight value corresponding to the B channel is 0.3; and suppose that for one target pixel the component of the R channel is 100, the components of the two G channels are both 120, and the component of the B channel is 150. Then for that target pixel the second calculation result for the R channel is 100 × 0.3 = 30, the second calculation result for each of the two G channels is 120 × 0.2 = 24, and the second calculation result for the B channel is 150 × 0.3 = 45.
Step a5, summing the second calculation results of each color channel in the pixel to obtain a second gray value corresponding to the pixel.
After the second calculation result of each color channel in each pixel of the second image is obtained through step a4, the second calculation results of all color channels in the pixel may be added to obtain the second gray value of the pixel. Still taking the target pixel in step a4 as an example, the second calculation results of its color channels are added to obtain the second gray value corresponding to the target pixel, that is, 30 + 24 + 24 + 45 = 123.
Step a6, determining a gray map of the second image based on the second gray value corresponding to each pixel in the second image.
When the second gray scale value corresponding to each pixel in the second image is obtained through the above steps a4 and a5, a gray scale map of the second image can be obtained according to the obtained plurality of second gray scale values.
Step a7, extracting at least one second feature point from the gray-scale map of the second image.
In practical implementation, one or more second feature points may be extracted from the obtained grayscale image of the second image based on an algorithm such as SIFT or SuperPoint.
And b, determining at least one matched characteristic point pair based on the at least one first characteristic point and the at least one second characteristic point.
After at least one first feature point in the first image and at least one second feature point in the second image are extracted through step a, feature point matching may be performed. Specifically, based on the at least one first feature point and the at least one second feature point, one or more matching feature point pairs between the first image and the second image may be obtained with an algorithm such as KNN (K-nearest neighbor classification) or RANSAC (RANdom SAmple Consensus); the number of matching feature point pairs is generally not less than four. For details, reference may be made to feature point matching with KNN, RANSAC or other methods in the related art, which is not described herein again. The core idea of KNN is that if most of the K nearest neighbor samples of a sample in the feature space belong to a certain class, the sample also belongs to that class and shares the characteristics of the samples in that class. RANSAC is a random sample consensus method that computes a homography matrix between the two images from the matching points and then uses the reprojection error to judge whether a given match is correct.
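A minimal sketch of step b, assuming OpenCV brute-force KNN matching with a ratio test; the ratio threshold of 0.75 is an assumed example value, and RANSAC-based outlier rejection is deferred to the homography estimation in step c.

```python
import cv2

def match_feature_points(desc1, desc2, kp1, kp2, ratio=0.75):
    """Return coordinate pairs ((x1, y1), (x2, y2)) of matched feature points."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn_matches = matcher.knnMatch(desc1, desc2, k=2)
    pairs = []
    for candidates in knn_matches:
        if len(candidates) < 2:
            continue
        m, n = candidates
        if m.distance < ratio * n.distance:  # keep only unambiguous matches
            pairs.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return pairs
```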
And c, determining a projective transformation matrix based on the at least one matched characteristic point pair.
After at least one matching feature point pair is determined, the projective transformation matrix may be calculated from the matching information of the pairs, such as their coordinates; for details, refer to the process of calculating a projective transformation matrix from matching feature point pairs in the related art, which is not described herein again. Based on the calculated projective transformation matrix, the second image may be projected onto the first image, with the projective transformation applied to each color channel of each pixel in the second image. After the second image is projected onto the first image, the transformed coordinates may be fractional, so interpolation based on the values of neighboring pixels is usually required; reference may be made to interpolation methods in the related art, for example a bicubic interpolation method may be used to reduce the loss of resolution, where resolution may be understood as the sharpness of the projectively transformed second image.
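Step c and the subsequent projection might look as follows, assuming OpenCV; the RANSAC reprojection threshold and the function names are illustrative, and bicubic interpolation is used as suggested above. The first image is taken as the reference, so the homography maps second-image coordinates into first-image coordinates.

```python
import cv2
import numpy as np

def project_second_image(pairs, second_image, first_image_shape):
    """pairs: list of ((x1, y1), (x2, y2)) matches; at least four are needed."""
    src_pts = np.float32([p2 for _, p2 in pairs]).reshape(-1, 1, 2)  # second image
    dst_pts = np.float32([p1 for p1, _ in pairs]).reshape(-1, 1, 2)  # first image
    # Estimate the projective transformation matrix with RANSAC outlier rejection.
    H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    h, w = first_image_shape[:2]
    # Warp the second image (H x W x C array) with bicubic interpolation.
    warped = cv2.warpPerspective(second_image, H, (w, h), flags=cv2.INTER_CUBIC)
    return H, warped
```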
Step B, determining an initial overlapping area of the first image and the second image based on the projective transformation matrix; wherein the initial overlap region comprises: the first sub-overlapping area corresponding to the first image and the second sub-overlapping area corresponding to the second image.
And projecting the second image to the first image based on the projective transformation matrix determined in the above step, wherein after the projective transformation is completed, an initial overlapping region of the first image and the projective transformed second image can be obtained, the initial overlapping region including the first sub-overlapping region of the first image and the second sub-overlapping region of the projective transformed second image.
The step B can be specifically realized by steps h to k:
step h, acquiring a boundary coordinate of a second image; wherein the boundary coordinates are used to indicate an image area of the second image.
The boundary coordinates may be understood as edge coordinates indicating an overall shape of the second image, from which an image area corresponding to the second image may be obtained. In practical implementation, when the initial overlapping area needs to be determined, the boundary coordinates of the second image are generally acquired first, and the number of the boundary coordinates is generally multiple.
And i, determining the boundary coordinates after projective transformation based on the projective transformation matrix and the boundary coordinates of the second image.
When the boundary coordinates of the second image are obtained and the second image is projected to the first image based on the projective transformation matrix, the boundary coordinates of the second image after projective transformation can be obtained; for example, if the boundary coordinates of the second image are four corner coordinates, the four corner coordinates of the second image after projective transformation can be obtained after projective transformation is completed based on the projective transformation matrix.
And j, determining the second image after projective transformation based on the boundary coordinates after projective transformation.
And after determining the boundary coordinates of the second image after projective transformation, determining the second image after projective transformation according to the image area enclosed by the boundary coordinates after projective transformation. For example, taking the boundary coordinates of the second image as four corner coordinates as an example, the image area surrounded by the four corner coordinates of the projectively transformed second image is the projectively transformed second image.
And k, determining the superposed image area of the second image after projection transformation and the first image as an initial superposed area.
And after the projective transformation second image is determined, taking the intersection of the projective transformation second image and the first image to obtain the initial overlapping area.
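Steps h to k can be sketched as follows, assuming the boundary coordinates are the four corner coordinates of the second image and that an axis-aligned bounding box is an acceptable approximation of the overlap area; the function names are illustrative only.

```python
import cv2
import numpy as np

def initial_overlap_box(H, second_shape, first_shape):
    """Return the initial overlap area as (x0, y0, x1, y1) in first-image coordinates."""
    h2, w2 = second_shape[:2]
    # Step h: boundary (corner) coordinates of the second image.
    corners = np.float32([[0, 0], [w2, 0], [w2, h2], [0, h2]]).reshape(-1, 1, 2)
    # Step i: boundary coordinates after projective transformation.
    warped_corners = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
    # Step j: the area enclosed by the warped corners is the transformed second image.
    x0, y0 = warped_corners.min(axis=0)
    x1, y1 = warped_corners.max(axis=0)
    # Step k: intersect with the first-image area to obtain the initial overlap.
    h1, w1 = first_shape[:2]
    ox0, oy0 = max(0, int(np.floor(x0))), max(0, int(np.floor(y0)))
    ox1, oy1 = min(w1, int(np.ceil(x1))), min(h1, int(np.ceil(y1)))
    if ox0 >= ox1 or oy0 >= oy1:
        return None  # no overlap between the two images
    return ox0, oy0, ox1, oy1
```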
And step C, inputting the first sub-overlapping area and the second sub-overlapping area into a second neural network model, and determining the mapping relation between the first pixel value of each pixel in the first sub-overlapping area and the second pixel value of the pixel at the same position in the second sub-overlapping area through the second neural network model.
In practical implementation, after the initial overlap region is determined, all pixel values of the first sub-overlapping area corresponding to the first image and of the second sub-overlapping area corresponding to the second image may be extracted, and the histogram distributions of the pixel values of the two sub-overlapping areas may be calculated separately; based on histogram matching, the histogram distribution of the pixel values of the second sub-overlapping area is transformed to match that of the first sub-overlapping area. This may be implemented by constructing a pixel value mapping table, such as a Look-Up Table (LUT); the first image and the second image contain the same number of color channels, and each color channel typically has its own pixel value mapping table. For example, if the first image and the second image each include R, G, G and B color channels, the first sub-overlapping area and the second sub-overlapping area are input into the pre-trained second neural network model, which determines the mapping relation between the first pixel value of each pixel in the first sub-overlapping area and the second pixel value of the pixel at the same position in the second sub-overlapping area; four 0-65535 LUT tables may be learned under GT (Ground Truth, i.e. the reference labels used to supervise training) supervision, so that four 0-65535 LUT tables are finally calculated by the second neural network model. These four LUT tables are usually different from one another, and the number of LUT tables corresponds to the number of color channels. In the related art, the pixel value mapping table is usually computed with a conventional CPU-based method; since the calculation is usually performed pixel by pixel, it takes a long time and is inefficient, so this embodiment may accelerate that calculation with an NN (Neural Network).
The pixel value mapping table may be understood as a mapping table of pixel gray values, where the mapping table transforms the actually sampled pixel gray values into another corresponding gray value through a certain transformation, such as threshold, inversion, binarization, contrast adjustment, linear transformation, etc., so as to highlight useful information of the image and enhance the optical contrast of the image.
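For reference, the classical histogram-matching construction of one such 0-65535 LUT (the per-pixel CPU computation that the second neural network model is intended to accelerate) can be sketched as follows; the function name and the NumPy-based implementation are assumptions.

```python
import numpy as np

def build_channel_lut(first_overlap_channel, second_overlap_channel, max_value=65535):
    """Build a LUT mapping second-image values to first-image-like values for one channel."""
    bins = max_value + 1
    hist1, _ = np.histogram(first_overlap_channel, bins=bins, range=(0, bins))
    hist2, _ = np.histogram(second_overlap_channel, bins=bins, range=(0, bins))
    cdf1 = np.cumsum(hist1) / max(hist1.sum(), 1)
    cdf2 = np.cumsum(hist2) / max(hist2.sum(), 1)
    # For every possible second-image value, find the first-image value whose
    # cumulative frequency is closest (standard histogram matching).
    lut = np.searchsorted(cdf1, cdf2, side="left").clip(0, max_value)
    return lut.astype(np.uint16)
```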
Step D, obtaining a mapping relation output by the second neural network model;
and E, aiming at each color channel, matching the pixel value of each pixel point in the second image in the color channel with the pixel value of each pixel point in the first image in the color channel based on the mapping relation so as to perform illumination compensation on the second image.
After the mapping relationship output by the second neural network is obtained, for example, after the pixel value mapping table corresponding to each color channel is obtained, the pixel value mapping table may be applied to the entire second image, that is, the value range of each color channel of the second image may be mapped to a distribution similar to that of the first image through the pixel value mapping table, that is, the pixel value of each pixel point in the second image in the color channel may be matched with the pixel value of each pixel point in the first image in the color channel. For example, still taking the example that the first image and the second image both include R, G, G and B color channels, and four LUT tables of 0 to 65535 corresponding to the four color channels are obtained through calculation by the second neural network model, for each color channel, the pixel values of the pixel points of the second image in the color channel can be mapped onto a distribution similar to that of the first image through the LUT table corresponding to the color channel, so as to implement illumination compensation for the second image.
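A minimal sketch of step E under the same assumptions as above: the LUT of each color channel is applied to the whole second image by indexing, so that the per-channel value distribution of the second image approximates that of the first image.

```python
def compensate_illumination(second_channels, luts):
    """second_channels: dict of 2-D uint16 planes, e.g. {'R': ..., 'G1': ..., 'G2': ..., 'B': ...}.
    luts: dict mapping the same channel names to 0-65535 look-up tables."""
    # Indexing a LUT with an integer image applies the mapping to every pixel.
    return {name: luts[name][plane] for name, plane in second_channels.items()}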
The first step can be realized by the following step H:
and H, inputting the first image and the second image into a second neural network model, and carrying out illumination compensation on the second image based on the first image through a second neural network to obtain a second image after the illumination compensation.
In practical implementation, the first image and the second image may be input into a second neural network model trained in advance, the second image is subjected to illumination compensation through the second neural network based on the first image, the first image and the second image subjected to illumination compensation based on histogram matching are used as GT to supervise training of the second neural network model, and finally the second image subjected to illumination compensation is obtained through calculation of the second neural network model.
In the related art, a CPU calculates a corresponding compensation coefficient for each pixel in a picture to be stitched, and then compensates each pixel, and since the CPU needs a certain processing time for calculating each pixel, the time for compensating all pixels in the picture is long, and the processing efficiency of illumination compensation is reduced. In the illumination compensation mode in this embodiment, the pre-trained neural network model is used to perform illumination compensation on the second image, and calculation of each pixel based on the CPU is not required, so that time for compensating all pixels in the image is saved, and processing efficiency of the illumination compensation is improved.
And step S306, determining an initial spliced image based on the first image and the second image after illumination compensation.
And according to the obtained projection transformation matrix, projecting the second image after illumination compensation to the first image and carrying out initial splicing to obtain an initial spliced image spliced by the first image and the second image after illumination compensation.
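A hedged sketch of this initial stitching, assuming OpenCV, an H x W x C array representation and a canvas size chosen in advance; keeping the reference (first) image's pixels in the overlap is one possible convention, since the overlap is refined later by the fusion step.

```python
import cv2

def initial_stitch(first_image, compensated_second, H, canvas_size):
    """Project the illumination-compensated second image and overlay the first image."""
    canvas_w, canvas_h = canvas_size
    stitched = cv2.warpPerspective(compensated_second, H, (canvas_w, canvas_h),
                                   flags=cv2.INTER_CUBIC)
    h1, w1 = first_image.shape[:2]
    stitched[:h1, :w1] = first_image  # reference image placed at the canvas origin
    return stitched
```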
And S308, performing fusion processing on the target overlapping area in the initial spliced image by using the first neural network model to obtain a fusion overlapping area corresponding to the target overlapping area.
Step S310, determining target splicing images corresponding to the first image and the second image based on the fusion overlapping area and the initial splicing image.
The image splicing method includes the steps of obtaining a first image and a second image to be spliced, carrying out illumination compensation on the second image, determining an initial spliced image based on the first image and the second image subjected to illumination compensation, carrying out fusion processing on a target overlapping area in the initial spliced image by using a first neural network model to obtain a fusion overlapping area corresponding to the target overlapping area, and determining a target spliced image corresponding to the first image and the second image based on the fusion overlapping area and the initial spliced image. In the method, after the initial splicing image of the first image and the second image is determined, the first neural network model is adopted to perform fusion processing on the target overlapping area in the initial splicing image to obtain the corresponding fusion overlapping area, and fusion-related calculation of each pixel in the initial splicing image based on a CPU is not needed, so that the time for performing fusion calculation on all pixels in the initial splicing image is saved, the fusion processing efficiency is improved, and the image splicing processing efficiency is further improved.
The embodiment of the invention also provides another image splicing method which is realized on the basis of the method of the embodiment; the method mainly describes a specific process of performing fusion processing on a target overlapping region in an initial spliced image by using a first neural network model to obtain a fusion overlapping region corresponding to the target overlapping region, wherein the target overlapping region comprises the following steps: the third sub-overlapping area corresponding to the first image and the fourth sub-overlapping area corresponding to the second image after illumination compensation; the first neural network model comprises a splicing model and a fusion model; the splicing model can be realized by various convolution neural networks, such as a residual error network, a VGG network and the like; the fusion model can also be realized by various convolution neural networks, such as a residual error network, a VGG network and the like; the splicing model and the fusion model can be sub-modules or sub-models in the first neural network model, and can also be two independent neural network models; as shown in fig. 4, the method includes the steps of:
step S402, a first image and a second image to be spliced are obtained.
Step S404, an initial stitched image of the first image and the second image is determined.
Step S406, inputting the third sub-overlapping area and the fourth sub-overlapping area into a splicing model, and searching for a splicing seam between the third sub-overlapping area and the fourth sub-overlapping area through the splicing model to obtain a splicing seam area corresponding to the third sub-overlapping area and the fourth sub-overlapping area.
In practical implementation, after the initial stitched image is determined, the third sub-overlapping area and the fourth sub-overlapping area contained in its target overlapping area can be determined; the third sub-overlapping area and the fourth sub-overlapping area are input into a pre-trained stitching model, which searches for the seam between the two and outputs the found seam region. The seam region is usually a point set of the seam or a stitching mask, and may also be a seam line. In the related art, traditional methods such as graph cut or Voronoi diagrams are usually adopted to search for the optimal seam line between the third sub-overlapping area and the fourth sub-overlapping area; searching for the optimal seam line reduces the spatial discontinuity at the seam, but generally requires item-by-item calculation on a CPU, which takes a long time. Graph cut is an energy optimization algorithm that can be used in image processing for foreground/background segmentation, stereo vision, image matting and the like. In this embodiment, the traditional calculation can be distilled into the pre-trained stitching model, so calculation by the CPU is not required, hardware acceleration can be realized, the processing speed is increased, and the processing efficiency is improved.
And step S408, fusing the third sub-overlapping region and the fourth sub-overlapping region based on the splicing seam area to obtain an initial fusion overlapping region.
In practical implementation, after the splicing seam area is obtained, for example as an un-feathered point set of the seam, the third sub-overlapping region and the fourth sub-overlapping region may be directly fused based on that point set to obtain the initial fusion overlapping region, for example as sketched below.
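Continuing the illustrative sketch above, an un-feathered seam (one column index per row) might be used to hard-combine the two sub-overlapping regions as follows; the names are placeholders, not the patent's API.

```python
import numpy as np

def fuse_by_seam(overlap_a: np.ndarray, overlap_b: np.ndarray, seam: np.ndarray) -> np.ndarray:
    """Hard (un-feathered) fusion: pixels left of the seam are taken from
    overlap_a, pixels right of it from overlap_b."""
    h, w = overlap_a.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        mask[y, :seam[y]] = True              # region taken from overlap_a
    if overlap_a.ndim == 3:
        mask = mask[..., None]                # broadcast over the channel axis
    return np.where(mask, overlap_a, overlap_b)
```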
Step S410, inputting the initial fusion overlap region, the third sub-overlap region, and the fourth sub-overlap region into a fusion model, and performing fusion processing on the initial fusion overlap region, the third sub-overlap region, and the fourth sub-overlap region through the fusion model to obtain a fusion overlap region.
In the related art, the fusion processing of the overlapping area usually requires item-by-item calculation on a CPU, and the calculation time is long. In this embodiment, in actual implementation, the image fusion processing may be performed based on NN-blending, that is, a neural-network fusion method, so as to improve the processing efficiency. Specifically, the obtained initial fusion overlapping region may be fed into the fusion model together with the two un-fused original crops, that is, the third sub-overlapping region and the fourth sub-overlapping region; the fusion model then performs fusion processing on them, for example as an image-to-image transformation, to obtain the fusion-optimized result. A minimal sketch of such a fusion model follows.
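The sketch below shows what such an NN-blending fusion model could look like (PyTorch); the layer sizes, the 4-channel RGGB input assumption and the residual design are assumptions of this sketch, not the patent's architecture. Predicting a residual correction on top of the initial fusion result is one common design choice, since the initial Alpha-Blending output is usually already close to the desired result.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Illustrative image-to-image fusion model: the initial fusion result and
    the two un-fused overlap crops are concatenated along the channel axis and
    refined by a small CNN that predicts a residual correction."""

    def __init__(self, channels: int = 4):   # 4 channels for an RGGB crop
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels * 3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, initial_fused, overlap_a, overlap_b):
        x = torch.cat([initial_fused, overlap_a, overlap_b], dim=1)
        # Refine the initial fusion result rather than predicting it from scratch.
        return initial_fused + self.body(x)
```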
The fusion model can be obtained by training through the following steps five to eight:
and step five, acquiring a first picture.
And step six, carrying out translation and/or rotation processing on the first picture to obtain a second picture.
In practical implementation, the first picture may be any picture, and the second picture can be obtained by processing the first picture through a small-range translation and/or rotation. That is, during the training of the fusion model, a training sample can be constructed from a single picture: the single picture serves as the first picture, and translating and rotating it within a small range yields the changed picture, namely the second picture.
And seventhly, performing fusion processing on the first picture and the second picture to obtain an initial fusion picture.
After the first picture and the second picture are obtained, the two pictures may be fused along a randomly generated seam, for example by Alpha-Blending, to obtain the initial fusion picture.
And step eight, training a fusion model based on the first picture, the second picture and the initial fusion picture.
The first picture, the second picture and the initial fusion picture are taken together as a training sample to train the fusion model. Specifically, the training sample composed of the first picture, the second picture and the initial fusion picture may be input into the initial fusion model, which outputs a fusion picture optimized with respect to the initial fusion picture; a loss value is determined based on the fusion picture and the first picture, and the weight parameters of the initial fusion model are updated based on the loss value. The step of obtaining the first picture is then executed again until the initial fusion model converges, so as to obtain the fusion model. The loss value can be understood as the difference between the output fusion picture and the first picture; the weight parameters may include all parameters in the initial fusion model, such as convolution kernel parameters, and when the initial fusion model is trained, all of its parameters are updated based on the fusion picture and the first picture. The step of obtaining the training sample is then continued until the initial fusion model, or its loss value, converges, finally yielding the trained fusion model. In actual implementation, the output of the initial fusion model can be supervised by the first picture, the supervision loss is the L1 distance, and the L1 loss within the seam region may be weighted by a factor of 2. An illustrative training sketch follows.
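The following sketch illustrates, under stated assumptions, how a training sample could be constructed from a single picture (steps five and six) and how one training step with the 2x-weighted L1 loss around the seam could look; all names, the shift and angle ranges, and the seam-mask convention are placeholders of this sketch, not the patent's implementation.

```python
import cv2
import numpy as np
import torch

def make_training_pair(first_pic: np.ndarray, max_shift: float = 4.0, max_angle: float = 2.0):
    """Steps five and six: derive the 'second picture' from the first picture by
    a small random rotation and translation (the ranges here are assumptions)."""
    h, w = first_pic.shape[:2]
    angle = np.random.uniform(-max_angle, max_angle)
    tx, ty = np.random.uniform(-max_shift, max_shift, size=2)
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    m[:, 2] += (tx, ty)                       # add the random translation
    return cv2.warpAffine(first_pic, m, (w, h))

def training_step(model, optimizer, first_pic, second_pic, initial_fused, seam_mask):
    """One training step: the fusion model's output is supervised by the first
    picture with an L1 loss, and the loss inside the seam region
    (seam_mask == 1) is weighted by a factor of 2. Tensors are NCHW."""
    pred = model(initial_fused, first_pic, second_pic)
    weight = 1.0 + seam_mask                  # 2x weight where seam_mask == 1
    loss = (weight * (pred - first_pic).abs()).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```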
Step S412, determining the target spliced image corresponding to the first image and the second image based on the fusion overlapping area and the initial spliced image.
In the image stitching method, the first image and the second image to be stitched are obtained; an initial stitched image of the first image and the second image is determined; the third sub-overlapping area and the fourth sub-overlapping area are input into the splicing model to obtain the splicing seam area corresponding to the third sub-overlapping area and the fourth sub-overlapping area; the third sub-overlapping area and the fourth sub-overlapping area are fused based on the splicing seam area to obtain the initial fusion overlapping area; the initial fusion overlapping region, the third sub-overlapping region and the fourth sub-overlapping region are input into the fusion model to obtain the fusion overlapping region; and the target stitched image corresponding to the first image and the second image is determined based on the fusion overlapping region and the initial stitched image. In the method, the first neural network model includes the splicing model and the fusion model: the splicing model searches out the splicing seam area between the third sub-overlapping region corresponding to the first image and the fourth sub-overlapping region corresponding to the illumination-compensated second image, the initial fusion overlapping region is obtained from that seam area, and the fusion model then produces the fusion overlapping region.
The embodiment of the invention also provides another image splicing method, which is implemented on the basis of the method of the foregoing embodiment. This method mainly describes the specific process of performing fusion processing on the target overlapping area in the initial spliced image by using the first neural network model to obtain the fusion overlapping area corresponding to the target overlapping area, and the specific process of determining the target spliced image corresponding to the first image and the second image based on the fusion overlapping area and the initial spliced image. The target overlapping area includes a third sub-overlapping area corresponding to the first image and a fourth sub-overlapping area corresponding to the illumination-compensated second image; the first neural network model includes a splicing model. As shown in fig. 5, the method includes the following steps:
step S502, a first image and a second image to be spliced are obtained.
In practical implementation, the first image and the second image are generally RAW images, and the RAW images are usually subjected to format conversion; that is, the color channels of the first image and the second image are rearranged so as to facilitate subsequent illumination compensation, seam searching and other processing. The Bayer pattern of the original RAW domain is generally processed into an RGGB (R: Red, G: Green, G: Green, B: Blue) arrangement, that is, the color channels of the first image and the second image follow the RGGB arrangement. In this mode, the first image and the second image can both be RAW images and can be stitched directly in the RAW domain, without first being converted into RGB images; compared with RGB images, RAW images retain more image detail, so a RAW stitching result is more suitable for later processing such as image detection and segmentation. A possible rearrangement of the Bayer mosaic into a 4-channel RGGB image is sketched below.
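The following sketch shows one way to pack a single-channel Bayer mosaic into a 4-channel, half-resolution RGGB image; the assumption that R sits at the top-left of each 2x2 cell is specific to this sketch and would have to match the sensor's actual pattern.

```python
import numpy as np

def bayer_to_rggb(raw: np.ndarray) -> np.ndarray:
    """Pack a single-channel Bayer mosaic (assumed RGGB pattern: R at (0,0),
    G at (0,1) and (1,0), B at (1,1)) into a 4-channel RGGB image at half the
    spatial resolution."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.stack([r, g1, g2, b], axis=-1)
```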
Step S504, an initial stitched image of the first image and the second image is determined.
A RAW image is the raw, unprocessed data obtained by a CMOS (Complementary Metal-Oxide-Semiconductor) or CCD (Charge Coupled Device) image sensor converting the captured light signal into a digital signal. Compared with an RGB image, a RAW image retains complete image details and has greater advantages in post-processing, such as adding or subtracting exposure, adjusting highlights and shadows, increasing or decreasing contrast, and adjusting levels and curves. In practical implementation, when images need to be stitched, the first image and the second image that are acquired are generally both RAW images.
Step S506, inputting the third sub-overlapping area and the fourth sub-overlapping area into the splicing model to obtain a splicing seam area corresponding to the third sub-overlapping area and the fourth sub-overlapping area.
Step S508, performing feathering on the splicing seam area to obtain a feathered overlapping area, and determining the feathered overlapping area as the fusion overlapping area.
The feathering treatment blurs the edge of the selected pixel area and mixes the selected area with the surrounding pixels, that is, the junction between the inside and outside of the selection is blurred, so as to achieve a gradual, natural transition. The larger the feathering value, the wider the blurred range and the softer the transition; the smaller the feathering value, the narrower the blurred range; the feathering value can be adjusted according to the actual situation. In actual implementation, after the splicing seam area is obtained, the fusion overlapping area can be obtained through a Blending (fusion) operation; specifically, a feathering effect can be constructed on the seam area obtained by seam search in an Alpha-Blending manner, the feathering width may be chosen in the range of 16 to 22 pixels, and the fusion overlapping area is finally obtained. For details, refer to the Alpha-Blending fusion process in the related art, which is not repeated here. A minimal feathered-blending sketch follows.
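The sketch below shows one way such feathered Alpha-Blending could be implemented, by blurring a hard 0/1 seam mask; the Gaussian blur and the default 19-pixel feather width are assumptions of this sketch.

```python
import cv2
import numpy as np

def feathered_blend(overlap_a, overlap_b, seam_mask, feather_px: int = 19):
    """Alpha-Blending with a feathered seam: the hard 0/1 seam mask is blurred
    so that pixel weights change gradually over roughly `feather_px` pixels."""
    k = feather_px | 1                        # Gaussian kernel size must be odd
    alpha = cv2.GaussianBlur(seam_mask.astype(np.float32), (k, k), 0)
    if overlap_a.ndim == 3:
        alpha = alpha[..., None]              # broadcast over the channel axis
    return alpha * overlap_a.astype(np.float32) + (1.0 - alpha) * overlap_b.astype(np.float32)
```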
And step S510, replacing the target overlapping area in the initial spliced image with the fusion overlapping area to obtain a target spliced image corresponding to the first image and the second image.
In practical implementation, since the first image and the second image are both RAW images, that is, images without lossy compression, the target stitched image is also a RAW image; that is, after the target overlapping region in the initial stitched image is replaced with the fusion overlapping region, the stitched RAW image, namely the target stitched image, is obtained.
After the target stitched image is obtained, analysis processing such as detection, segmentation or recognition can be carried out on it to obtain the corresponding analysis result; because the stitching result is a RAW image, this enables intelligent image analysis directly on RAW data. The target stitched image can also be processed by core ISP (Image Signal Processing) algorithms such as Demosaic to obtain the corresponding RGB stitching result. The ISP is mainly used for post-processing the signal output by the front-end image sensor; its main functions include linear correction, noise removal, dead pixel removal, interpolation, white balance and automatic exposure control. Through such processing the ISP can output an image in the RGB domain, giving a stitching result that is convenient to visualize; a minimal demosaicing sketch is given below.
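As a minimal example of the Demosaic step, OpenCV's Bayer conversion could be applied to the stitched RAW result once it has been unpacked back into a single-channel Bayer mosaic; the particular Bayer constant below is an assumption of this sketch and would have to match the sensor's actual pattern.

```python
import cv2

def raw_to_rgb(raw_bayer):
    """Demosaic a single-channel Bayer mosaic (8- or 16-bit) into an RGB image
    for visualization. COLOR_BayerBG2RGB is one of several Bayer constants;
    the right one depends on the actual Bayer layout of the data."""
    return cv2.cvtColor(raw_bayer, cv2.COLOR_BayerBG2RGB)
```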
In the image stitching method, the first image and the second image to be stitched are obtained; the third sub-overlapping area and the fourth sub-overlapping area are input into the splicing model to obtain the splicing seam area corresponding to the third sub-overlapping area and the fourth sub-overlapping area; the splicing seam area is feathered to obtain the feathered overlapping area, which is determined as the fusion overlapping area; and the target overlapping area in the initial stitched image is replaced with the fusion overlapping area to obtain the target stitched image corresponding to the first image and the second image. In the method, the first neural network model includes the splicing model: the splicing model searches out the splicing seam area between the third sub-overlapping region corresponding to the first image and the fourth sub-overlapping region corresponding to the illumination-compensated second image, and the fusion overlapping region is obtained through feathering.
The embodiment of the invention also provides an illumination compensation method, which comprises the following steps:
step 602, a first image and a second image to be stitched are obtained.
And step 604, performing illumination compensation on the second image based on the second neural network model and the first image.
The step 604 may be specifically implemented by the following steps eleven to fifteen:
step eleven, determining a projective transformation matrix based on the first image and the second image.
The eleventh step can be specifically realized by the following steps M to O (an illustrative sketch is given after step O):
and step M, extracting at least one first characteristic point in the first image and at least one second characteristic point in the second image.
And N, determining at least one matched characteristic point pair based on the at least one first characteristic point and the at least one second characteristic point.
And step O, determining a projective transformation matrix based on the at least one matched characteristic point pair.
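An illustrative implementation of steps M to O using OpenCV is sketched below; SIFT features, the ratio-test threshold and the RANSAC reprojection error are choices of this sketch, and 8-bit inputs are assumed (the patent itself operates on RAW/RGGB data).

```python
import cv2
import numpy as np

def estimate_homography(first_img, second_img):
    """Steps M to O as a sketch: extract feature points in both images, match
    them, and estimate the projective transformation matrix with RANSAC."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(first_img, None)
    kp2, des2 = sift.detectAndCompute(second_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des2, des1, k=2)
    # Lowe's ratio test keeps only confident matched feature point pairs.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src_pts = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # H maps points of the second image into the first image's coordinates.
    H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    return H
```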
Step twelve, determining an initial overlapping area of the first image and the second image based on the projective transformation matrix; wherein the initial overlap region comprises: the first sub-overlapping area corresponding to the first image and the second sub-overlapping area corresponding to the second image.
This step twelve can be specifically realized by the following steps P to S (an illustrative sketch is given after step S):
step P, acquiring a boundary coordinate of a second image; wherein the boundary coordinates are used to indicate an image area of the second image.
And Q, determining the boundary coordinates after projective transformation based on the projective transformation matrix and the boundary coordinates of the second image.
And step R, determining the second image after projective transformation based on the boundary coordinates after projective transformation.
And S, determining a superposed image area of the second image after projection transformation and the first image as an initial superposed area.
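An illustrative implementation of steps P to S is sketched below; for simplicity it returns the overlap as an axis-aligned bounding box in the first image's coordinates rather than an exact polygon, which is a simplification of this sketch.

```python
import cv2
import numpy as np

def initial_overlap_region(h_matrix, first_shape, second_shape):
    """Steps P to S as a sketch: project the second image's boundary
    coordinates with the projective transformation matrix and intersect the
    resulting footprint with the first image's area. Returns the overlap as a
    bounding box (x0, y0, x1, y1) in the first image's coordinates, or None."""
    h2, w2 = second_shape[:2]
    corners = np.float32([[0, 0], [w2, 0], [w2, h2], [0, h2]]).reshape(-1, 1, 2)
    projected = cv2.perspectiveTransform(corners, h_matrix).reshape(-1, 2)

    h1, w1 = first_shape[:2]
    x0 = max(0, int(np.floor(projected[:, 0].min())))
    y0 = max(0, int(np.floor(projected[:, 1].min())))
    x1 = min(w1, int(np.ceil(projected[:, 0].max())))
    y1 = min(h1, int(np.ceil(projected[:, 1].max())))
    if x0 >= x1 or y0 >= y1:
        return None                           # the two images do not overlap
    return (x0, y0, x1, y1)
```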
And step thirteen, inputting the first sub-overlapping area and the second sub-overlapping area into a second neural network model, and determining the mapping relation between the first pixel value of each pixel in the first sub-overlapping area and the second pixel value of the pixel at the same position in the second sub-overlapping area through the second neural network model.
And step fourteen, acquiring the mapping relation output by the second neural network model.
And step fifteen, for each color channel, matching, based on the mapping relation, the pixel value of each pixel in the second image in that color channel with the pixel value of the corresponding pixel in the first image in that color channel, so as to perform illumination compensation on the second image; an illustrative sketch follows.
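Assuming the mapping relation output by the second neural network model is materialised as one 0-65535 look-up table per RGGB channel (an assumption of this sketch), step fifteen could be applied as follows.

```python
import numpy as np

def apply_channel_luts(second_rggb: np.ndarray, luts: np.ndarray) -> np.ndarray:
    """Apply a per-channel mapping to the second image so that its pixel
    values match the first image's. `second_rggb` is an HxWx4 uint16 RGGB
    image; `luts` is a (4, 65536) array where luts[c][v] is the compensated
    value of pixel value v in channel c."""
    out = np.empty_like(second_rggb)
    for c in range(second_rggb.shape[-1]):
        out[..., c] = luts[c][second_rggb[..., c]]
    return out
```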
Alternatively, step 604 may be realized by the following step twenty:
Step twenty, inputting the first image and the second image into the second neural network model, and performing illumination compensation on the second image through the second neural network model based on the first image to obtain the illumination-compensated second image. In this embodiment, the detailed implementation of each step may refer to the related descriptions in the foregoing embodiments and is not repeated here.
According to the illumination compensation method, after the first image and the second image to be spliced are obtained, illumination compensation is carried out on the second image based on the second neural network model and the first image. In the method, the pre-trained neural network model is adopted to perform illumination compensation on the second image, and each pixel does not need to be calculated based on a CPU (central processing unit), so that the time for compensating all pixels in the image is saved, and the processing efficiency of the illumination compensation is improved.
To further understand the above embodiments, a flowchart of a RAW domain image stitching method is shown in fig. 6. For convenience of description, two images are taken as an example, denoted the tar image (corresponding to the first image) and the src image (corresponding to the second image); multiple images can also be stitched in this way. First, the Bayer-array image of the original RAW domain is converted into the RGGB arrangement: the bayer-tar image is format-converted into an rggb-tar image, and the bayer-src image is format-converted into an rggb-src image.
Spatial registration of the rggb-tar image and the rggb-src image is then performed. Specifically, feature points are extracted from the two images, for example using conventional SIFT features or a neural-network-based method such as SuperPoint; the feature points are then matched to obtain matched feature point pairs, the projective transformation matrix is calculated from the matching information of these pairs, the projected area of the rggb-src image is calculated based on the projective transformation matrix and the boundary coordinates of the rggb-src image, and the overlapping area (corresponding to the initial overlapping area) is obtained by intersecting it with the rggb-tar image.
Then, illumination compensation can be carried out on the rggb-src image based on a pre-trained neural network model; illumination compensation optimizes the color consistency between images from different cameras. Two modes may be adopted. In one mode, the two crops of the overlapping area of the rggb-tar image and the rggb-src image are input into the neural network model, which learns a look-up table (LUT) over the 0-65535 range as the supervision target (GT); finally four 0-65535 LUTs (presumably one per RGGB channel) are calculated by the neural network model, and illumination compensation is performed on the rggb-src image based on these LUTs. In the other mode, the rggb-tar image and the rggb-src image are input into the neural network model, with the original rggb-tar image as GT, supervising against the rggb-src image that has been illumination-compensated by histogram matching; finally the illumination-compensated rggb-src image is calculated directly by the neural network model.
After the illumination-compensated rggb-src image is obtained, projective transformation processing can be performed on the rggb-tar image and the illumination-compensated rggb-src image; specifically, the illumination-compensated rggb-src image is projected onto the rggb-tar image based on the projective transformation matrix to obtain the initial stitched image. A new overlapping area is then re-extracted based on the illumination-compensated rggb-src image, and seam search and blending (fusion) are performed on the two crops of this new overlapping area of the rggb-tar and rggb-src images. Specifically, the two crops of the new overlapping area may be input into a pre-trained splicing model, which outputs a stitching mask or a seam point set (corresponding to the splicing seam area); the fused overlapping area (corresponding to the fusion overlapping area) is then obtained through Alpha-blending fusion or through the pre-trained fusion model. Finally the overlapping area is replaced: the overlapping area in the initial stitched image is replaced with the fused overlapping area to obtain the RAW domain stitching result. Optionally, after the RAW domain stitching result is obtained, intelligent RAW domain image analysis may be performed on it, and an RGB visualization result can also be obtained through ISP steps such as Demosaic.
The processes of illumination compensation, seam searching, fusion and the like in the RAW domain image splicing method can be realized based on a neural network, so that hardware acceleration can be realized, the processing speed is increased, and the processing efficiency is improved.
An embodiment of the present invention further provides a schematic structural diagram of an image stitching apparatus, as shown in fig. 7, the apparatus includes: an obtaining module 70, configured to obtain a first image and a second image to be stitched; a first determining module 71, configured to determine an initial stitched image of the first image and the second image; the fusion module 72 is configured to perform fusion processing on the target overlapping area in the initial stitched image by using the first neural network model to obtain a fusion overlapping area corresponding to the target overlapping area; and a second determining module 73, configured to determine a target stitched image corresponding to the first image and the second image based on the fusion overlapping region and the initial stitched image.
The image splicing device firstly acquires a first image and a second image to be spliced; determining an initial spliced image of the first image and the second image; then, carrying out fusion processing on the target overlapping area in the initial splicing image by using a first neural network model to obtain a fusion overlapping area corresponding to the target overlapping area; and finally, determining target spliced images corresponding to the first image and the second image based on the fusion overlapping area and the initial spliced image. According to the device, after the initial splicing images of the first image and the second image are determined, the first neural network model is adopted to perform fusion processing on the target overlapping area in the initial splicing images to obtain the corresponding fusion overlapping area, fusion-related calculation is not needed to be performed on each pixel in the initial splicing images based on a CPU (central processing unit), the time for performing fusion calculation on all pixels in the initial splicing images is saved, the fusion processing efficiency is improved, and the image splicing processing efficiency is further improved.
Further, the apparatus is further configured to: performing illumination compensation on the second image; correspondingly, the first determining module is further configured to: and determining an initial spliced image based on the first image and the second image after illumination compensation.
Further, the first determining module is further configured to: and performing illumination compensation on the second image based on the second neural network model and the first image.
Further, the first determining module is further configured to: determining a projective transformation matrix based on the first image and the second image; determining an initial overlapping region of the first image and the second image based on the projective transformation matrix; wherein the initial overlap region comprises: a first sub-overlapping area corresponding to the first image and a second sub-overlapping area corresponding to the second image; inputting the first sub-overlapping area and the second sub-overlapping area into a second neural network model, and determining the mapping relation between the first pixel value of each pixel in the first sub-overlapping area and the second pixel value of the pixel at the same position in the second sub-overlapping area through the second neural network model; obtaining a mapping relation output by a second neural network model; and matching the pixel value of each pixel point in the second image in the color channel with the pixel value of each pixel point in the first image in the color channel based on the mapping relation aiming at each color channel so as to perform illumination compensation on the second image.
Further, the first determining module is further configured to: acquiring boundary coordinates of a second image; wherein the boundary coordinates are used to indicate an image area of the second image; determining boundary coordinates after projective transformation based on the projective transformation matrix and the boundary coordinates of the second image; determining a projectively transformed second image based on the projectively transformed boundary coordinates; and determining a superposed image area of the second image after projective transformation and the first image as an initial superposed area.
Further, the first determining module is further configured to: extracting at least one first characteristic point in the first image and at least one second characteristic point in the second image; determining at least one matching feature point pair based on the at least one first feature point and the at least one second feature point; determining a projective transformation matrix based on the at least one matching characteristic point pair.
Further, the first determining module is further configured to: and inputting the first image and the second image into a second neural network model, and performing illumination compensation on the second image through a second neural network based on the first image to obtain a second image after the illumination compensation.
Further, the target overlap region includes: the third sub-overlapping area corresponding to the first image and the fourth sub-overlapping area corresponding to the second image after illumination compensation; the first neural network model comprises a splicing model and a fusion model; the fusion module is further configured to: inputting the third sub-overlapping area and the fourth sub-overlapping area into a splicing model, and searching for a splicing seam between the third sub-overlapping area and the fourth sub-overlapping area through the splicing model to obtain a splicing seam area corresponding to the third sub-overlapping area and the fourth sub-overlapping area; fusing the third sub-overlapping area and the fourth sub-overlapping area based on the splicing seam area to obtain an initial fusion overlapping area; and inputting the initial fusion overlapping area, the third sub-overlapping area and the fourth sub-overlapping area into a fusion model, and performing fusion processing on the initial fusion overlapping area, the third sub-overlapping area and the fourth sub-overlapping area through the fusion model to obtain a fusion overlapping area.
Further, the target overlap region includes: the third sub-overlapping area corresponding to the first image and the fourth sub-overlapping area corresponding to the second image after illumination compensation; the first neural network model comprises a stitching model; the fusion module is further configured to: inputting the third sub-overlapping area and the fourth sub-overlapping area into a splicing model to obtain a splicing area corresponding to the third sub-overlapping area and the fourth sub-overlapping area; and performing feathering treatment on the seam splicing region to obtain an overlapped region after feathering, and determining the overlapped region after feathering as a fusion overlapped region.
Further, the fusion module is further configured to: acquiring a first picture; carrying out translation and/or rotation processing on the first picture to obtain a second picture; performing fusion processing on the first picture and the second picture to obtain an initial fusion picture; and training a fusion model based on the first picture, the second picture and the initial fusion picture.
Further, the second determining module is further configured to: and replacing the target overlapping area in the initial spliced image with the fusion overlapping area to obtain a target spliced image corresponding to the first image and the second image.
Further, the color channels of the first image and the second image are arranged in an RGGB manner.
The image stitching device provided by the embodiment of the invention has the same implementation principle and technical effect as the image stitching method embodiment, and for brief description, the image stitching device embodiment can refer to the corresponding content in the image stitching method embodiment.
The embodiment of the invention also provides electronic equipment which comprises processing equipment and a storage device, wherein the storage device stores a computer program, and the computer program executes the image stitching method according to any one of the above items when the computer program is run by the processing equipment.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the electronic device described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
The embodiment of the invention also provides a computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processing device, the steps of the image stitching method are executed.
The image stitching method, the image stitching device and the computer program product of the electronic device provided by the embodiment of the present invention include a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the method described in the foregoing method embodiment, and specific implementation may refer to the method embodiment, and will not be described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and/or devices may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (15)

1. An image stitching method, characterized in that the method comprises:
acquiring a first image and a second image to be spliced;
determining an initial stitched image of the first image and the second image;
performing fusion processing on a target overlapping area in the initial splicing image by using a first neural network model to obtain a fusion overlapping area corresponding to the target overlapping area;
and determining a target spliced image corresponding to the first image and the second image based on the fusion overlapping area and the initial spliced image.
2. The method of claim 1, wherein prior to determining an initial stitched image of the first image and the second image, the method further comprises:
performing illumination compensation on the second image;
correspondingly, the determining an initial stitched image of the first image and the second image includes:
and determining the initial spliced image based on the first image and the illumination-compensated second image.
3. The method of claim 2, wherein the step of illumination compensating the second image comprises:
illumination compensation is performed on the second image based on a second neural network model and the first image.
4. The method of claim 3, wherein the illumination compensating the second image based on the second neural network model and the first image comprises:
determining a projective transformation matrix based on the first image and the second image;
determining an initial overlapping region of the first image and the second image based on the projective transformation matrix; wherein the initial overlap region comprises: a first sub-overlapping area corresponding to the first image and a second sub-overlapping area corresponding to the second image;
inputting the first sub-overlapping area and the second sub-overlapping area into the second neural network model, and determining a mapping relation between a first pixel value of each pixel in the first sub-overlapping area and a second pixel value of a pixel at the same position in the second sub-overlapping area through the second neural network model;
obtaining the mapping relation output by the second neural network model;
and matching the pixel value of each pixel point in the second image in the color channel with the pixel value of each pixel point in the first image in the color channel based on the mapping relation so as to perform illumination compensation on the second image.
5. The method of claim 4, wherein the step of determining an initial overlap region of the first image and the second image based on the projective transformation matrix comprises:
acquiring boundary coordinates of the second image; wherein the boundary coordinates are used to indicate an image area of the second image;
determining projective transformation boundary coordinates based on the projective transformation matrix and the boundary coordinates of the second image;
determining a projectively transformed second image based on the projectively transformed boundary coordinates;
and determining a superposed image area of the projectively transformed second image and the first image as the initial overlapping area.
6. The method of claim 4, wherein the step of determining a projective transformation matrix based on the first image and the second image comprises:
extracting at least one first feature point in the first image and at least one second feature point in the second image;
determining at least one matching pair of feature points based on the at least one first feature point and the at least one second feature point;
determining a projective transformation matrix based on the at least one matching feature point pair.
7. The method of claim 3, wherein the step of illumination compensating the second image based on the second neural network model and the first image comprises:
inputting the first image and the second image into the second neural network model, and performing illumination compensation on the second image through the second neural network based on the first image to obtain the second image after illumination compensation.
8. The method of claim 1, wherein the target overlap region comprises: a third sub-overlapping area corresponding to the first image and a fourth sub-overlapping area corresponding to the second image after illumination compensation; the first neural network model comprises a splicing model and a fusion model; the step of performing fusion processing on the target overlapping area in the initial stitched image by using the first neural network model to obtain a fusion overlapping area corresponding to the target overlapping area comprises:
inputting the third sub-overlapping area and the fourth sub-overlapping area into the splicing model, and searching for a splicing seam between the third sub-overlapping area and the fourth sub-overlapping area through the splicing model to obtain a splicing seam area corresponding to the third sub-overlapping area and the fourth sub-overlapping area;
fusing the third sub-overlapping region and the fourth sub-overlapping region based on the seam splicing region to obtain an initial fusion overlapping region;
inputting the initial fusion overlapping region, the third sub-overlapping region and the fourth sub-overlapping region into the fusion model, and performing fusion processing on the initial fusion overlapping region, the third sub-overlapping region and the fourth sub-overlapping region through the fusion model to obtain the fusion overlapping region.
9. The method of claim 1, wherein the target overlap region comprises: a third sub-overlapping area corresponding to the first image and a fourth sub-overlapping area corresponding to the second image after illumination compensation; the first neural network model comprises a stitching model; the step of performing fusion processing on the target overlapping area in the initial stitched image by using the first neural network model to obtain a fusion overlapping area corresponding to the target overlapping area comprises:
inputting the third sub-overlapping area and the fourth sub-overlapping area into the splicing model to obtain a splicing seam area corresponding to the third sub-overlapping area and the fourth sub-overlapping area;
and performing feathering treatment on the seam splicing region to obtain a feathered overlapping region, and determining the feathered overlapping region as the fusion overlapping region.
10. The method of claim 8, wherein the fusion model is trained by:
acquiring a first picture;
carrying out translation and/or rotation processing on the first picture to obtain a second picture;
performing fusion processing on the first picture and the second picture to obtain an initial fusion picture;
training the fusion model based on the first picture, the second picture and the initial fusion picture.
11. The method according to claim 1, wherein the step of determining the target stitched image corresponding to the first image and the second image based on the fused overlapping region and the initial stitched image comprises:
and replacing the target overlapping area in the initial spliced image with the fusion overlapping area to obtain the target spliced image corresponding to the first image and the second image.
12. The method according to claim 1, wherein the arrangement of the color channels of the first image and the second image is an RGGB arrangement.
13. An image stitching device, characterized in that the device comprises:
the acquisition module is used for acquiring a first image and a second image to be spliced;
a first determining module for determining an initial stitched image of the first image and the second image;
the fusion module is used for carrying out fusion processing on a target overlapping area in the initial splicing image by utilizing a first neural network model to obtain a fusion overlapping area corresponding to the target overlapping area;
and the second determining module is used for determining a target spliced image corresponding to the first image and the second image based on the fusion overlapping area and the initial spliced image.
14. An electronic device, comprising a processing device and a storage means, the storage means storing a computer program which, when executed by the processing device, performs the image stitching method according to any one of claims 1-12.
15. A machine-readable storage medium, characterized in that the machine-readable storage medium stores a computer program which, when executed by a processing device, performs the image stitching method according to any one of claims 1-12.
CN202110990427.0A 2021-08-26 2021-08-26 Image splicing method and device and electronic equipment Pending CN113902657A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110990427.0A CN113902657A (en) 2021-08-26 2021-08-26 Image splicing method and device and electronic equipment
PCT/CN2022/102342 WO2023024697A1 (en) 2021-08-26 2022-06-29 Image stitching method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110990427.0A CN113902657A (en) 2021-08-26 2021-08-26 Image splicing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113902657A true CN113902657A (en) 2022-01-07

Family

ID=79188035

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110990427.0A Pending CN113902657A (en) 2021-08-26 2021-08-26 Image splicing method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN113902657A (en)
WO (1) WO2023024697A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023024697A1 (en) * 2021-08-26 2023-03-02 北京旷视科技有限公司 Image stitching method and electronic device
CN116055659A (en) * 2023-01-10 2023-05-02 如你所视(北京)科技有限公司 Original image processing method and device, electronic equipment and storage medium
CN117544862A (en) * 2024-01-09 2024-02-09 北京大学 Image stitching method based on image moment parallel processing

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116167921B (en) * 2023-04-21 2023-07-11 深圳市南天门网络信息有限公司 Method and system for splicing panoramic images of flight space capsule
CN117372252B (en) * 2023-12-06 2024-02-23 国仪量子技术(合肥)股份有限公司 Image stitching method and device, storage medium and electronic equipment
CN117575976B (en) * 2024-01-12 2024-04-19 腾讯科技(深圳)有限公司 Image shadow processing method, device, equipment and storage medium
CN117680977B (en) * 2024-02-04 2024-04-16 季华实验室 Robot feeding, splicing and aligning method, device, equipment and medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101882308A (en) * 2010-07-02 2010-11-10 上海交通大学 Method for improving accuracy and stability of image mosaic
US9940695B2 (en) * 2016-08-26 2018-04-10 Multimedia Image Solution Limited Method for ensuring perfect stitching of a subject's images in a real-site image stitching operation
CN109523491A (en) * 2018-12-13 2019-03-26 深圳市路畅智能科技有限公司 Method and apparatus are uniformed for looking around the illumination of looking around that auxiliary is parked
CN110473143B (en) * 2019-07-23 2023-11-10 平安科技(深圳)有限公司 Three-dimensional MRA medical image stitching method and device and electronic equipment
CN112862685B (en) * 2021-02-09 2024-02-23 北京迈格威科技有限公司 Image stitching processing method, device and electronic system
CN113902657A (en) * 2021-08-26 2022-01-07 北京旷视科技有限公司 Image splicing method and device and electronic equipment

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023024697A1 (en) * 2021-08-26 2023-03-02 北京旷视科技有限公司 Image stitching method and electronic device
CN116055659A (en) * 2023-01-10 2023-05-02 如你所视(北京)科技有限公司 Original image processing method and device, electronic equipment and storage medium
CN116055659B (en) * 2023-01-10 2024-02-20 如你所视(北京)科技有限公司 Original image processing method and device, electronic equipment and storage medium
CN117544862A (en) * 2024-01-09 2024-02-09 北京大学 Image stitching method based on image moment parallel processing
CN117544862B (en) * 2024-01-09 2024-03-29 北京大学 Image stitching method based on image moment parallel processing

Also Published As

Publication number Publication date
WO2023024697A1 (en) 2023-03-02

Similar Documents

Publication Publication Date Title
WO2023024697A1 (en) Image stitching method and electronic device
CN110910486B (en) Indoor scene illumination estimation model, method and device, storage medium and rendering method
CN107301620B (en) Method for panoramic imaging based on camera array
CN111062905A (en) Infrared and visible light fusion method based on saliency map enhancement
CN112270688B (en) Foreground extraction method, device, equipment and storage medium
CN111402146A (en) Image processing method and image processing apparatus
WO2023011013A1 (en) Splicing seam search method and apparatus for video image, and video image splicing method and apparatus
CN108171735B (en) Billion pixel video alignment method and system based on deep learning
CN108616700B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108846807A (en) Light efficiency processing method, device, terminal and computer readable storage medium
CN108111768A (en) Control method, apparatus, electronic equipment and the computer readable storage medium of focusing
CN109712177A (en) Image processing method, device, electronic equipment and computer readable storage medium
CN116681636B (en) Light infrared and visible light image fusion method based on convolutional neural network
CN113630549A (en) Zoom control method, device, electronic equipment and computer-readable storage medium
CN114627034A (en) Image enhancement method, training method of image enhancement model and related equipment
CN113298177B (en) Night image coloring method, device, medium and equipment
Song et al. Real-scene reflection removal with raw-rgb image pairs
CN114372931A (en) Target object blurring method and device, storage medium and electronic equipment
Zheng et al. Overwater image dehazing via cycle-consistent generative adversarial network
Fu et al. Image stitching techniques applied to plane or 3-D models: a review
CN116310105B (en) Object three-dimensional reconstruction method, device, equipment and storage medium based on multiple views
CN111738964A (en) Image data enhancement method based on modeling
KR20210057925A (en) Streaming server and method for object processing in multi-view video using the same
Guo et al. Low-light color imaging via cross-camera synthesis
Van Vo et al. High dynamic range video synthesis using superpixel-based illuminance-invariant motion estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination