CN117544862B - Image stitching method based on image moment parallel processing - Google Patents

Image stitching method based on image moment parallel processing

Info

Publication number
CN117544862B
Authority
CN
China
Prior art keywords
image
moment
camera
preset
overlapping area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410029459.8A
Other languages
Chinese (zh)
Other versions
CN117544862A (en)
Inventor
李昌晋
袁继权
曹昕妍
任伟
林金龙
曹喜信
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University
Priority to CN202410029459.8A
Publication of CN117544862A
Application granted
Publication of CN117544862B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387 Composing, repositioning or otherwise geometrically modifying originals
    • H04N1/3876 Recombination of partial images to recreate the original image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Abstract

The present disclosure relates to the field of image data processing or generation technologies, and in particular to an image stitching method based on image moment parallel processing. The method comprises the following steps: S100, acquiring an initial overlapping region image P_{1,0} of the first image, the first image being an image acquired by the first camera at the target time; S200, acquiring an initial overlapping region image P_{2,0} of the second image, the second image being an image acquired by the second camera at the target time; S300, acquiring the k-l order image moment m^{k,l}_{1,0} of P_{1,0} and the k-l order image moment m^{k,l}_{2,0} of P_{2,0}; S400, if |m^{k,l}_{1,0} - m^{k,l}_{2,0}| ≤ ε_0, determining P_{1,0} and P_{2,0} as the overlapping area of the first image and the second image and performing image stitching of the first image and the second image. The invention can improve the quality of the image stitching result.

Description

Image stitching method based on image moment parallel processing
Technical Field
The invention relates to the technical field of general image data processing or generation, in particular to an image stitching method based on image moment parallel processing.
Background
An array camera comprises a plurality of cameras arranged in a preset manner, and the acquisition areas of any two adjacent cameras overlap; by stitching the images acquired by the array camera at the target time, an image with a larger field of view can be obtained. For any two adjacent cameras of the array camera, such as a first camera and a second camera, the first image shot by the first camera at the target time and the second image shot by the second camera at the target time can be stitched with an image stitching method based on their real overlapping area. However, how to improve the accuracy of the obtained overlapping area, and thereby improve the quality of the image stitching result, is a problem to be solved.
Disclosure of Invention
The invention aims to provide an image stitching method based on image moment parallel processing, which can improve the accuracy of an acquired overlapping area and further improve the quality of an image stitching result.
According to the invention, an image stitching method based on image moment parallel processing is provided, which comprises the following steps:
S100, acquiring an initial overlapping region image P_{1,0} of the first image; the first image is an image acquired by the first camera at the target time.
S200, acquiring an initial overlapping region image P_{2,0} of the second image; the second image is an image acquired by the second camera at the target time, and the first camera and the second camera are two adjacent cameras in the array camera; the initial overlapping area of the first image is the region of the first image, on the side close to the second camera, that comprises a_0 columns of pixel points; the initial overlapping area of the second image is the region of the second image, on the side close to the first camera, that comprises a_0 columns of pixel points; a_0 is the preset number of initially overlapping pixel point columns.
S300, acquiring the k-l order image moment m^{k,l}_{1,0} of P_{1,0} and the k-l order image moment m^{k,l}_{2,0} of P_{2,0}; k is a preset image moment order, and l is a preset vertical moment order.
S400, if |m^{k,l}_{1,0} - m^{k,l}_{2,0}| ≤ ε_0, determining P_{1,0} and P_{2,0} as the overlapping area of the first image and the second image and performing image stitching of the first image and the second image; ε_0 is a preset image moment difference threshold.
Compared with the prior art, the invention has at least the following beneficial effects:
the invention acquires the initial overlapping region image P of the first image 1,0 And an initial overlap region image P of the second image 2,0 The initial overlapping region is defined by a predetermined initial overlapping pixel point column number a 0 The invention obtains P separately for the purpose of judging whether the initial overlapping area is the overlapping area of the first image and the second image 1,0 And P 2,0 Of the k-l order image moment of (if P) 1,0 And P 2,0 If the difference of the k-l order image moment is smaller than or equal to a preset image moment difference threshold, judging the initial overlapping area as the overlapping area of the first image and the second image, and performing image stitching of the first image and the second image based on the initial overlapping area. The invention judges whether the initial overlapping area is the overlapping area of the first image and the second image before the first image and the second image are spliced, improves the accuracy of the acquired overlapping area, and can splice the images of the first image and the second image only when the initial overlapping area is adjusted to be the overlapping area of the first image and the second image, thereby avoiding the problem of lower splicing quality of the images caused by taking the non-overlapping area as the overlapping area and improving the acquired imagesQuality of the splice result.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an image stitching method based on image moment parallel processing according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
According to the present invention, there is provided an image stitching method based on image moment parallel processing, as shown in fig. 1, comprising the steps of:
S100, acquiring an initial overlapping region image P_{1,0} of the first image; the first image is an image acquired by the first camera at the target time.
S200, acquiring an initial overlapping region image P_{2,0} of the second image; the second image is an image acquired by the second camera at the target time, and the first camera and the second camera are two adjacent cameras in the array camera; the initial overlapping area of the first image is the region of the first image, on the side close to the second camera, that comprises a_0 columns of pixel points; the initial overlapping area of the second image is the region of the second image, on the side close to the first camera, that comprises a_0 columns of pixel points; a_0 is the preset number of initially overlapping pixel point columns.
In this embodiment, the first camera and the second camera are two adjacent cameras placed side by side in the horizontal direction. Taking the pixel point at the lower left corner of an image as the origin, the horizontal rightward direction as the positive x-axis direction, and the direction perpendicular to the x-axis and pointing upward as the positive y-axis direction, if the second camera is located on the right side of the first camera, then the columns of pixel points with the largest x-coordinates in the image shot by the first camera are the pixel points that overlap with the image shot by the second camera, and the columns of pixel points with the smallest x-coordinates in the image shot by the second camera are the pixel points that overlap with the image shot by the first camera.
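As an illustration of S100 and S200 under this left/right camera layout, the following minimal Python sketch extracts the two initial overlapping region images from the full camera frames. The function name and the use of NumPy arrays are illustrative assumptions, not code from the patent.

```python
import numpy as np

def initial_overlap_regions(first_image: np.ndarray,
                            second_image: np.ndarray,
                            a0: int):
    """S100/S200 sketch: P_{1,0} is the a0 columns of the first image nearest the
    second camera; P_{2,0} is the a0 columns of the second image nearest the first camera."""
    p_1_0 = first_image[:, -a0:]   # rightmost a0 columns (second camera is on the right)
    p_2_0 = second_image[:, :a0]   # leftmost a0 columns
    return p_1_0, p_2_0
```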
Optionally, a_0 is an empirical value.
S300, acquiring the k-l order image moment m^{k,l}_{1,0} of P_{1,0} and the k-l order image moment m^{k,l}_{2,0} of P_{2,0}; k is a preset image moment order, and l is a preset vertical moment order.
Optionally, before acquiring the k-l order image moment m^{k,l}_{1,0} of P_{1,0} and the k-l order image moment m^{k,l}_{2,0} of P_{2,0}, P_{1,0} and P_{2,0} are preprocessed, and the k-l order image moment of the preprocessed P_{1,0} and the k-l order image moment of the preprocessed P_{2,0} are then acquired. Optionally, the preprocessing includes denoising, graying, and the like.
In the present embodiment, f_{ij} is the gray value of the pixel point in the i-th row and j-th column of P_{1,0}; the value range of i is 1 to M1, where M1 is the number of rows of pixel points of P_{1,0}; the value range of j is 1 to N1, where N1 is the number of columns of pixel points of P_{1,0}. f_{eh} is the gray value of the pixel point in the e-th row and h-th column of P_{2,0}; the value range of e is 1 to M2, where M2 is the number of rows of pixel points of P_{2,0}; the value range of h is 1 to N2, where N2 is the number of columns of pixel points of P_{2,0}. In this embodiment, M1 = M2 and N1 = N2 = a_0.
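The closed-form expression of the k-l order image moment appears in the original publication as an equation image and does not survive in this text, so the sketch below assumes the conventional raw moment definition m^{k,l} = Σ_i Σ_j i^k · j^l · f_{ij} and applies the S400 threshold test to it; the helper names are hypothetical.

```python
import numpy as np

def image_moment(gray: np.ndarray, k: int, l: int) -> float:
    """Raw (k, l) moment of a grayscale patch; rows are indexed by i = 1..M, columns by j = 1..N."""
    rows, cols = gray.shape
    i = np.arange(1, rows + 1, dtype=np.float64)[:, None]   # row indices i
    j = np.arange(1, cols + 1, dtype=np.float64)[None, :]   # column indices j
    return float(np.sum((i ** k) * (j ** l) * gray.astype(np.float64)))

def overlap_confirmed(p_1_0: np.ndarray, p_2_0: np.ndarray,
                      k: int, l: int, eps0: float) -> bool:
    """S400 sketch: accept the initial regions as the true overlap when the difference
    of their k-l order image moments is at most eps0."""
    return abs(image_moment(p_1_0, k, l) - image_moment(p_2_0, k, l)) <= eps0
```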
In this embodiment, a parallel processing method is used to acquire m^{k,l}_{1,0} and m^{k,l}_{2,0}, i.e., multiple computing units compute m^{k,l}_{1,0} and m^{k,l}_{2,0} in parallel, thereby improving the efficiency of acquiring m^{k,l}_{1,0} and m^{k,l}_{2,0}.
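A minimal sketch of this parallel acquisition follows. The patent does not name a specific parallel runtime, so the process pool below is only one illustrative choice of "multiple computing units", reusing the hypothetical image_moment() helper sketched above (which must be defined at module level so it can be submitted to worker processes).

```python
from concurrent.futures import ProcessPoolExecutor

def moments_in_parallel(p_1_0, p_2_0, k, l):
    """Compute the k-l order image moments of P_{1,0} and P_{2,0} on two workers."""
    with ProcessPoolExecutor(max_workers=2) as pool:
        future_1 = pool.submit(image_moment, p_1_0, k, l)
        future_2 = pool.submit(image_moment, p_2_0, k, l)
        return future_1.result(), future_2.result()
```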
In the present embodiment, the vertical moments s_{il}(j) are acquired recursively. When l = 0, s_{i0}(j) = s_{i0}(j-1) + f_{ij}; when l = 1, s_{i1}(j) = s_{i1}(j-1) + s_{i0}(j); when l = 2, s_{i2}(j) = s_{i2}(j-1) + s_{i1}(j) + s_{i1}(j-1); when l = 3, s_{i3}(j) = s_{i3}(j-1) + s_{i2}(j-1) + 2s_{i2}(j) - s_{i1}(j); when l > 3, the recurrence takes a similar form; for example, when l = 4, s_{i4}(j) = s_{i4}(j-1) + 4s_{i3}(j-1) + 3s_{i2}(j-1) + 3s_{i2}(j) - 2s_{i1}(j). In this embodiment, the vertical moments are acquired first, and the image moments are acquired on the basis of the acquired vertical moments; only addition operations are needed in the process of acquiring the vertical moments, no multiplication is needed, and the algorithm complexity is reduced.
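The recurrences above translate directly into code. The sketch below computes s_{il}(j) for l = 0 to 4 over a single image row exactly as quoted (the small integer coefficients can be unrolled into repeated additions in an addition-only hardware design); how the vertical moments are then combined into the final k-l order image moment is given in the original as an equation image and is not reproduced here. The function and variable names are illustrative.

```python
def vertical_moments_row(f_row):
    """Return s[l][j] (l = 0..4, j = 0..N) for one image row, using the recurrences
    quoted above; f_row[j - 1] plays the role of f_ij with s[l][0] = 0 for every l."""
    n = len(f_row)
    s = [[0.0] * (n + 1) for _ in range(5)]
    for j in range(1, n + 1):
        f_ij = float(f_row[j - 1])
        s[0][j] = s[0][j - 1] + f_ij
        s[1][j] = s[1][j - 1] + s[0][j]
        s[2][j] = s[2][j - 1] + s[1][j] + s[1][j - 1]
        s[3][j] = s[3][j - 1] + s[2][j - 1] + 2 * s[2][j] - s[1][j]
        s[4][j] = (s[4][j - 1] + 4 * s[3][j - 1] + 3 * s[2][j - 1]
                   + 3 * s[2][j] - 2 * s[1][j])
    return s
```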
Preferably, the acquiring process of l and k includes:
S310, acquiring the number qua_{1,0} of targets of the preset type in P_{1,0}.
In this embodiment, a target of the preset type is a target that the user focuses on; optionally, the preset type of target is a person or a vehicle.
Those skilled in the art will appreciate that any method of detecting a preset type of target in an image in the prior art falls within the scope of the present invention.
S320, acquiring the proportion β_{1,0} of pixel points of targets of the preset type in P_{1,0}.
In this embodiment, β_{1,0} is the ratio of the total number of pixel points of all targets of the preset type in P_{1,0} to the number num_{1,0} of pixel points of P_{1,0}.
S330, acquiring the sampling frequency f_1 of the first camera.
In this embodiment, the sampling frequency of the second camera is the same as the sampling frequency of the first camera.
S340, acquiring the number num_{1,0} of pixel points in P_{1,0}.
S350, determining l and k according to qua_{1,0}, β_{1,0}, f_1 and num_{1,0}; both l and k are positively correlated with qua_{1,0}, both l and k are positively correlated with β_{1,0}, both l and k are negatively correlated with f_1, and both l and k are negatively correlated with num_{1,0}.
In this embodiment, l and k are both positively correlated with qua_{1,0} and β_{1,0} and negatively correlated with f_1 and num_{1,0}; that is, when the initial overlapping areas of the first image and the second image contain more targets of the preset type and those targets occupy a larger proportion of the area, the values of l and k are increased so that the vertical moments and image moments characterize the images more accurately, and when the sampling frequency of the first camera is high and the number of pixel points of the initial overlapping area is large, the values of l and k are reduced to lower the computational overhead of acquiring the vertical moments and image moments. In this way, l and k balance the importance of the first image and the second image against the computational overhead of acquiring the k-l order image moments.
Preferably, S350 includes:
S351, acquiring a first target value v_1: v_1 = r_1 × qua′_{1,0} + r_2 × β′_{1,0} - r_3 × f′_1 - r_4 × num′_{1,0}, where r_1 + r_2 + r_3 + r_4 = 1; r_1, r_2, r_3 and r_4 are the weights corresponding to the number of targets of the preset type, the proportion of pixel points of targets of the preset type, the sampling frequency of the first camera and the number of pixel points, respectively; and qua′_{1,0}, β′_{1,0}, f′_1 and num′_{1,0} are the values obtained by normalizing qua_{1,0}, β_{1,0}, f_1 and num_{1,0}, respectively.
In this embodiment, r_1, r_2, r_3 and r_4 are preset according to user demand; optionally, r_1 = r_2 = r_3 = r_4 = 0.25.
Those skilled in the art will appreciate that any normalization algorithm known in the art falls within the scope of the present invention.
S352, matching v_1 in a preset association table; the preset association table comprises a plurality of first entries, each of which records a target value range, the vertical moment order corresponding to that target value range, and the image moment order corresponding to that target value range; there is no overlap between the target value ranges recorded by any two first entries.
S353, if v_1 belongs to the target value range recorded by a certain first entry of the preset association table, determining the vertical moment order recorded by that first entry as l and the image moment order recorded by that first entry as k.
In this embodiment, the target value range, the vertical moment order and the image moment order recorded in one first entry correspond to one another; when v_1 falls within the target value range recorded by the x-th first entry of the association table, the vertical moment order and the image moment order recorded by that x-th first entry are taken as l and k, respectively.
Based on S351-S353, the present embodiment can determine, for P_{1,0} and P_{2,0}, a well-matched vertical moment order and image moment order that take into account both the importance of the images and the computational cost.
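A minimal sketch of S351-S353 follows, assuming the four inputs have already been normalized; the weights and the association-table entries below are placeholders rather than values from the patent.

```python
# Placeholder table: (v1 lower bound, v1 upper bound, vertical moment order l, image moment order k)
PRESET_ASSOCIATION_TABLE = [
    (-1.0, 0.0, 1, 2),
    (0.0, 0.5, 2, 3),
    (0.5, 1.0, 3, 4),
]

def choose_moment_orders(qua_n, beta_n, f_n, num_n,
                         r1=0.25, r2=0.25, r3=0.25, r4=0.25):
    """S351-S353 sketch: qua_n, beta_n, f_n, num_n are the normalized qua, beta, f1 and num values."""
    v1 = r1 * qua_n + r2 * beta_n - r3 * f_n - r4 * num_n
    for low, high, l, k in PRESET_ASSOCIATION_TABLE:
        if low <= v1 < high:          # the target value ranges do not overlap
            return l, k
    raise ValueError("v1 falls outside every target value range in the association table")
```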
S400, if |m^{k,l}_{1,0} - m^{k,l}_{2,0}| ≤ ε_0, determining P_{1,0} and P_{2,0} as the overlapping area of the first image and the second image and performing image stitching of the first image and the second image; ε_0 is a preset image moment difference threshold.
In this embodiment, ε_0 is an empirical value.
As a specific embodiment, determining P_{1,0} and P_{2,0} as the overlapping region of the first image and the second image and performing image stitching of the first image and the second image includes:
S410, acquiring the feature point set F_{1,0} of P_{1,0} and the feature point set F_{2,0} of P_{2,0}.
Those skilled in the art will appreciate that any method of obtaining a feature point in the prior art falls within the scope of the present invention.
S420, performing feature point matching on F_{1,0} and F_{2,0} to obtain a plurality of feature point pairs.
Those skilled in the art will appreciate that any method of feature matching in the prior art falls within the scope of the present invention.
And S430, performing image stitching of the first image and the second image according to the feature point pairs.
Those skilled in the art will appreciate that any method of image stitching based on feature point pairs in the prior art falls within the scope of the present invention.
Based on the above S410-S430, the present embodiment acquires the feature points in P_{1,0} and the feature points in P_{2,0}. Because P_{1,0} and P_{2,0} are the overlapping area images of the first image and the second image, the feature points outside the overlapping area of the first image and the second image do not interfere with the feature point matching process in S420, so the feature point pairs obtained in S420 are more accurate, which improves the quality of the final stitching result.
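S410-S430 explicitly leave the concrete feature extraction, matching and stitching methods open, so the sketch below uses one common OpenCV pipeline (ORB features, brute-force Hamming matching, RANSAC homography) purely as an example; it is not the specific method claimed by the patent, and the returned homography is expressed in the coordinates of the overlap patches.

```python
import cv2
import numpy as np

def match_overlap_features(p_1_0, p_2_0, min_matches=10):
    """S410-S430 sketch: detect ORB feature points in P_{1,0} and P_{2,0}, match them,
    and estimate a homography mapping P_{2,0} coordinates onto P_{1,0} coordinates."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(p_1_0, None)
    kp2, des2 = orb.detectAndCompute(p_2_0, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        raise RuntimeError("not enough feature point pairs in the overlap region")
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    h_mat, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # To stitch the full frames, shift this homography from patch coordinates back to
    # full-image coordinates, then warp the second image and blend it with the first.
    return h_mat
```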
The present embodiment acquires the initial overlapping region image P_{1,0} of the first image and the initial overlapping region image P_{2,0} of the second image, where the initial overlapping region is defined by the preset number of initially overlapping pixel point columns a_0. To determine whether the initial overlapping region is the overlapping region of the first image and the second image, the present embodiment acquires the k-l order image moments of P_{1,0} and P_{2,0} separately; if the difference of the k-l order image moments of P_{1,0} and P_{2,0} is smaller than or equal to the preset image moment difference threshold, the initial overlapping area is judged to be the overlapping area of the first image and the second image, and image stitching of the first image and the second image is performed based on the initial overlapping area. In this embodiment, whether the initial overlapping area is the overlapping area of the first image and the second image is judged before the first image and the second image are stitched, and image stitching of the first image and the second image is performed only when the initial overlapping area is confirmed to be the overlapping area of the first image and the second image, which avoids the problem of low stitching quality caused by taking a non-overlapping area as the overlapping area and improves the quality of the acquired image stitching result.
In this embodiment, S400 further includes: if |m^{k,l}_{1,0} - m^{k,l}_{2,0}| > ε_0, S500 is entered.
S500, initializing a first variable b to be 1.
S600, acquiring the k-l order image moment m^{k,l}_{1,b} of P_{1,b} and the k-l order image moment m^{k,l}_{2,b} of P_{2,b}; P_{1,b} is the b-th updated overlapping region image of the first image, namely the region of the first image, on the side close to the second camera, that comprises a_0 - b × Δa columns of pixel points, where Δa is a preset column step size; P_{2,b} is the b-th updated overlapping region image of the second image, namely the region of the second image, on the side close to the first camera, that comprises a_0 - b × Δa columns of pixel points.
Optionally, Δa is an empirical value, for example Δa = 1, 2, 5 or 10.
Preferably, the method for obtaining Δa includes:
S610, acquiring a preset first table d, d = (d_1, d_2, …, d_c, …, d_w), where d_c is the c-th entry included in d, d_c = (d_{c,1}, d_{c,2}, d_{c,3}, k_c, l_c, Δa_c); d_{c,1} is the pixel point column number range recorded by d_c, d_{c,2} is the pixel point row number range recorded by d_c, d_{c,3} is the image moment difference range recorded by d_c, k_c is the image moment order range recorded by d_c, l_c is the vertical moment order range recorded by d_c, and Δa_c is the column step size recorded by d_c corresponding to d_{c,1}, d_{c,2}, d_{c,3}, k_c and l_c; the value range of c is 1 to w, and w is the number of entries included in d.
In this embodiment, d can be pre-constructed based on experience, where Δa_c is the preferred column step size for the case in which the number of pixel point columns of the overlapping region belongs to d_{c,1}, the number of pixel point rows of the overlapping region belongs to d_{c,2}, the image moment difference belongs to d_{c,3}, the image moment order belongs to k_c, and the vertical moment order belongs to l_c; this preferred value balances the size of the resulting overlapping region against the number of repetitions of S600.
S620, matching (a_0, M1, |m^{k,l}_{1,0} - m^{k,l}_{2,0}|, k, l) in d; if a_0 ∈ d_{c,1}, M1 ∈ d_{c,2}, |m^{k,l}_{1,0} - m^{k,l}_{2,0}| ∈ d_{c,3}, k ∈ k_c and l ∈ l_c, then Δa is determined as Δa_c.
Based on S610-S620, in this embodiment, by constructing the first table in advance, a well-matched step size can be determined once the number of pixel point columns and rows of the overlapping region, the image moment difference, the image moment order and the vertical moment order are known. This step size takes both the size of the overlapping region and the number of repetitions of S600 into account: it is a preferred value that corresponds to a relatively larger final overlapping region obtained with relatively fewer iterations.
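A minimal sketch of S610-S620 follows; each entry of the preset first table stores the value ranges and the column step size Δa_c matched to them. The concrete ranges and steps below are placeholders, since the patent builds the table from experience.

```python
# Placeholder table d: ((column range), (row range), (moment difference range),
# (k range), (l range), delta_a) per entry; the numbers are illustrative only.
PRESET_FIRST_TABLE = [
    ((1, 200),   (1, 1080), (0.0, 1e6), (0, 2), (0, 2), 1),
    ((201, 800), (1, 2160), (0.0, 1e9), (0, 4), (0, 4), 5),
]

def match_delta_a(a0, rows, moment_diff, k, l, default_step=1):
    """S620 sketch: return the step Delta a_c of the first entry whose ranges contain
    (a0, rows, moment_diff, k, l); fall back to default_step if nothing matches."""
    for (c_lo, c_hi), (r_lo, r_hi), (d_lo, d_hi), (k_lo, k_hi), (l_lo, l_hi), da in PRESET_FIRST_TABLE:
        if (c_lo <= a0 <= c_hi and r_lo <= rows <= r_hi and d_lo <= moment_diff <= d_hi
                and k_lo <= k <= k_hi and l_lo <= l <= l_hi):
            return da
    return default_step
```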
S700, if |m^{k,l}_{1,b} - m^{k,l}_{2,b}| ≤ ε_0, determining P_{1,b} and P_{2,b} as the overlapping area of the first image and the second image and performing image stitching of the first image and the second image; if |m^{k,l}_{1,b} - m^{k,l}_{2,b}| > ε_0, updating b = b + 1 and repeating S600 until |m^{k,l}_{1,b} - m^{k,l}_{2,b}| ≤ ε_0, then determining P_{1,b} and P_{2,b} as the overlapping area of the first image and the second image and performing image stitching of the first image and the second image.
Based on the above S500-S700, when |m^{k,l}_{1,0} - m^{k,l}_{2,0}| > ε_0, this embodiment continuously reduces the number of pixel point columns of the candidate overlapping region with Δa as the step size until |m^{k,l}_{1,b} - m^{k,l}_{2,b}| ≤ ε_0. Therefore, this embodiment obtains a real overlapping area with relatively more pixel point columns, which is beneficial to improving the quality of the result of stitching the first image and the second image.
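Putting S500-S700 together, the following minimal sketch shrinks the candidate overlap under the same left/right layout assumed earlier, reusing the hypothetical image_moment() helper from the S300 sketch; it is an illustration, not the patent's implementation.

```python
def find_true_overlap(first_image, second_image, a0, k, l, eps0, delta_a):
    """S500-S700 sketch: shrink the candidate overlap by delta_a columns per iteration
    until the k-l order image moments of the two candidate regions agree within eps0."""
    width = a0
    while width > 0:
        p_1_b = first_image[:, -width:]    # P_{1,b}: rightmost `width` columns
        p_2_b = second_image[:, :width]    # P_{2,b}: leftmost `width` columns
        if abs(image_moment(p_1_b, k, l) - image_moment(p_2_b, k, l)) <= eps0:
            return width, p_1_b, p_2_b     # accepted as the real overlapping area
        width -= delta_a                   # b -> b + 1, i.e. width = a0 - b * delta_a
    raise RuntimeError("no overlap width satisfied the image moment difference threshold")
```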
While certain specific embodiments of the invention have been described in detail by way of example, it will be appreciated by those skilled in the art that the above examples are for illustration only and are not intended to limit the scope of the invention. Those skilled in the art will also appreciate that many modifications may be made to the embodiments without departing from the scope and spirit of the invention. The scope of the present disclosure is defined by the appended claims.

Claims (6)

1. An image stitching method based on image moment parallel processing is characterized by comprising the following steps:
S100, acquiring an initial overlapping region image P_{1,0} of the first image, the first image being an image acquired by the first camera at the target time;
S200, acquiring an initial overlapping region image P_{2,0} of the second image, the second image being an image acquired by the second camera at the target time, the first camera and the second camera being two adjacent cameras in the array camera; the initial overlapping area of the first image is the region of the first image, on the side close to the second camera, that comprises a_0 columns of pixel points; the initial overlapping area of the second image is the region of the second image, on the side close to the first camera, that comprises a_0 columns of pixel points; a_0 is the preset number of initially overlapping pixel point columns;
S300, acquiring the k-l order image moment m^{k,l}_{1,0} of P_{1,0} and the k-l order image moment m^{k,l}_{2,0} of P_{2,0}; k is a preset image moment order, and l is a preset vertical moment order;
S400, if |m^{k,l}_{1,0} - m^{k,l}_{2,0}| ≤ ε_0, determining P_{1,0} and P_{2,0} as the overlapping area of the first image and the second image and performing image stitching of the first image and the second image; ε_0 is a preset image moment difference threshold;
S400 further includes: if |m^{k,l}_{1,0} - m^{k,l}_{2,0}| > ε_0, entering S500;
S500, initializing a first variable b to be 1;
S600, acquiring the k-l order image moment m^{k,l}_{1,b} of P_{1,b} and the k-l order image moment m^{k,l}_{2,b} of P_{2,b}; P_{1,b} is the b-th updated overlapping region image of the first image, namely the region of the first image, on the side close to the second camera, that comprises a_0 - b × Δa columns of pixel points, Δa being a preset column step size; P_{2,b} is the b-th updated overlapping region image of the second image, namely the region of the second image, on the side close to the first camera, that comprises a_0 - b × Δa columns of pixel points;
S700, if |m^{k,l}_{1,b} - m^{k,l}_{2,b}| ≤ ε_0, determining P_{1,b} and P_{2,b} as the overlapping area of the first image and the second image and performing image stitching of the first image and the second image; if |m^{k,l}_{1,b} - m^{k,l}_{2,b}| > ε_0, updating b = b + 1 and repeating S600 until |m^{k,l}_{1,b} - m^{k,l}_{2,b}| ≤ ε_0, then determining P_{1,b} and P_{2,b} as the overlapping area of the first image and the second image and performing image stitching of the first image and the second image.
2. The image stitching method based on image moment parallel processing as recited in claim 1, wherein f_{ij} is the gray value of the pixel point in the i-th row and j-th column of P_{1,0}; the value range of i is 1 to M1, M1 being the number of rows of pixel points of P_{1,0}; the value range of j is 1 to N1, N1 being the number of columns of pixel points of P_{1,0}; f_{eh} is the gray value of the pixel point in the e-th row and h-th column of P_{2,0}; the value range of e is 1 to M2, M2 being the number of rows of pixel points of P_{2,0}; the value range of h is 1 to N2, N2 being the number of columns of pixel points of P_{2,0}; and M1 = M2, N1 = N2 = a_0.
3. The image stitching method based on image moment parallel processing according to claim 1, wherein the acquiring process of l and k includes:
S310, acquiring the number qua_{1,0} of targets of the preset type in P_{1,0};
S320, acquiring the proportion β_{1,0} of pixel points of targets of the preset type in P_{1,0};
S330, acquiring the sampling frequency f_1 of the first camera;
S340, acquiring the number num_{1,0} of pixel points in P_{1,0};
S350, determining l and k according to qua_{1,0}, β_{1,0}, f_1 and num_{1,0}; both l and k are positively correlated with qua_{1,0}, both l and k are positively correlated with β_{1,0}, both l and k are negatively correlated with f_1, and both l and k are negatively correlated with num_{1,0}.
4. The image stitching method based on image moment parallel processing as recited in claim 3, wherein S350 includes:
S351, acquiring a first target value v_1: v_1 = r_1 × qua′_{1,0} + r_2 × β′_{1,0} - r_3 × f′_1 - r_4 × num′_{1,0}, where r_1 + r_2 + r_3 + r_4 = 1; r_1, r_2, r_3 and r_4 are the weights corresponding to the number of targets of the preset type, the proportion of pixel points of targets of the preset type, the sampling frequency of the first camera and the number of pixel points, respectively; and qua′_{1,0}, β′_{1,0}, f′_1 and num′_{1,0} are the values obtained by normalizing qua_{1,0}, β_{1,0}, f_1 and num_{1,0}, respectively;
S352, matching v_1 in a preset association table; the preset association table comprises a plurality of first entries, each of which records a target value range, the vertical moment order corresponding to that target value range, and the image moment order corresponding to that target value range; there is no overlap between the target value ranges recorded by any two first entries;
S353, if v_1 belongs to the target value range recorded by a certain first entry of the preset association table, determining the vertical moment order recorded by that first entry as l and the image moment order recorded by that first entry as k.
5. The image stitching method based on image moment parallel processing as recited in claim 3, wherein the preset type of target includes people and vehicles.
6. The image stitching method based on image moment parallel processing according to claim 1, wherein m^{k,l}_{1,0} and m^{k,l}_{2,0} are acquired by using a parallel processing method.
CN202410029459.8A 2024-01-09 2024-01-09 Image stitching method based on image moment parallel processing Active CN117544862B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410029459.8A CN117544862B (en) 2024-01-09 2024-01-09 Image stitching method based on image moment parallel processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410029459.8A CN117544862B (en) 2024-01-09 2024-01-09 Image stitching method based on image moment parallel processing

Publications (2)

Publication Number Publication Date
CN117544862A CN117544862A (en) 2024-02-09
CN117544862B true CN117544862B (en) 2024-03-29

Family

ID=89784641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410029459.8A Active CN117544862B (en) 2024-01-09 2024-01-09 Image stitching method based on image moment parallel processing

Country Status (1)

Country Link
CN (1) CN117544862B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101106716A (en) * 2007-08-21 2008-01-16 北京大学软件与微电子学院 A shed image division processing method
CN103279939A (en) * 2013-04-27 2013-09-04 北京工业大学 Image stitching processing system
CN113902657A (en) * 2021-08-26 2022-01-07 北京旷视科技有限公司 Image splicing method and device and electronic equipment
CN115456869A (en) * 2021-06-08 2022-12-09 杭州海康威视数字技术股份有限公司 Image splicing method and device and electronic equipment
WO2023011013A1 (en) * 2021-08-04 2023-02-09 北京旷视科技有限公司 Splicing seam search method and apparatus for video image, and video image splicing method and apparatus
CN116760937A (en) * 2023-08-17 2023-09-15 广东省科技基础条件平台中心 Video stitching method, device, equipment and storage medium based on multiple machine positions
CN117114997A (en) * 2023-10-23 2023-11-24 四川新视创伟超高清科技有限公司 Image stitching method and device based on suture line search algorithm

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2846532A1 (en) * 2013-09-06 2015-03-11 Application Solutions (Electronics and Vision) Limited System, device and method for displaying a harmonised combined image
WO2015039067A1 (en) * 2013-09-16 2015-03-19 Duke University Method for combining multiple image fields
TWI554976B (en) * 2014-11-17 2016-10-21 財團法人工業技術研究院 Surveillance systems and image processing methods thereof
US11722776B2 (en) * 2021-06-28 2023-08-08 nearmap australia pty ltd. Hyper camera with shared mirror

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101106716A (en) * 2007-08-21 2008-01-16 北京大学软件与微电子学院 A shed image division processing method
CN103279939A (en) * 2013-04-27 2013-09-04 北京工业大学 Image stitching processing system
CN115456869A (en) * 2021-06-08 2022-12-09 杭州海康威视数字技术股份有限公司 Image splicing method and device and electronic equipment
WO2023011013A1 (en) * 2021-08-04 2023-02-09 北京旷视科技有限公司 Splicing seam search method and apparatus for video image, and video image splicing method and apparatus
CN113902657A (en) * 2021-08-26 2022-01-07 北京旷视科技有限公司 Image splicing method and device and electronic equipment
CN116760937A (en) * 2023-08-17 2023-09-15 广东省科技基础条件平台中心 Video stitching method, device, equipment and storage medium based on multiple machine positions
CN117114997A (en) * 2023-10-23 2023-11-24 四川新视创伟超高清科技有限公司 Image stitching method and device based on suture line search algorithm

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
The Algorithm of Fast Image Stitching Based on Multi-feature Extraction; Fang, D et al.; 6th International Conference on Computer-Aided Design, Manufacturing, Modeling and Simulation (CDMMS); 20180713; full text *
A review of research on deep-learning-based Chinese character recognition methods; 曹昕妍 et al.; Micro/Nano Electronics and Intelligent Manufacturing; 20200930; full text *
Research on image stitching technology under the camera translation mode; 丁晓娜; 李静; 雷鸣; Electronic Design Engineering; 20091105 (11); full text *
A road aerial image stitching method based on perspective distortion correction; 王小龙 et al.; Construction Technology; 20231201; Vol. 52 (No. 24); Section 2 of the description *

Also Published As

Publication number Publication date
CN117544862A (en) 2024-02-09

Similar Documents

Publication Publication Date Title
CN108510485B (en) Non-reference image quality evaluation method based on convolutional neural network
CN108932536B (en) Face posture reconstruction method based on deep neural network
CN106920215B (en) Method for detecting registration effect of panoramic image
CN109034184B (en) Grading ring detection and identification method based on deep learning
CN111401324A (en) Image quality evaluation method, device, storage medium and electronic equipment
CN111127435B (en) No-reference image quality evaluation method based on double-current convolution neural network
CN110176023B (en) Optical flow estimation method based on pyramid structure
CN108171735B (en) Billion pixel video alignment method and system based on deep learning
CN109685772B (en) No-reference stereo image quality evaluation method based on registration distortion representation
CN111931686B (en) Video satellite target tracking method based on background knowledge enhancement
CN113744315B (en) Semi-direct vision odometer based on binocular vision
CN109949200B (en) Filter subset selection and CNN-based steganalysis framework construction method
CN112818850B (en) Cross-posture face recognition method and system based on progressive neural network and attention mechanism
CN115147418B (en) Compression training method and device for defect detection model
CN110910456B (en) Three-dimensional camera dynamic calibration method based on Harris angular point mutual information matching
CN113095358A (en) Image fusion method and system
CN112102192A (en) Image white balance method
US20130084025A1 (en) Method for Brightness Correction of Defective Pixels of Digital Monochrome Image
CN113935917A (en) Optical remote sensing image thin cloud removing method based on cloud picture operation and multi-scale generation countermeasure network
CN112330613B (en) Evaluation method and system for cytopathology digital image quality
CN117544862B (en) Image stitching method based on image moment parallel processing
CN112559791A (en) Cloth classification retrieval method based on deep learning
CN116681742A (en) Visible light and infrared thermal imaging image registration method based on graph neural network
CN115346091B (en) Method and device for generating Mura defect image data set
CN110717913A (en) Image segmentation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant