CN115131211A - Image synthesis method, image synthesis device, portable scanner and non-volatile storage medium - Google Patents


Info

Publication number
CN115131211A
Authority
CN
China
Prior art keywords
images
target
lens
image
lenses
Prior art date
Legal status
Pending
Application number
CN202210779921.7A
Other languages
Chinese (zh)
Inventor
曲涛
Current Assignee
Weihai Hualing Opto Electronics Co Ltd
Original Assignee
Weihai Hualing Opto Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Weihai Hualing Opto Electronics Co Ltd filed Critical Weihai Hualing Opto Electronics Co Ltd
Priority to CN202210779921.7A
Publication of CN115131211A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image synthesis method, an image synthesis device, a portable scanner and a non-volatile storage medium. The method comprises the following steps: shooting a target object with a plurality of target lenses to obtain a first image, wherein the target lenses are arranged linearly and the first image comprises a plurality of images respectively shot by the target lenses; acquiring a first stitching parameter corresponding to the first image, wherein the first stitching parameter comprises the pixel width of the overlapping portion between any two adjacent images in the first image; and synthesizing the first image into a target image comprising the target object according to the first stitching parameter. The invention solves the technical problem in the related art that image synthesis is inefficient because of the large amount of computation involved in the synthesis process.

Description

Image synthesis method, image synthesis device, portable scanner and non-volatile storage medium
Technical Field
The invention relates to the field of image processing, in particular to an image synthesis method and device, a portable scanner and a nonvolatile storage medium.
Background
Against the background of the era of intelligent manufacturing (Industry 4.0), in which high-end equipment manufacturing plays a central role, industrial intelligence is growing explosively. In the field of machine vision, robots are helped to extract, process and understand information by imitating the way biological vision captures and processes images, so that they can carry out automated operations accurately, efficiently and safely. Machine vision can perform high-precision monitoring and image recognition; in industrial automated inspection, the requirement for detecting tiny defects keeps rising even as efficiency is improved.
In the prior art, one method of synthesizing pictures taken by a plurality of lenses into a single picture is as follows: a plurality of lenses are used to focus on and photograph different objects in the same scene, yielding a plurality of images; one image is selected as a base; a software algorithm identifies the sharpest object in each of the remaining images; the sharpest version of each target object is cropped out and substituted for the corresponding object in the base; and the resulting base image is the synthesized image. The data processing involved in this method requires a large amount of computation, so the picture synthesis efficiency is low.
No effective solution to the above problems has yet been proposed.
Disclosure of Invention
The embodiments of the invention provide an image synthesis method and device, a portable scanner and a non-volatile storage medium, which at least solve the technical problem in the related art that image synthesis is inefficient because of the large amount of computation involved in the synthesis process.
According to an aspect of an embodiment of the present invention, there is provided an image synthesizing method including: determining position parameters of a plurality of target lenses; shooting a target object by adopting a plurality of target lenses according to the position parameters to obtain a plurality of first images, and marking target point positions on the plurality of first images respectively to obtain a plurality of second images, wherein the plurality of target lenses are arranged linearly; determining splicing parameters corresponding to the plurality of second images according to the target point positions, wherein the splicing parameters comprise pixel widths of overlapping parts between adjacent images in the plurality of second images; and synthesizing the plurality of second images into a target image comprising the target object according to the splicing parameters.
Optionally, synthesizing the plurality of second images into a target image including the target object according to the stitching parameter, including: removing repeated parts in the second images according to the splicing parameters to obtain third images, wherein the pixel width of the repeated parts is the same as that of the overlapped parts; and synthesizing the plurality of third images into the target image according to the arrangement sequence of the plurality of target lenses.
Optionally, determining stitching parameters corresponding to the plurality of second images according to the target point location includes: overlapping the point positions representing the same physical position in the target point positions to partially overlap two adjacent images in the plurality of second images; and determining a splicing parameter according to the overlapping part of two adjacent images in the plurality of second images.
Optionally, determining the position parameters of the plurality of target lenses comprises: acquiring lens parameters of the plurality of target lenses and the range of the area of the target object to be scanned; and determining the position parameters according to the lens parameters and the range of the area to be scanned.
Optionally, shooting the target object by using a plurality of target lenses according to the position parameter to obtain a plurality of first images, and marking target points on the plurality of first images to obtain a plurality of second images, including: under the condition that the position parameters comprise the arrangement space, the object distance and the imaging range of the target lenses, determining the range of an overlapping area in the imaging range according to the arrangement space; according to the position parameters and the range of the overlapping area, a plurality of target lenses are adopted to shoot a target object to obtain a plurality of first images, target point positions are marked on the plurality of first images to obtain a plurality of second images, wherein a geometric distance identification graph is drawn on the target object, and the minimum identification length of the geometric distance identification graph is not larger than the arrangement distance.
Optionally, shooting the target object by using a plurality of target lenses according to the position parameter and the overlapping area range to obtain a plurality of first images, and marking target points on the plurality of first images to obtain a plurality of second images, including: according to the position parameters, aligning a first boundary of a framing frame of a first lens in the target lenses with the scale of the target object, wherein the first lens is a lens located at the linear arrangement starting point in the target lenses, the first boundary is a boundary close to a second lens in the framing frame of the first lens, and the second lens is adjacent to the first lens; according to the overlapping area range, under the condition that the first boundary is aligned with the scale of the target object, shooting the target object by using a first lens to obtain a first image corresponding to the first lens, and marking a first point on the first image corresponding to the first lens, wherein the physical position represented by the first point is in the overlapping area range in the imaging range corresponding to each of the first lens and the second lens; moving the target object according to the minimum mark length and the overlapping area range, so that the scale of the target object is aligned with a second boundary of the framing frame of the second lens, wherein the moving direction of the target object is the direction from the first lens to the second lens, and the second boundary is the boundary close to the first lens in the framing frame of the second lens; under the condition that the scale on the target object is aligned with the second boundary, shooting the target object by using a second lens to obtain a first image corresponding to the second lens, and marking a second point location on the image shot by the second lens, wherein the second point location is the same as the physical position represented by the first point location, the image shot by the second lens is one of the first images, and the target point location comprises the first point location and the second point location; and moving the target object for multiple times according to the minimum mark length and the range of the overlapping area, so that the scales of the target object are sequentially aligned with the boundaries, close to the first lenses, in the respective viewing frames of the target lenses, sequentially shooting the target object by using each lens of the target lenses to obtain multiple first images, and marking target point positions on the multiple first images.
According to another aspect of the embodiments of the present invention, there is also provided an image synthesizing apparatus including: the first data processing module is used for determining position parameters of a plurality of target lenses; the shooting module is used for shooting a target object by adopting a plurality of target lenses according to the position parameters to obtain a plurality of first images, and marking target point positions on the plurality of first images respectively to obtain a plurality of second images, wherein the plurality of target lenses are linearly arranged; the second data processing module is used for determining splicing parameters corresponding to the plurality of second images according to the target point position, wherein the splicing parameters comprise the pixel width of the overlapping part between the adjacent images in the plurality of second images; and the synthesis module is used for synthesizing the plurality of second images into a target image comprising the target object according to the splicing parameters.
According to still another aspect of an embodiment of the present invention, there is also provided a portable scanner including: the system comprises a frame module, a plurality of linearly arranged target lenses, an FPGA chip and a storage chip; the frame module is used for bearing the plurality of linearly arranged target lenses, the FPGA chip and the storage chip; the plurality of linearly arranged target lenses are used for shooting a target object to obtain a plurality of first images; the FPGA chip is used for marking target point positions on the first images respectively to obtain second images, determining splicing parameters corresponding to the second images according to the target point positions, and synthesizing the second images into a target image comprising the target object according to the splicing parameters, wherein the splicing parameters comprise pixel widths of overlapping parts between adjacent images in the first images; the storage chip is used for storing the plurality of first images, the plurality of second images and the splicing parameter.
According to still another aspect of the embodiments of the present invention, there is provided a non-volatile storage medium, where the non-volatile storage medium includes a stored program, and when the program runs, a device in which the non-volatile storage medium is located is controlled to execute any one of the image synthesis methods.
According to a further aspect of the embodiments of the present invention, there is also provided a computer device, including a processor, configured to execute a program, where the program executes to perform any one of the image synthesis methods.
In the embodiment of the invention, position parameters of a plurality of target lenses are determined; a target object is shot with the plurality of target lenses according to the position parameters to obtain a plurality of first images, and target point locations are marked on the plurality of first images respectively to obtain a plurality of second images, the plurality of target lenses being arranged linearly; stitching parameters corresponding to the plurality of second images are determined according to the target point locations, the stitching parameters comprising the pixel width of the overlapping portion between adjacent images in the plurality of second images; and the plurality of second images are synthesized into a target image comprising the target object according to the stitching parameters. This achieves the purpose of reducing the amount of computation in the image synthesis process, realizes the technical effect of improving image synthesis efficiency, and thereby solves the technical problem in the related art that image synthesis is inefficient because of the large amount of computation involved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 shows a block diagram of a hardware configuration of a computer terminal for implementing an image synthesis method;
FIG. 2 is a schematic flow chart of an image synthesis method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a scanner provided in accordance with an alternative embodiment of the present invention;
FIG. 4 is a schematic diagram of the locations of target point markers provided in accordance with an alternative embodiment of the present invention;
FIG. 5 is a schematic illustration of a dot overlap pattern provided in accordance with an alternative embodiment of the present invention;
FIG. 6 is a schematic diagram of an image processing module provided in accordance with an alternative embodiment of the present invention;
fig. 7 is a block diagram of a configuration of an image synthesizing apparatus according to an embodiment of the present invention;
fig. 8 is a block diagram of a portable scanner provided in accordance with an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the terms appearing in the description of the embodiments of the present application are explained as follows:
the current-frequency conversion substrate is called I/F substrate for short, and the circuit in the substrate is used for converting the analog current signal into the frequency signal.
A Mobile Industry Processor Interface (MIPI) data format, which is a pixel data format for recording image data.
A Low-Voltage Differential Signaling (LVDS) format, which is a format of pixel data for recording image data.
A Field-Programmable Gate Array (FPGA) chip addresses the shortcomings of application-specific (custom) circuits and overcomes the limited gate count of earlier programmable devices.
In accordance with an embodiment of the present invention, there is provided a method embodiment of an image synthesis method, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
The method provided by the first embodiment of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a block diagram of a hardware configuration of a computer terminal for implementing the image synthesis method. As shown in fig. 1, the computer terminal 10 may include one or more processors (shown as 102a, 102b, ..., 102n), which may include, but are not limited to, processing devices such as a microprocessor (MCU) or a programmable logic device (FPGA), and a memory 104 for storing data. In addition, the computer terminal may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the bus), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 10. As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of variable resistance termination paths connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the image synthesis method in the embodiment of the present invention, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory 104, so as to implement the image synthesis method of the application program. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with the user interface of the computer terminal 10.
Fig. 2 is a schematic flowchart of an image synthesis method provided in accordance with an embodiment of the present invention, as shown in fig. 2, the method includes the following steps:
in step S202, position parameters of a plurality of target lenses are determined.
In this step, the target object is photographed using the plurality of target lenses. The arrangement of the plurality of target lenses may be determined before shooting, and the position parameters may be used to describe that arrangement. Preferably, in this embodiment, the plurality of target lenses are arranged linearly in a single row and are lenses of the same model.
Step S204, shooting the target object by adopting a plurality of target lenses according to the position parameters to obtain a plurality of first images, and marking target point positions on the plurality of first images respectively to obtain a plurality of second images, wherein the plurality of target lenses are linearly arranged.
In this step, the plurality of target lenses may be arranged according to the arrangement mode indicated by the position parameter, each target lens of the plurality of target lenses captures a part of the target object, each target lens may obtain one first image through the capturing action, and the imaging range of each two adjacent target lenses has an overlapping region, so that the images captured by each two adjacent target lenses have a part repeatedly captured by the adjacent target lenses, and the obtained plurality of first images are a set of first images captured by all the target lenses capturing the target object; and marking target point positions on the plurality of first images respectively, wherein the plurality of first images marked with the target point positions are a plurality of second images. Optionally, the target object may be a strip-shaped object, the surface of the target object is flat, and the long side direction of the flat surface of the target object is the same as or approximately the same as the linear arrangement direction of the target lenses, in this step, the flat surface of the target object may be shot by the target lenses to obtain a plurality of first images, the plurality of first images are marked with the target points to obtain a plurality of second images, so that the plurality of second images are combined into one target image through subsequent steps, and the target image presents an image of the flat surface of the target object.
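For clarity, the sketch below shows one possible in-memory representation of such a "second image" (the captured first image together with its marked target point locations). The class and field names are choices made for this illustration only, not terms defined by the patent.

```python
# Illustrative only: one possible layout for a captured image plus its
# marked target point locations; names are assumptions for this sketch.
from dataclasses import dataclass, field
from typing import List, Tuple

import numpy as np


@dataclass
class MarkedImage:
    lens_index: int                      # position of the lens in the linear arrangement
    pixels: np.ndarray                   # H x W (or H x W x C) image captured by that lens
    # Marked target point locations, in pixel coordinates (x, y). Points that
    # image the same physical position as a point in the neighbouring lens's
    # picture are what later steps align.
    target_points: List[Tuple[float, float]] = field(default_factory=list)


# Example: a set of "second images" for three linearly arranged lenses.
second_images = [
    MarkedImage(lens_index=i, pixels=np.zeros((480, 640), dtype=np.uint8))
    for i in range(3)
]
```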
Step S206, determining stitching parameters corresponding to the plurality of second images according to the target point locations, wherein the stitching parameters comprise the pixel widths of the overlapping portions between adjacent images in the plurality of second images.
In this step, the adjacent images are two adjacent first images obtained by respectively shooting the target object by two adjacent target lenses, and because the arrangement mode of the target lenses is not changed when marking the point positions, the adjacent images in the plurality of first images are also adjacent images in the plurality of second images. When a plurality of target lenses are used for shooting a target object to obtain a plurality of first images, the imaging ranges of every two adjacent target lenses have overlapping areas, that is, every two adjacent target lenses can shoot the overlapping area range in the imaging ranges, so that parts which are repeatedly shot exist in every two adjacent images, the two parts can be overlapped, the corresponding physical positions of the overlapping parts are the overlapping area ranges in the imaging ranges of the two adjacent target lenses, and the splicing parameter is the pixel width of the overlapping parts between the adjacent images in the plurality of second images.
Step S208, synthesizing the plurality of second images into a target image comprising the target object according to the stitching parameters.
In this step, according to the pixel widths of the overlapping portions of the plurality of second images described by the stitching parameters, the plurality of second images can be processed and stitched into a target image including the target object.
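As a non-limiting illustration of this step, the Python sketch below removes the overlap, whose pixel width is given by the stitching parameter, from one of each pair of adjacent images and concatenates the remainder along the lens arrangement direction. It assumes already-corrected, row-aligned images of equal height; it illustrates the principle only and is not the patented implementation itself.

```python
# Minimal stitching sketch: drop the repeated columns from one of each pair of
# adjacent images and concatenate along the arrangement direction.
from typing import List

import numpy as np


def stitch(images: List[np.ndarray], overlap_px: List[int]) -> np.ndarray:
    """images: one per lens, in arrangement order; overlap_px[i]: overlap width
    (in pixels) between images[i] and images[i + 1]."""
    parts = [images[0]]
    for right, width in zip(images[1:], overlap_px):
        # Remove the repeated part (the overlap) from the left edge of the
        # right-hand image; the rest takes part in the concatenation.
        parts.append(right[:, width:])
    return np.hstack(parts)


# Toy usage: three 100x120 tiles with a 10-pixel overlap between neighbours.
tiles = [np.full((100, 120), v, dtype=np.uint8) for v in (50, 128, 200)]
panorama = stitch(tiles, overlap_px=[10, 10])
print(panorama.shape)  # (100, 340) = 120 + 110 + 110
```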
Optionally, when the plurality of target lenses in this embodiment are used and another object is photographed according to the position parameters in this embodiment to obtain another plurality of photos, the stitching parameters obtained in this embodiment may be directly applied to the stitching process of another plurality of images, and data algorithm processing such as image recognition and image matching is not required to be performed on the object image in the other plurality of images, so that the purpose of simplifying the image synthesis process can be achieved in specific application, and the image synthesis efficiency is improved.
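A minimal sketch of this "calibrate once, reuse afterwards" idea, assuming the stitching parameters are simply serialized to a JSON file; the file name and layout are assumptions made for this illustration.

```python
# Sketch: measure the stitching parameters once for a fixed lens arrangement,
# store them, and reload them for every later capture so that no image
# matching has to run at scan time.
import json
from pathlib import Path
from typing import List

CALIBRATION_FILE = Path("stitching_parameters.json")  # assumed name


def save_stitching_parameters(overlap_px: List[int]) -> None:
    CALIBRATION_FILE.write_text(json.dumps({"overlap_px": overlap_px}))


def load_stitching_parameters() -> List[int]:
    return json.loads(CALIBRATION_FILE.read_text())["overlap_px"]


# Calibration run (done once for a given lens arrangement):
save_stitching_parameters([10, 10, 12, 11, 10])

# Any later scan with the same arrangement just reloads the parameters:
overlaps = load_stitching_parameters()
```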
Optionally, before image synthesis, the plurality of first images or the plurality of second images may be preprocessed to perform image correction, including distortion-data coefficient correction and position coefficient correction of the images; specifically, correction may be performed according to a correction table for the lens and the position. Lens distortion is distortion introduced by the optical lens, generally appearing as squeezing or stretching of the image, more severe at the image edges and milder at the image centre; distortion-data coefficient correction can be applied to the image according to an algorithm to correct the distortion effect and recover the actual shape of the target object. Position coefficient correction can align the plurality of captured pictures and eliminate misalignment between them. According to the stitching parameters, the preprocessed images can then be cropped and stitched to synthesize a target image comprising the target object.
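The following sketch illustrates, under simplifying assumptions, how a per-pixel correction table of the kind mentioned above could be applied; a real implementation would use the actual calibration tables for each lens and interpolate rather than use nearest-neighbour look-up.

```python
# Illustrative pre-processing sketch: apply a per-pixel correction table
# (an x map and a y map standing in for the lens/position correction table)
# by nearest-neighbour look-up.
import numpy as np


def apply_correction_table(img: np.ndarray,
                           map_x: np.ndarray,
                           map_y: np.ndarray) -> np.ndarray:
    """map_x / map_y give, for every output pixel, the source coordinate to read."""
    h, w = img.shape[:2]
    xs = np.clip(np.rint(map_x).astype(int), 0, w - 1)
    ys = np.clip(np.rint(map_y).astype(int), 0, h - 1)
    return img[ys, xs]


# Identity table as a trivial example (no distortion, no shift):
h, w = 100, 120
yy, xx = np.mgrid[0:h, 0:w]
corrected = apply_correction_table(
    np.random.randint(0, 255, (h, w), dtype=np.uint8),
    xx.astype(float), yy.astype(float))
```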
Similarly, when a plurality of other lenses arranged according to other position parameters are used to photograph a target object and obtain a plurality of pictures for image synthesis, the target object is photographed and the pictures are point-marked according to the method described in this embodiment, the stitching parameter corresponding to each position parameter is determined, and the plurality of pictures can then be synthesized according to that stitching parameter without performing data-algorithm processing such as image recognition and image matching on the captured pictures every time. The purpose of simplifying the image synthesis process can thus be achieved in practical applications, and image synthesis efficiency is improved.
Through the steps, the purpose of reducing the calculated amount in the image synthesis process can be achieved, the technical effect of improving the image synthesis efficiency is achieved, and the technical problem that the image synthesis efficiency is low due to large calculated amount in the image synthesis process in the related technology is solved.
As an alternative embodiment, synthesizing a plurality of second images into a target image including a target object according to the stitching parameter may be implemented by the following steps: removing repeated parts in the second images according to the splicing parameters to obtain third images, wherein the pixel width of the repeated parts is the same as that of the overlapped parts; and synthesizing the plurality of third images into the target image according to the arrangement sequence of the plurality of target lenses.
Optionally, the multiple second images are cropped according to the pixel-width value, described by the stitching parameter, of the overlapping portion of every two adjacent images, and any unnecessary portion in the direction perpendicular to the arrangement direction of the multiple target lenses can also be cropped; removing the repeated portion from the cropped images yields the multiple third images. Here the repeated portion is one of the two overlapping portions in a pair of adjacent images: the overlapping portions are the two image regions obtained by shooting the same part of the target object that lies within both imaging ranges, so in terms of image content they duplicate each other. When the multiple images are stitched into one complete image, one of the two duplicated regions can be removed (this is the repeated portion), and the other is kept to take part in stitching the complete target image.
As an optional embodiment, determining the stitching parameters corresponding to the plurality of second images according to the target point location may be implemented by the following steps: overlapping the point positions representing the same physical position in the target point positions to partially overlap two adjacent images in the plurality of second images; and determining a splicing parameter according to the overlapping part of two adjacent images in the plurality of second images.
Optionally, a plurality of point locations in the target point locations represent the same physical location, and when the target point locations are marked, the point locations in the two adjacent images are marked, and it is respectively determined that at least two marked point locations in the two adjacent images represent the same physical location, and the physical location is in the overlapping area range of the two target lenses corresponding to the two adjacent images. In the splicing stage, the point locations representing the same physical location in the target point location may be overlapped, so that two adjacent images in the plurality of second images have an overlapping portion, and the splicing parameter is obtained by determining the pixel width of the overlapping portion.
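The sketch below shows one way, assuming both lenses image at the same pixel scale, to turn a pair of marked points that represent the same physical position into the pixel width of the overlapping portion; it is an illustration of the idea rather than the patented procedure.

```python
# Sketch: a marked point in the left image and a marked point in the right
# image represent the same physical position; aligning them tells us how many
# pixel columns of the right image repeat content already in the left image.
from typing import Tuple


def overlap_width_from_points(left_point: Tuple[float, float],
                              right_point: Tuple[float, float],
                              left_image_width: int) -> int:
    """left_point / right_point: (x, y) pixel coordinates of the same physical
    position in the left and right images (same magnification assumed);
    returns the overlap width in pixels."""
    lx, _ = left_point
    rx, _ = right_point
    # Columns to the right of the point in the left image, plus columns to the
    # left of the point in the right image, are imaged twice.
    return int(round((left_image_width - lx) + rx))


# Example: the shared point sits 15 px from the right edge of a 640 px wide
# left image and 10 px from the left edge of the right image -> 25 px overlap.
print(overlap_width_from_points((625, 200), (10, 205), left_image_width=640))
```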
It should be noted that the overlapping area range and the overlapping portion in adjacent images, both mentioned several times above, stand in an object-image correspondence: when imaging through the target lens, the overlapping area range belongs to object space, and after imaging it corresponds, in image space, to the overlapping portion in the adjacent images. The overlapping area range therefore refers to the extent of a specific physical region, while the overlapping portion in adjacent images refers to a pixel range on the images.
As an alternative embodiment, determining the position parameters of the multiple target lenses may be implemented by: acquiring lens parameters of a plurality of target lenses and a to-be-scanned area range of a target object; and determining the position parameters according to the lens parameters and the range of the area to be scanned.
In this optional embodiment, the optimal arrangement of the multiple target lenses may be obtained by calculation from the focal length of the target lenses and the range of the area to be scanned; the position parameters then describe this optimal arrangement. For example, when the object distance between each target lens and the target object equals the focal length, the best imaging effect can be obtained; when the arrangement spacing of the target lenses takes an appropriate value, the imaging ranges of the target lenses can cover the entire area to be scanned without extending too far beyond it, a plurality of images of suitable size and sharpness can be obtained, and a target image of good quality containing the target object can finally be obtained.
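As a rough geometric illustration, not a formula prescribed by the patent, the sketch below derives the number of lenses and their pitch from the imaging range of a single lens, the desired overlap between neighbours, and the length of the area to be scanned; the example values are taken from the detailed embodiment described later.

```python
# Coverage sketch: n lenses of imaging range W with pairwise overlap d cover
# n*W - (n-1)*d of object space; solve for n and the lens pitch.
import math


def layout(scan_length_mm: float, imaging_range_mm: float, overlap_mm: float):
    """Return (number_of_lenses, lens_pitch_mm) covering the scan length."""
    step = imaging_range_mm - overlap_mm          # useful (non-repeated) width per lens
    # The small tolerance guards against floating-point noise at exact multiples.
    n = math.ceil((scan_length_mm - overlap_mm) / step - 1e-9)
    pitch = step                                  # centre-to-centre spacing of adjacent lenses
    return n, pitch


# Illustration with the 12.9 mm imaging range, 73.4 mm effective reading area
# and 0.8 mm overlap quoted in the detailed embodiment below:
n, pitch = layout(73.4, 12.9, 0.8)
print(n, round(pitch, 2))   # 6 12.1  -> six lenses at a 12.1 mm pitch
```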
As an optional embodiment, according to the position parameter, a plurality of target lenses are used to shoot the target object to obtain a plurality of first images, and target point locations are marked on the plurality of first images to obtain a plurality of second images, which can be implemented by the following steps: under the condition that the position parameters comprise the arrangement space, the object distance and the imaging range of the target lenses, determining the range of an overlapping area in the imaging range according to the arrangement space; according to the position parameters and the range of the overlapping area, a plurality of target lenses are adopted to shoot a target object to obtain a plurality of first images, target point positions are marked on the plurality of first images to obtain a plurality of second images, wherein a geometric distance identification graph is drawn on the target object, and the minimum identification length of the geometric distance identification graph is not larger than the arrangement distance.
Optionally, a plurality of target lenses arranged according to the position parameter are used to shoot the target object, so as to obtain a plurality of first images, the position parameter includes an arrangement distance of the plurality of target lenses and an imaging range of each target lens, so as to obtain an overlapping area range in the imaging ranges of the plurality of target lenses, the overlapping area range is an area range repeatedly shot on the target object, according to the overlapping area range and the minimum identification length of the target object, an approximate position of the overlapping area range on the image can be found, a point location is marked in the overlapping area range on the image, and a target point location can be obtained. The identification patterns of the geometric distance drawn by the target object can be grids, stripes or points, the distance between each two identification patterns is the minimum identification length and is also fixed, and the distance is not more than the arrangement distance between a plurality of target lenses, so that the identification patterns can be ensured to be arranged in the imaging range of each target lens, and the target object can play a role in identifying the physical distance. Optionally, after the plurality of second images are obtained, each of the plurality of second images may be preprocessed, where the preprocessing may include distortion data coefficient correction and position coefficient correction of the image, and is beneficial to subsequently synthesizing the image.
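A small sketch of the two constraints just described: the overlap region follows from the imaging range and the arrangement spacing, and the pitch of the identification pattern must not exceed that spacing. The variable names are choices made for this illustration.

```python
# Consistency checks for the quantities discussed above.
def overlap_region_mm(imaging_range_mm: float, lens_pitch_mm: float) -> float:
    # The region imaged by both of two adjacent lenses.
    return max(0.0, imaging_range_mm - lens_pitch_mm)


def marker_pitch_ok(marker_pitch_mm: float, lens_pitch_mm: float) -> bool:
    # "The minimum identification length of the pattern is not larger than the
    # arrangement distance", so every lens sees at least one marker.
    return marker_pitch_mm <= lens_pitch_mm


print(round(overlap_region_mm(12.9, 12.1), 3))   # 0.8 mm shared by adjacent lenses
print(marker_pitch_ok(1.0, 12.1))                # a 1 mm grid satisfies the constraint
```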
As an optional embodiment, according to the position parameter and the overlapping area range, a plurality of target lenses are used to shoot a target object to obtain a plurality of first images, and target point locations are marked on the plurality of first images to obtain a plurality of second images, which can be implemented by the following steps: according to the position parameters, aligning a first boundary of a framing frame of a first lens in the target lenses with the scale of the target object, wherein the first lens is a lens located at the linear arrangement starting point in the target lenses, the first boundary is a boundary close to a second lens in the framing frame of the first lens, and the second lens is adjacent to the first lens; according to the overlapping area range, under the condition that the first boundary is aligned with the scale of the target object, shooting the target object by using a first lens to obtain a first image corresponding to the first lens, and marking a first point on the first image corresponding to the first lens, wherein the physical position represented by the first point is in the overlapping area range in the imaging range corresponding to each of the first lens and the second lens; moving the target object according to the minimum mark length and the overlapping area range, so that the scale of the target object is aligned with a second boundary of a framing frame of the second lens, wherein the moving direction of the target object is the direction from the first lens to the second lens, and the second boundary is the boundary, close to the first lens, in the framing frame of the second lens; under the condition that the scale on the target object is aligned with the second boundary, shooting the target object by using a second lens to obtain a first image corresponding to the second lens, and marking a second point location on the image shot by the second lens, wherein the second point location is the same as the physical position represented by the first point location, the image shot by the second lens is one of the first images, and the target point location comprises the first point location and the second point location; and moving the target object for multiple times according to the minimum mark length and the range of the overlapping area, so that the scales of the target object are sequentially aligned with the boundaries, close to the first lenses, in the respective viewing frames of the target lenses, sequentially shooting the target object by using each lens of the target lenses to obtain multiple first images, and marking target point positions on the multiple first images.
Optionally, the following approach may be used to capture the plurality of first images with the plurality of target lenses and to mark point locations in each image. According to the position parameters, the plurality of target lenses shoot in sequence. For the first shot, the first lens, located at the starting point of the linear arrangement, is used: when the scale on the target object is aligned with the first boundary of the viewfinder frame of the first lens (the first boundary being the boundary of the first lens's viewfinder frame nearest the second lens), the target object is photographed, and a first point location is marked in the resulting first image corresponding to the first lens; when marking this first point location, it can be placed, according to the overlapping area range, so that it represents a physical position lying within that overlapping area range. Then, according to the minimum identification length and the overlapping area range, the target object is moved forward a small distance in the direction from the first lens toward the second lens, so that the scale on the target object is aligned with the second boundary of the viewfinder frame of the second lens (the second boundary being the boundary of the second lens's viewfinder frame nearest the first lens). The second lens is then used to photograph the target object, giving the first image corresponding to the second lens; the point corresponding to the same physical position as the first point location is found in that image and marked as the second point location, and the target point locations include the first point location and the second point location. Through these steps, several of the target point locations represent the same physical position, and the physical position they represent lies within the overlapping area range and is imaged in the overlapping portion of the adjacent images. In the same way, the target object may be moved step by step so that its scale is aligned in turn with the boundary, nearest the first lens, of the viewfinder frame of each target lens; each of the target lenses photographs the target object in turn, and the resulting images are point-marked to obtain the target point locations. It should be noted that the "scale" on the target object in this embodiment is not a graduated ruler; rather, an identification pattern for marking geometric distance is drawn on the target object, and this pattern may be a grid or stripes.
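Purely as an illustration of the order of operations in this capture-and-mark procedure, the sketch below wires together placeholder stage and lens interfaces; these placeholders are invented for the sketch and are not APIs from the patent.

```python
# Procedural sketch of the calibration capture sequence: align, shoot, mark,
# advance the target by a step tied to the marker pitch, repeat per lens.
from typing import Callable, List

import numpy as np


def calibrate(lenses: List[Callable[[], np.ndarray]],
              move_target_mm: Callable[[float], None],
              step_mm: float) -> List[np.ndarray]:
    """Capture one calibration image per lens, advancing the grid target
    between shots so each lens's viewfinder boundary lines up with a grid line."""
    captured = []
    for i, capture in enumerate(lenses):
        if i > 0:
            move_target_mm(step_mm)       # advance toward the next lens
        img = capture()                   # first image for this lens
        # In the real procedure an operator (or software) would now mark the
        # point(s) lying in the overlap with the neighbouring lens.
        captured.append(img)
    return captured


# Dummy wiring so the sketch runs as-is:
fake_lenses = [lambda: np.zeros((100, 120), dtype=np.uint8) for _ in range(6)]
images = calibrate(fake_lenses, move_target_mm=lambda mm: None, step_mm=0.2)
print(len(images))  # 6
```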
Fig. 3 is a schematic structural diagram of an image scanning apparatus according to an alternative embodiment of the present invention. As shown in fig. 3, the image scanning apparatus includes a frame 1, which carries the target lenses, a light source and a photoelectric conversion module inside the apparatus. The imaging arrangement of the image scanning apparatus is explained in this embodiment by taking 6 target lenses as an example. The 6 target lenses 110-115 are arranged linearly in sequence along the scanning direction, and each target lens forms a reduced image at a certain ratio. The photoelectric conversion module includes a chip substrate 3 for converting the optical signal obtained by imaging into a digital signal for output, on which 6 photoelectric conversion chips 120-125 are mounted. The photoelectric conversion chips and the target lenses face each other at intervals in a preset direction and correspond one to one, and the imaging areas of any two adjacent photoelectric conversion chips overlap in the scanning direction, the preset direction being the direction in which the optical axes of the target lenses extend. The detected object-plane areas covered by the individual target lenses are 130-135 respectively, i.e. the imaging-range areas of the target lenses mentioned above, which include the overlapping areas D11-D15; removing the overlapping areas from the union of all the imaging ranges leaves the area range to be scanned in which the target object lies. The first photoelectric conversion chip 120 scans the corresponding detection object plane 130 through the imaging of the corresponding lens 110, the second photoelectric conversion chip 121 scans the corresponding detection object plane 131 through the imaging of the corresponding target lens 111, and the overlapping area of object planes 130 and 131 is D11. By analogy, the photoelectric conversion chips 120-125 scan the corresponding detection object planes 130-135 under the imaging of their corresponding target lenses, with overlapping ranges D11-D15 respectively.
In this specific embodiment, when describing the size of each region, only the length along the direction of the linear lens arrangement is considered. The length perpendicular to the linear arrangement direction is 12.9 mm for all ranges in this embodiment; since this value does not affect the subsequent calculations along the linear arrangement direction, it is not mentioned again when those calculations are discussed. Along the linear lens arrangement direction, the specific values in this embodiment are: the length of the area range to be scanned is 73.4 mm and the focal length of the target lenses is 14 mm. According to calculation, a good result is obtained after scanning and imaging the target object when the position parameters of the target lenses take the following values: the scanning area of each target lens, i.e. the imaging ranges 130 = 131 = 132 = 133 = 134 = 135 = 12.9 mm; the overlapping portion of the imaging ranges of two adjacent target lenses D11 = D12 = D13 = D14 = D15 = 0.4 mm; the effective reading area of the 6 images is 73.4 mm; and the distance from the target lenses to the working object plane is 14 mm.
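For readers who want to check the figures, the short calculation below relates the quantities quoted above; the sensor resolution used to convert the physical overlap into a pixel-width stitching parameter is purely an assumed value for illustration, since the patent does not state one.

```python
# Quick arithmetic check of the coverage relation for the quoted figures,
# plus a hypothetical mm-to-pixel conversion of the overlap.
n_lenses = 6
imaging_range_mm = 12.9
effective_reading_area_mm = 73.4

overlap_mm = (n_lenses * imaging_range_mm - effective_reading_area_mm) / (n_lenses - 1)
print(round(overlap_mm, 3))           # 0.8 mm per adjacent pair, matching the
                                      # value used in the point-marking step below

assumed_dpi = 600                     # hypothetical sensor resolution (not from the patent)
overlap_px = overlap_mm / 25.4 * assumed_dpi
print(round(overlap_px))              # ~19 px of repeated image per joint at that resolution
```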
In this embodiment, a proof sheet bearing a specific black-and-white grid pattern may be chosen as the target object. Since the overlapping portion of the imaging ranges of two adjacent target lenses has been determined to be 0.4 mm, the size of a single black-and-white grid square may be chosen as 1 mm × 1 mm. The first target lens selected for shooting is the leftmost of all the target lenses, so the grid proof is moved to the right and the next target lens to shoot is the one to the right of the first lens. First, the grid proof is moved to the right until the right boundary of the viewfinder frame of the first lens lies at the centre of a grid square, so that the distance from that right boundary to the nearest grid line is 0.5 mm; the target object is photographed at this moment to obtain the first image, i.e. a single grid square at the boundary of the first image represents a size of 1 mm × 0.5 mm. Since the length of the overlapping portion of the imaging ranges of two adjacent target lenses is 0.8 mm, the specific physical position of a point marked nearby on the grid line closest to the right boundary of the first lens's viewfinder frame lies within the overlapping portion and close to its middle, which helps the overlapping area range in two adjacent images to be identified accurately in subsequent steps. Fig. 4 is a schematic diagram of target point location marker positions provided in accordance with an alternative embodiment of the present invention. As shown in fig. 4, 4 point locations, labelled A1-A4, are marked on the grid lines of the first image closest to its left and right boundaries, as positioning-coordinate markers for subsequent processing of the image. As shown in fig. 4, the x-axis direction is the direction in which the lenses are linearly arranged, movement along the x-axis direction is movement to the right, and the y-axis direction is perpendicular to the linear lens arrangement.
Next, the images from the first and second lenses are adjusted so that the grid squares at the boundary portions of the first image and the second image are of equal size. Specifically, by moving the grid proof a further 0.2 mm to the right, the left boundary of the viewfinder frame of the second lens is brought to the centre of a grid square, so that the distance from that left boundary to the nearest grid line is 0.5 mm; the target object is photographed at this moment to obtain the second image, in which a single grid square at the boundary represents a size of 1 mm × 0.5 mm, and the grid squares at the boundary portions of the first and second images are of equal size. As shown in fig. 4, on the second image thus obtained, four point locations are marked on the grid lines closest to the left and right boundaries, labelled B1-B4, where the physical positions indicated by B1 and B2 at the left boundary are the same as those indicated by A3 and A4 respectively. Taking A3 and B1 as an example, it should be emphasised that although the points indicated by A3 and B1 are the same point on the target object, the target object was moved a distance of 0.2 mm between the first image and the second image, so the physical position corresponding to the points indicated by A3 and B1 actually carries an error of 0.2 mm; considering the errors present in practical applications, including the shooting error of the target lens itself, the positional error caused by moving the target object in this embodiment is acceptable.
Next, the positions of the subsequent images are adjusted in turn and the point-marked images are acquired. Specifically, in this embodiment, taking 6 images as an example, the grid proof is successively moved 0.1 mm to the right and the remaining target lenses shoot in sequence, giving the subsequent 4 images, in each of which a grid square at the image boundary represents a size of 1 mm × 0.5 mm. As shown in fig. 4, 4 point locations are marked at four fixed grid positions in each image in turn, labelled B1-B4, C1-C4, D1-D4, E1-E4 and F1-F4.
Next, a plurality of images are preprocessed. The preprocessing mode comprises distortion data coefficient correction and position coefficient correction of the image, and the specific method is that correction is carried out according to a correction table of a lens and a position, so that a picture which is more in line with the actual image of a target object can be obtained.
Next, the overlapping and redundant portions of the plurality of images are processed. Because each image is independent and the imaging area of each lens is larger than the image area actually required, overlapping portions exist between adjacent images in the scanning direction, i.e. the x-axis direction. As shown in fig. 4, the point locations indicated by the arrows are the point markers of two images at the same position in the overlapping area, and the overlapping portions of the images can be processed according to these markers. In the x direction, taking the images of the two adjacent display areas 130 and 131 as an example, the areas shown in the two images share an overlapping portion D11: point A3 of the first image 130 and point B1 of the second image 131 are the same image point at the same position in the overlapping area D11, and point A4 and point B2 are likewise the same image point, so the marker points at the same position can be overlapped, the pixel width of the overlapping area in the two adjacent images can be identified, the stitching parameters corresponding to the multiple images can be obtained, and one of the two partial images representing the D11 region can be removed. At the same time, the unnecessary image area in the sub-scanning direction y is also removed: taking the 4 point locations of one image and their coordinates as reference positions, the image outside the dotted line shown in fig. 4 is removed. Preferably, after the overlapping and redundant portions have been removed from a single image in this embodiment, the actual area represented by the remaining image is 12.9 mm × 12.5 mm.
Next, the plurality of images are synthesized to obtain a complete image. Fig. 5 is a schematic diagram of a point-overlapping manner according to an alternative embodiment of the present invention. As shown in fig. 5, when the first image 130 and the second image 131 are synthesized, for example, the point markers B1 and B2 of the second image are moved onto the corresponding points A3 and A4 of the first image respectively, thereby completing the seamless stitching of the first and second images; the stitching of the subsequent images is completed in the same way, completing the synthesis of the whole image.
Finally, once the stitching parameters corresponding to the position parameters of the target lenses have been determined using the grid proof, those stitching parameters can be read directly to crop and stitch the plurality of images obtained by shooting other target objects with target lenses arranged according to the same position parameters; there is no need to recalculate from the position parameters after every capture of a plurality of images, which improves image synthesis efficiency in practical applications.
Fig. 6 is a schematic diagram of an image processing module according to an alternative embodiment of the present invention. As shown in fig. 6, the image processing module includes an I/F substrate 4 for data conversion and transmission: the signals generated by the photoelectric conversion chips 120-125 pass through the circuitry inside the I/F substrate, which converts signals in the MIPI data format into signals in the LVDS data format; the signals are then transferred inside the FPGA to complete the data transmission function. The data processing module comprises an image processing substrate 5 carrying an FPGA chip 6. The FPGA can read the image signals, control the timing period through the setting of internal registers, and feed information back to the image processing module; the FPGA also performs image preprocessing, which includes applying the image distortion correction coefficients and the image position correction coefficients. After preprocessing, an image that is easy for the human eye to interpret is obtained; the overlapping and redundant portions of the images are then removed and the multiple images are stitched, realizing the correction and synthesis of the multiple images. After the images are synthesized, they are transmitted through the serial port of the image processing substrate to the PC of an external display module; the serial port operates in the FULL mode of the image processing substrate, with a maximum transmission speed of 680 MByte/s, and the image is displayed as a whole frame.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art will appreciate that the embodiments described in this specification are presently preferred and that no acts or modules are required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the image synthesis method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
According to an embodiment of the present invention, there is also provided an apparatus for implementing the image synthesis method described above. Fig. 7 is a structural block diagram of the image synthesis apparatus according to an embodiment of the present invention. As shown in Fig. 7, the image synthesis apparatus includes a first data processing module 72, a shooting module 74, a second data processing module 76 and a synthesizing module 78, which are described below.
A first data processing module 72, configured to determine position parameters of a plurality of target lenses.
And the shooting module 74 is connected to the first data processing module 72, and is configured to shoot the target object using a plurality of target lenses according to the position parameters to obtain a plurality of first images, and mark target point locations on the plurality of first images respectively to obtain a plurality of second images, where the plurality of target lenses are arranged linearly.
And the second data processing module 76 is connected to the shooting module 74, and is configured to determine stitching parameters corresponding to the multiple second images according to the target point location, where the stitching parameters include pixel widths of overlapping portions between adjacent images in the multiple second images.
And the synthesizing module 78, connected to the second data processing module 76 and configured to synthesize the plurality of second images into a target image including the target object according to the stitching parameters.
It should be noted here that the first data processing module 72, the shooting module 74, the second data processing module 76 and the synthesizing module 78 correspond to steps S202 to S208 in the above embodiment; the examples and application scenarios implemented by the four modules are the same as those of the corresponding steps, but are not limited to the disclosure of the above embodiment. It should also be noted that the above modules, as a part of the apparatus, may run in the computer terminal 10 provided in the embodiment.
According to an embodiment of the present invention, there is also provided a portable scanner for implementing the image synthesis method, and fig. 8 is a block diagram of a structure of the portable scanner according to an embodiment of the present invention, as shown in fig. 8, the portable scanner includes: a frame module 82, a plurality of linearly arranged object lenses 84, an FPGA chip 86, and a memory chip 88, which will be described below.
And the frame module 82 is used for bearing a plurality of linearly arranged target lenses, an FPGA chip and a memory chip.
And a plurality of linearly arranged object lenses 84 for photographing an object, wherein the plurality of object lenses are linearly arranged.
The FPGA chip 86 is configured to mark target point locations on the plurality of first images respectively to obtain a plurality of second images, the plurality of first images being captured respectively by the plurality of target lenses; to determine stitching parameters corresponding to the plurality of second images according to the target point locations; and to synthesize the plurality of second images into a target image including the target object according to the stitching parameters, where the stitching parameters include the pixel widths of the overlapping portions between adjacent images in the first images.
And the storage chip 88 is used for storing a plurality of first images, a plurality of second images and splicing parameters.
An embodiment of the present invention may provide a computer device, and optionally, in this embodiment, the computer device may be located in at least one network device of a plurality of network devices of a computer network. The computer device includes a memory and a processor.
The memory may be configured to store software programs and modules, such as program instructions/modules corresponding to the image synthesis method and apparatus in the embodiments of the present invention, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, so as to implement the image synthesis method. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, and these remote memories may be connected to the computer terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: determining position parameters of a plurality of target lenses; shooting a target object by adopting a plurality of target lenses according to the position parameters to obtain a plurality of first images, and marking target point positions on the plurality of first images respectively to obtain a plurality of second images, wherein the plurality of target lenses are arranged linearly; determining splicing parameters corresponding to the plurality of second images according to the target point positions, wherein the splicing parameters comprise pixel widths of overlapping parts between adjacent images in the plurality of second images; and synthesizing the plurality of second images into a target image comprising the target object according to the splicing parameters.
Optionally, the processor may further execute the program code of the following steps: synthesizing the plurality of second images into a target image including the target object according to the splicing parameters, including: removing repeated parts in the second images according to the splicing parameters to obtain third images, wherein the pixel width of the repeated parts is the same as that of the overlapped parts; and synthesizing the third images into a target image according to the arrangement sequence of the target lenses.
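A minimal sketch of this step, assuming equal-height NumPy images and assuming the repeated part is removed from the left edge of the later image of each adjacent pair (cropping the earlier image instead would work equally well):

```python
import numpy as np

def remove_overlaps_and_stitch(second_images, overlap_widths):
    # second_images: list of H x W arrays in the arrangement order of the target lenses.
    # overlap_widths[i]: pixel width of the overlap between image i and image i + 1.
    third_images = [second_images[0]]
    for img, overlap in zip(second_images[1:], overlap_widths):
        # The repeated part has the same pixel width as the overlapping part.
        third_images.append(img[:, overlap:])
    # Synthesize the third images in the arrangement order of the target lenses.
    return np.hstack(third_images)

# Tiny usage example with synthetic data: three 4 x 6 strips overlapping by 2 pixels.
strips = [np.full((4, 6), i) for i in range(3)]
target_image = remove_overlaps_and_stitch(strips, overlap_widths=[2, 2])
assert target_image.shape == (4, 6 + 4 + 4)
```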
Optionally, the processor may further execute the program code of the following steps: determining splicing parameters corresponding to the plurality of second images according to the target point positions, wherein the determining comprises the following steps: overlapping the point positions representing the same physical position in the target point positions to partially overlap two adjacent images in the plurality of second images; and determining a splicing parameter according to the overlapping part of two adjacent images in the plurality of second images.
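As one assumed way of turning the coinciding point locations into the splicing parameter, the sketch below computes the overlap pixel width for a single adjacent pair from the column at which the shared mark appears in each image; it presumes the two second images are already row-aligned and that the left image's pixel width is known.

```python
def overlap_pixel_width(width_left, x_left, x_right):
    # x_left: column of the marked point in the earlier (left) image.
    # x_right: column of the same physical point in the later (right) image.
    # Bringing the two points into coincidence shifts the right image by
    # (x_left - x_right) pixels, so the two images share the remaining columns.
    shift = x_left - x_right
    return width_left - shift

# Example: a mark at column 1180 of a 1280-pixel-wide left image and at column 60
# of the right image gives an overlap of 1280 - (1180 - 60) = 160 pixels.
assert overlap_pixel_width(1280, 1180, 60) == 160
```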
Optionally, the processor may further execute the program code of the following steps: determining position parameters of a plurality of target lenses, comprising: acquiring lens parameters of a plurality of target lenses and a to-be-scanned area range of a target object; and determining the position parameters according to the lens parameters and the range of the area to be scanned.
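The embodiment does not state a formula for this step, so purely as an assumed planning rule the following sketch derives a lens count and an arrangement space from the width of the area to be scanned, the imaging range of a single lens and a desired minimum overlap; all three inputs and the rule itself are illustrative.

```python
import math

def plan_lens_positions(scan_width_mm, imaging_range_mm, min_overlap_mm):
    # Choose the arrangement space so that adjacent imaging ranges overlap by at
    # least min_overlap_mm, then compute how many lenses cover scan_width_mm.
    spacing = imaging_range_mm - min_overlap_mm
    count = 1 + math.ceil(max(scan_width_mm - imaging_range_mm, 0) / spacing)
    centers = [imaging_range_mm / 2 + i * spacing for i in range(count)]
    return {"count": count, "arrangement_space_mm": spacing, "centers_mm": centers}

# Example: a 210 mm wide scan area, a 60 mm imaging range per lens and at least
# 8 mm of overlap gives four lenses spaced 52 mm apart.
print(plan_lens_positions(210, 60, 8))
```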
Optionally, the processor may further execute the program code of the following steps: shooting the target object with the plurality of target lenses according to the position parameters to obtain a plurality of first images, and marking target point locations on the plurality of first images to obtain a plurality of second images, including: in the case where the position parameters include the arrangement space, the object distance and the imaging range of the plurality of target lenses, determining the overlapping area range within the imaging range according to the arrangement space; and shooting the target object with the plurality of target lenses according to the position parameters and the overlapping area range to obtain the plurality of first images, and marking the target point locations on the plurality of first images to obtain the plurality of second images, where a geometric distance identification graph is drawn on the target object, and the minimum identification length of the geometric distance identification graph is not greater than the arrangement space.
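Geometrically, if identical lenses whose imaging range at the working object distance is F are placed with an arrangement space d, each adjacent pair of imaging ranges overlaps by F - d. A small sketch of that relationship, with the imaging range itself derived from the object distance and the lens field of view under a simple thin-lens assumption (the field-of-view figure is illustrative):

```python
import math

def imaging_range_width(object_distance_mm, full_fov_deg):
    # Width of one lens's imaging range at the object plane.
    return 2.0 * object_distance_mm * math.tan(math.radians(full_fov_deg) / 2.0)

def overlap_range(arrangement_space_mm, imaging_range_mm):
    # Overlap of two adjacent lenses' imaging ranges at the object plane.
    overlap = imaging_range_mm - arrangement_space_mm
    if overlap <= 0:
        raise ValueError("adjacent imaging ranges do not overlap; reduce the spacing")
    return overlap

# A 37 degree full field of view at a 90 mm object distance gives roughly a 60 mm
# imaging range; with a 52 mm arrangement space the overlap band is about 8 mm wide.
fov = imaging_range_width(90.0, 37.0)      # about 60.2 mm
assert round(overlap_range(52.0, 60.0)) == 8
```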
Optionally, the processor may further execute the program code of the following steps: according to the position parameter and the range of the overlapping area, a plurality of target lenses are adopted to shoot a target object to obtain a plurality of first images, target point positions are marked on the plurality of first images to obtain a plurality of second images, and the method comprises the following steps: according to the position parameters, aligning a first boundary of a framing frame of a first lens in the target lenses with the scale of the target object, wherein the first lens is a lens located at the linear arrangement starting point in the target lenses, the first boundary is a boundary close to a second lens in the framing frame of the first lens, and the second lens is adjacent to the first lens; according to the overlapping area range, under the condition that the first boundary is aligned with the scale of the target object, shooting the target object by using a first lens to obtain a first image corresponding to the first lens, and marking a first point on the first image corresponding to the first lens, wherein the physical position represented by the first point is in the overlapping area range in the imaging range corresponding to each of the first lens and the second lens; moving the target object according to the minimum mark length and the overlapping area range, so that the scale of the target object is aligned with a second boundary of the framing frame of the second lens, wherein the moving direction of the target object is the direction from the first lens to the second lens, and the second boundary is the boundary close to the first lens in the framing frame of the second lens; under the condition that the scale on the target object is aligned with the second boundary, shooting the target object by using a second lens to obtain a first image corresponding to the second lens, and marking a second point location on the image shot by the second lens, wherein the second point location is the same as the physical position represented by the first point location, the image shot by the second lens is one of the first images, and the target point location comprises the first point location and the second point location; and moving the target object for multiple times according to the minimum mark length and the range of the overlapping area, so that the scales of the target object are sequentially aligned with the boundaries, close to the first lenses, in the respective viewing frames of the target lenses, sequentially shooting the target object by using each lens of the target lenses to obtain multiple first images, and marking target point positions on the multiple first images.
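Extending the single-pair computation to the whole linear array, the following sketch (an assumed bookkeeping helper, not the embodiment's algorithm) takes, for every adjacent lens pair, the column at which the shared mark appears in each of the two second images, and returns the list of overlap pixel widths, i.e. the splicing parameters gathered by the capture-and-mark procedure described above.

```python
def overlap_widths_along_array(image_widths, mark_columns):
    # image_widths[i]: pixel width of second image i (lens i of the linear array).
    # mark_columns[i]: (column of the shared mark in image i,
    #                   column of the same mark in image i + 1).
    widths = []
    for i, (col_left, col_right) in enumerate(mark_columns):
        widths.append(image_widths[i] - (col_left - col_right))
    return widths

# Example for four lenses (three adjacent pairs) with 1280-pixel-wide images.
assert overlap_widths_along_array(
    [1280, 1280, 1280],
    [(1180, 60), (1175, 55), (1190, 70)],
) == [160, 160, 160]
```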
The embodiment of the invention provides an image synthesis scheme: determining position parameters of a plurality of target lenses; shooting a target object with the plurality of target lenses according to the position parameters to obtain a plurality of first images, and marking target point locations on the plurality of first images respectively to obtain a plurality of second images, where the plurality of target lenses are arranged linearly; determining splicing parameters corresponding to the plurality of second images according to the target point locations, where the splicing parameters include the pixel width of the overlapping portion between adjacent images in the plurality of second images; and synthesizing the plurality of second images into a target image including the target object according to the splicing parameters. This achieves the purpose of reducing the amount of calculation in the image synthesis process, realizes the technical effect of improving image synthesis efficiency, and solves the technical problem in the related art of low image synthesis efficiency caused by the large amount of calculation in the image synthesis process.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a non-volatile storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Embodiments of the present invention also provide a non-volatile storage medium. Optionally, in this embodiment, the non-volatile storage medium may be configured to store the program code for executing the image synthesis method provided in the above embodiment.
Optionally, in this embodiment, the nonvolatile storage medium may be located in any one of computer terminals in a computer terminal group in a computer network, or in any one of mobile terminals in a mobile terminal group.
Optionally, in this embodiment, the non-volatile storage medium is configured to store program code for performing the following steps: determining position parameters of a plurality of target lenses; shooting a target object by adopting a plurality of target lenses according to the position parameters to obtain a plurality of first images, and marking target point positions on the plurality of first images respectively to obtain a plurality of second images, wherein the plurality of target lenses are arranged linearly; determining splicing parameters corresponding to the plurality of second images according to the target point positions, wherein the splicing parameters comprise pixel widths of overlapping parts between adjacent images in the plurality of second images; and synthesizing the plurality of second images into a target image comprising the target object according to the splicing parameters.
Optionally, in this embodiment, the non-volatile storage medium is configured to store program code for performing the following steps: synthesizing the plurality of second images into a target image including the target object according to the splicing parameters, including: removing repeated parts in the second images according to the splicing parameters to obtain third images, wherein the pixel width of the repeated parts is the same as that of the overlapped parts; and synthesizing the plurality of third images into the target image according to the arrangement sequence of the plurality of target lenses.
Optionally, in this embodiment, the non-volatile storage medium is configured to store program code for performing the following steps: determining splicing parameters corresponding to the plurality of second images according to the target point positions, wherein the determining comprises the following steps: overlapping the point positions representing the same physical position in the target point positions to partially overlap two adjacent images in the plurality of second images; and determining a splicing parameter according to the overlapping part of two adjacent images in the plurality of second images.
Optionally, in this embodiment, the non-volatile storage medium is configured to store program code for performing the following steps: determining position parameters of a plurality of target lenses, comprising: acquiring lens parameters of a plurality of target lenses and a to-be-scanned area range of a target object; and determining the position parameters according to the lens parameters and the range of the area to be scanned.
Optionally, in this embodiment, the non-volatile storage medium is configured to store program code for performing the following steps: shooting the target object with the plurality of target lenses according to the position parameters to obtain a plurality of first images, and marking target point locations on the plurality of first images to obtain a plurality of second images, including: in the case where the position parameters include the arrangement space, the object distance and the imaging range of the plurality of target lenses, determining the overlapping area range within the imaging range according to the arrangement space; and shooting the target object with the plurality of target lenses according to the position parameters and the overlapping area range to obtain the plurality of first images, and marking the target point locations on the plurality of first images to obtain the plurality of second images, where a geometric distance identification graph is drawn on the target object, and the minimum identification length of the geometric distance identification graph is not greater than the arrangement space.
Optionally, in this embodiment, the non-volatile storage medium is configured to store program code for performing the following steps: according to the position parameter and the range of the overlapping area, a plurality of target lenses are adopted to shoot a target object to obtain a plurality of first images, target point positions are marked on the plurality of first images to obtain a plurality of second images, and the method comprises the following steps: according to the position parameters, aligning a first boundary of a framing frame of a first lens in the target lenses with the scale of the target object, wherein the first lens is a lens located at the linear arrangement starting point in the target lenses, the first boundary is a boundary close to a second lens in the framing frame of the first lens, and the second lens is adjacent to the first lens; according to the overlapping area range, under the condition that the first boundary is aligned with the scale of the target object, shooting the target object by using a first lens to obtain a first image corresponding to the first lens, and marking a first point on the first image corresponding to the first lens, wherein the physical position represented by the first point is in the overlapping area range in the imaging range corresponding to each of the first lens and the second lens; moving the target object according to the minimum mark length and the overlapping area range, so that the scale of the target object is aligned with a second boundary of a framing frame of the second lens, wherein the moving direction of the target object is the direction from the first lens to the second lens, and the second boundary is the boundary, close to the first lens, in the framing frame of the second lens; under the condition that the scale on the target object is aligned with the second boundary, shooting the target object by using a second lens to obtain a first image corresponding to the second lens, and marking a second point location on the image shot by the second lens, wherein the second point location is the same as the physical position represented by the first point location, the image shot by the second lens is one of the first images, and the target point location comprises the first point location and the second point location; and moving the target object for multiple times according to the minimum mark length and the range of the overlapping area, so that the scales of the target object are sequentially aligned with the boundaries, close to the first lenses, in the respective viewing frames of the target lenses, sequentially shooting the target object by using each lens of the target lenses to obtain multiple first images, and marking target point positions on the multiple first images.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a non-volatile storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. An image synthesis method, comprising:
determining position parameters of a plurality of target lenses;
shooting a target object by adopting the target lenses according to the position parameters to obtain a plurality of first images, and marking target point positions on the first images respectively to obtain a plurality of second images, wherein the target lenses are arranged linearly;
determining splicing parameters corresponding to the plurality of second images according to the target point position, wherein the splicing parameters comprise the pixel width of the overlapping part between the adjacent images in the plurality of second images;
and synthesizing the plurality of second images into a target image comprising the target object according to the splicing parameters.
2. The method according to claim 1, wherein synthesizing the plurality of second images into a target image including the target object according to the stitching parameter comprises:
removing repeated parts in the second images according to the splicing parameters to obtain third images, wherein the pixel width of the repeated parts is the same as that of the overlapped parts;
and synthesizing the plurality of third images into the target image according to the arrangement sequence of the plurality of target lenses.
3. The method according to claim 1, wherein the determining the stitching parameters corresponding to the plurality of second images according to the target point location comprises:
overlapping two adjacent images in the plurality of second images by overlapping the point positions representing the same physical position in the target point position;
and determining the splicing parameters according to the overlapping parts of two adjacent images in the second images.
4. The method of claim 1, wherein determining the position parameters of the plurality of target shots comprises:
acquiring lens parameters of the target lenses and a to-be-scanned area range of the target object;
and determining the position parameters according to the lens parameters and the range of the area to be scanned.
5. The method according to claim 1, wherein the capturing the target object using the target lenses according to the position parameters to obtain a plurality of first images, and marking target points on the plurality of first images to obtain a plurality of second images comprises:
under the condition that the position parameters comprise the arrangement space, the object distance and the imaging range of the target lenses, determining the range of the overlapping area in the imaging range according to the arrangement space;
and shooting the target object by adopting the target lenses according to the position parameters and the overlapping area range to obtain a plurality of first images, and marking target point positions on the plurality of first images to obtain a plurality of second images, wherein a geometric distance identification graph is drawn on the target object, and the minimum identification length of the geometric distance identification graph is not more than the arrangement distance.
6. The method according to claim 5, wherein the capturing the target object using the plurality of target lenses according to the position parameter and the overlapping area range to obtain the plurality of first images, and marking target points on the plurality of first images to obtain the plurality of second images comprises:
according to the position parameters, aligning a first boundary of a framing frame of a first lens in the target lenses with a scale of the target object, wherein the first lens is a lens located at a linear arrangement starting point in the target lenses, the first boundary is a boundary close to a second lens in the framing frame of the first lens, and the second lens is adjacent to the first lens;
according to the overlapping area range, when the first boundary is aligned with the scale of the target object, shooting the target object by using the first lens to obtain a first image corresponding to the first lens, and marking a first point position on the first image corresponding to the first lens, wherein the physical position represented by the first point position is within the overlapping area range in the imaging range corresponding to each of the first lens and the second lens;
moving the target object according to the minimum mark length and the overlapping area range, so that the scale of the target object is aligned with a second boundary of a framing frame of a second lens, wherein the moving direction of the target object is a direction pointing to the second lens from the first lens, and the second boundary is a boundary, close to the first lens, in the framing frame of the second lens;
under the condition that the scale on the target object is aligned with the second boundary, shooting the target object by using the second lens to obtain a first image corresponding to the second lens, and marking a second point location on the image shot by the second lens, wherein the second point location is the same as the physical position represented by the first point location, the image shot by the second lens is one of the first images, and the target point location comprises the first point location and the second point location;
and moving the target object multiple times in the moving direction, according to the minimum mark length and the overlapping area range, so that the scales of the target object are sequentially aligned with the boundaries, close to the first lenses, in the respective viewing frames of the target lenses, sequentially shooting the target object by using each lens of the target lenses to obtain the first images, and marking the target point positions on the first images.
7. An image synthesizing apparatus, comprising:
the first data processing module is used for determining position parameters of a plurality of target lenses;
the shooting module is used for shooting a target object by adopting the target lenses according to the position parameters to obtain a plurality of first images, and marking target point positions on the first images respectively to obtain a plurality of second images, wherein the target lenses are linearly arranged;
the second data processing module is used for determining splicing parameters corresponding to the plurality of second images according to the target point position, wherein the splicing parameters comprise pixel widths of overlapping parts between adjacent images in the plurality of second images;
and the synthesis module is used for synthesizing the plurality of second images into a target image comprising the target object according to the splicing parameters.
8. A portable scanner, comprising: a frame module, a plurality of linearly arranged target lenses, an FPGA chip and a storage chip; wherein,
the frame module is used for bearing the plurality of linearly arranged target lenses, the FPGA chip and the storage chip;
the plurality of linearly arranged target lenses are used for shooting a target object to obtain a plurality of first images;
the FPGA chip is used for marking target point positions on the first images respectively to obtain second images, determining splicing parameters corresponding to the second images according to the target point positions, and synthesizing the second images into a target image comprising the target object according to the splicing parameters, wherein the splicing parameters comprise pixel widths of overlapping parts between adjacent images in the first images;
the storage chip is used for storing the plurality of first images, the plurality of second images and the splicing parameter.
9. A non-volatile storage medium, comprising a stored program, wherein a device on which the non-volatile storage medium is located is controlled to perform the image synthesis method according to any one of claims 1 to 6 when the program is executed.
10. A computer device, characterized in that the computer device comprises a processor for executing a program, wherein the program is executed to perform the image synthesis method according to any one of claims 1 to 6.
CN202210779921.7A 2022-07-04 2022-07-04 Image synthesis method, image synthesis device, portable scanner and non-volatile storage medium Pending CN115131211A (en)

Priority Applications (1)

CN202210779921.7A, priority and filing date 2022-07-04: Image synthesis method, image synthesis device, portable scanner and non-volatile storage medium

Publications (1)

CN115131211A, published 2022-09-30

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination