CN114727019A - Image splicing method and device - Google Patents

Image splicing method and device

Info

Publication number
CN114727019A
Authority
CN
China
Prior art keywords
image
area
sub
region
images
Prior art date
Legal status
Pending
Application number
CN202210359649.7A
Other languages
Chinese (zh)
Inventor
李浩
陈培煜
李富强
Current Assignee
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN202210359649.7A
Publication of CN114727019A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • G06T3/14
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker
    • G06T2207/30208 Marker matrix

Abstract

The embodiment of the application discloses an image splicing method and device. In the technical scheme provided by the embodiment, a target area is divided to obtain at least two sub-areas; image acquisition is performed on each sub-area by a zoom camera to obtain an area image of each sub-area, where different sub-areas correspond to different focal lengths; and the area images are spliced to obtain a target image. This scheme solves the problem that a complete, clear picture cannot be obtained, improving both the completeness and the definition of the acquired picture and the user experience.

Description

Image splicing method and device
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to an image splicing method and device.
Background
With the popularization of electronic teaching, more and more students use family education machines to assist their learning. When doing so, students rely on the front-facing camera of the machine for point-and-read, question answering, photographing, question searching, and similar functions. However, when a student answers in a large scene area, such as on an A3 test paper, the field of view of the current machine's front camera cannot cover it completely.
Existing schemes use a fixed-focus front camera, whose field of view and range are fixed, to capture a clear picture; for answering areas outside this field of view, the user must be prompted interactively to reposition the exercise book. Because the fixed-focus camera's field of view is relatively fixed, regions far from the focal point cannot be acquired clearly, and repeatedly prompting the user through the interactive end results in a poor user experience.
Disclosure of Invention
The embodiment of the application provides an image splicing method and device, which can solve the problem that all pictures cannot be clearly obtained, improve the integrity and the definition of the obtained pictures, and improve the user experience.
In a first aspect, an embodiment of the present application provides an image stitching method, including:
dividing a target area to obtain at least two sub-areas;
respectively carrying out image acquisition on each sub-area through a zooming camera to obtain an area image of each sub-area, wherein different sub-areas correspond to different focal lengths;
and splicing the area images to obtain a target image.
Further, the acquiring images of each sub-region by the zoom camera to obtain the region image of each sub-region includes:
detecting whether an obstruction exists in each of the sub-regions;
and when the blocking object is detected to leave the sub-area, acquiring an image of the corresponding sub-area through the zooming camera to obtain a corresponding area image.
Further, the acquiring images of each sub-region by the zoom camera to obtain the region image of each sub-region includes:
detecting whether finger and/or pen occlusion exists in each sub-region;
and if the finger and/or pen is blocked, acquiring the image of the sub-area through the zoom camera after the finger and/or pen is detected to leave the sub-area, and obtaining a corresponding area image.
Further, the acquiring images of each sub-region by the zoom camera to obtain the region image of each sub-region includes:
and adjusting the focal length through the zoom camera to acquire an image of each sub-area unit, so as to obtain a region image of each sub-area unit, wherein the area of a view picture of the region image of each sub-area unit is larger than that of the corresponding sub-area.
Further, the adjusting the focal length through the zoom camera to acquire an image of each sub-area unit includes:
determining the focal length of the zooming camera according to the position of the sub-region to be image-acquired, wherein the farther the position of the sub-region to be image-acquired is from the zooming camera, the larger the focal length is, and the closer the position of the sub-region to be image-acquired is from the zooming camera, the smaller the focal length is;
and controlling the zoom camera to shoot the corresponding area image according to the determined focal length.
Further, the stitching the area images includes:
reading at least two area images, identifying the area images, and acquiring image characteristics;
performing feature matching on the acquired image features to obtain feature matching points;
performing camera parameter optimization and area image mapping processing according to the matching result, and determining a seam line according to the feature matching point;
and fusing and splicing the mapped area images according to the seam lines.
Further, the performing of camera parameter optimization and area image mapping processing according to the matching result includes:
calibrating the camera parameters of the area images to be spliced according to the feature matching result;
optimizing camera parameters of the area images to be spliced according to a bundle adjustment method;
and mapping the area images to be spliced according to the optimized camera parameters to obtain the area images after mapping.
Further, after the region images after the map processing are merged and spliced according to the seam line, the method further includes:
reserving a region with a preset distance at the position of the seam line, and carrying out seam optimization processing on the region with the preset distance.
Further, before the splicing the region images to obtain the target image, the method further includes:
and for the sub-area which has been subjected to image acquisition, if the existence of the blocking object is detected again, acquiring the image of the corresponding sub-area again through the zoom camera after the blocking object is detected to leave the sub-area, and updating to obtain the corresponding area image.
Further, the dividing the target region to obtain at least two sub-regions includes:
acquiring an image through a front camera, and identifying the region boundary of a target image;
and dividing the target area into at least two sub-areas according to the area boundary of the target image and a preset division rule to obtain the identifier of each sub-area.
Further, after the splicing is performed on the region images to obtain the target image, the method further includes:
and uploading the spliced target image to a background server, so that the background server can call the target image in response to a request of a user side, and the user can examine and approve the target image.
In a second aspect, an embodiment of the present application provides an image stitching apparatus, including:
the region dividing unit is used for dividing the target region to obtain at least two sub-regions;
the area image acquisition unit is used for respectively acquiring images of the sub-areas through the zoom camera to obtain an area image of each sub-area, wherein different sub-areas correspond to different focal lengths;
and the image splicing unit is used for splicing the area images to obtain a target image.
Further, the area image acquiring unit is further configured to detect whether an obstruction exists in each of the sub-areas;
and when the blocking object is detected to leave the sub-area, acquiring an image of the corresponding sub-area through the zooming camera to obtain a corresponding area image.
Further, the area image acquiring unit is further configured to detect whether each of the sub-areas is occluded by a finger and/or a pen;
and if the finger and/or pen is blocked, acquiring the image of the sub-area through the zoom camera after the finger and/or pen is detected to leave the sub-area, and obtaining a corresponding area image.
Further, the area image acquiring unit is further configured to acquire an image of each sub-area by adjusting the focal length of the zoom camera to obtain an area image of each sub-area, where the field-of-view picture area of each area image is larger than that of the corresponding sub-area.
Further, the area image acquiring unit is further configured to determine a focal length of the zoom camera according to a position of a sub-area to be image-acquired, where the farther the position of the sub-area to be image-acquired is from the zoom camera, the larger the focal length is, and the closer the position of the sub-area to be image-acquired is from the zoom camera, the smaller the focal length is;
and controlling the zoom camera to shoot the corresponding area image according to the determined focal length.
Further, the image splicing unit is further configured to read at least two area images, identify the area images, and acquire image characteristics;
performing feature matching on the acquired image features to obtain feature matching points;
performing camera parameter optimization and area image mapping processing according to the matching result, and determining a seam line according to the feature matching point;
and fusing and splicing the mapped area images according to the seam lines.
Furthermore, the image splicing unit is also used for calibrating the camera parameters of the area images to be spliced according to the feature matching result;
optimizing the camera parameters of the area images to be spliced according to a bundle adjustment method;
and mapping the area images to be spliced according to the optimized camera parameters to obtain the area images after mapping.
Further, the device further comprises an optimization unit;
the optimizing unit is used for reserving a region with a preset distance at the seam line and optimizing the seam of the region with the preset distance.
Further, the area image acquiring unit is further configured to acquire an image of the sub-area through the zoom camera again after the blocking object is detected to leave the sub-area if the presence of the blocking object is detected again, and update the image of the corresponding area to obtain the corresponding area image.
Further, the device also comprises an image area acquisition unit;
the image area acquisition unit is used for acquiring images through a front camera and identifying the area boundary of a target image;
the region dividing unit is further configured to divide the target region into at least two sub-regions according to the region boundary of the target image and a preset dividing rule, so as to obtain an identifier of each sub-region.
Further, the device also comprises an image uploading unit;
the image uploading unit is used for uploading the spliced target image to a background server so that the background server can call the target image in response to a request of a user side and the user can examine and approve the target image.
In a third aspect, an embodiment of the present application provides an image stitching apparatus, including:
a memory and one or more processors;
the memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image stitching method of the first aspect.
In a fourth aspect, embodiments of the present application provide a storage medium storing computer-executable instructions for performing the image stitching method according to the first aspect when executed by a computer processor.
The method comprises the steps that a target area is divided to obtain at least two sub-areas; respectively carrying out image acquisition on each sub-area through a zoom camera to obtain an area image of each sub-area, wherein different sub-areas correspond to different focal lengths; and splicing the area images to obtain a target image. By adopting the technical means, the target area is divided into the sub-areas, the sub-areas are subjected to image acquisition through the zoom camera, clear area images at different positions can be captured, and the definition of the acquired images is improved. Meanwhile, the spliced target images are spliced by the clear area images obtained by the zooming camera, so that the overall definition of the picture is improved, the watching experience of a user is improved when the spliced target images are checked by the user, and the experience of the user is improved.
Drawings
Fig. 1 is a flowchart of an image stitching method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a target image area of a response scene according to an embodiment of the present application;
fig. 3 is a schematic diagram of region division provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of a region image capture provided by an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating stitching of two images according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a spliced plurality of images according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram illustrating optimization of seam lines after image stitching according to an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an image stitching apparatus according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an image stitching apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, specific embodiments of the present application will be described in detail with reference to the accompanying drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some but not all of the relevant portions of the present application are shown in the drawings. Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
The image splicing method and device aim at image acquisition: after the target area is divided into at least two sub-areas, each sub-area is imaged separately by the zoom camera so that clear area images at different positions are captured, improving the definition of the acquired images. The clear area images are then spliced into a clear target image, ensuring the completeness of the obtained image; because the spliced target image is composed of clear area images obtained by the zoom camera, the overall definition of the picture is improved, so the user's viewing experience is better when checking the spliced target image. In the traditional capture mode, a fixed-focus camera is typically used; its field of view is relatively fixed, regions far from the focal point cannot be captured clearly, and the user must be reminded through the interactive end to move the camera or to move the object in the region to be captured, resulting in a poor user experience. The image stitching method provided by the embodiment of the application is therefore proposed to solve the problem that a complete, clear picture cannot be acquired in the existing image capture process.
Fig. 1 shows a flowchart of an image stitching method according to an embodiment of the present application, where the image stitching method provided in this embodiment may be executed by an image stitching device, the image stitching device may be implemented in a software and/or hardware manner, and the image stitching device may be formed by two or more physical entities or may be formed by one physical entity. Generally, the image stitching device may be a terminal device, such as a computer, a tablet, or a mobile phone.
The following description takes a tablet device as an example of the execution body of the image stitching method. Referring to fig. 1, the image stitching method specifically includes:
s101, dividing the target area to obtain at least two sub-areas.
And acquiring an image through a front camera, identifying the region boundary of the target image, and obtaining the region boundary of the target image and the boundary coordinate identification of the target image. And dividing the target area into at least two sub-areas according to the area boundary of the target image and a preset division rule to obtain the identifier of each sub-area.
In one embodiment, the front camera is used for image acquisition, the area boundary of the target image is recognized, and the area boundary of the target image and the boundary coordinate identification of the target image are obtained. And dividing the target region into at least two sub-regions according to the region boundary of the target image and a preset division rule to obtain the identifier of each sub-region, wherein the preset division rule can be an equal-area frame, and performing equal-area division on the target image region to obtain at least two sub-regions with equal areas. The identifier of each sub-region may be a symbol identifier or a coordinate identifier.
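The equal-area division described above can be illustrated with a minimal sketch. The function name, the (row, col) identifier convention, and the A3 dimensions in the example are assumptions for illustration; the patent does not specify an implementation:

```python
def divide_target_region(a, b, k):
    """Divide the rectangle with corners (0, 0) and (a, b) into k x k
    equal sub-regions, as in the preset equal-area division rule.

    Returns a dict mapping a symbol identifier (row, col) to the
    sub-region's coordinate identifier (x0, y0, x1, y1).
    """
    w, h = a / k, b / k  # width and height of one sub-region
    regions = {}
    for row in range(k):
        for col in range(k):
            x0, y0 = col * w, row * h
            regions[(row + 1, col + 1)] = (x0, y0, x0 + w, y0 + h)
    return regions

# Example: a 420 x 297 mm A3 sheet divided into 5 x 5 = 25 sub-regions
grid = divide_target_region(420, 297, 5)
```

Each sub-region thus carries both a symbol identifier (the dict key) and a coordinate identifier (the dict value), matching the two identifier kinds mentioned above.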
S102, respectively carrying out image acquisition on each sub-area through a zoom camera to obtain an area image of each sub-area, wherein different sub-areas correspond to different focal lengths.
Detecting whether a shelter exists in each sub-area, wherein the shelter can be a hand, a pen or the like. And when the blocking object is detected to leave the sub-area, acquiring an image of the corresponding sub-area through the zooming camera to obtain a corresponding area image. And adjusting the focal length through the zooming camera to acquire images of each sub-area. Determining the focal length of the zooming camera according to the position of the sub-region to be image-acquired, wherein the farther the position of the sub-region to be image-acquired from the zooming camera is, the larger the focal length is, and the closer the position of the sub-region to be image-acquired from the zooming camera is, the smaller the focal length is. And controlling the zoom camera to shoot corresponding area images according to the determined focal length to obtain the area image of each sub-area. The zooming camera can be a zooming camera carried by an external holder device, and can also be a front-facing camera. The zoom camera can capture a far-end text area and ensure that the acquired image is clearer.
In an embodiment, it is detected whether a blocking object is present in each sub-area, where the blocking object may be a hand and/or a pen. When the blocking object is detected leaving a sub-area, an image of that sub-area is acquired through the zoom camera to obtain the corresponding area image. The focal length is adjusted through the zoom camera for each acquisition: it is determined according to the position of the sub-area to be captured, being larger the farther that sub-area is from the zoom camera and smaller the closer it is. The zoom camera is then controlled to shoot the corresponding area image at the determined focal length, obtaining the area image of each sub-area. The field-of-view picture area of each area image is larger than that of the corresponding sub-area, so that every two adjacent area images share an overlapping picture, enabling subsequent feature matching. The zoom camera may be a zoom camera mounted on an external pan-tilt device or the front-facing camera itself; it can capture the far-end text area and ensure that the acquired image is clear.
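The focal-length rule stated above (farther sub-region, larger focal length) can be sketched as a monotone mapping from distance to focal length. The linear interpolation, the clamped working range, and all numeric limits are illustrative assumptions, not values from the patent:

```python
import math

def focal_length_for(subregion_center, camera_pos,
                     f_min=4.0, f_max=12.0, d_min=0.2, d_max=0.8):
    """Pick a focal length (mm) that grows with the distance (m) between
    the zoom camera and the sub-region to be image-acquired."""
    dx = subregion_center[0] - camera_pos[0]
    dy = subregion_center[1] - camera_pos[1]
    d = math.hypot(dx, dy)
    # Clamp to the assumed working range, then interpolate linearly.
    t = (min(max(d, d_min), d_max) - d_min) / (d_max - d_min)
    return f_min + t * (f_max - f_min)
```

A nearby sub-region maps to the short end of the range and a distant one to the long end, so the camera can keep each sub-area in sharp focus before shooting.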
In an embodiment, for a sub-region that has already been subjected to image acquisition, if it is detected that a blocking object exists again, after it is detected that the blocking object leaves the sub-region, an image of the corresponding sub-region is acquired again through the zoom camera, and a corresponding region image is obtained by updating.
S103, splicing the area images to obtain a target image.
At least two area images are read and identified to obtain image features, and the obtained features are matched to obtain feature matching points. Camera parameter optimization and area image mapping are then performed according to the matching result: the camera parameters of the area images to be spliced are calibrated according to the feature matching result, optimized by the bundle adjustment method, and the area images are mapped according to the optimized parameters to obtain the mapped area images. A seam line is determined from the feature matching points, and the mapped region images are fused and spliced along it. A region of a preset distance is reserved at the seam line, and seam optimization is applied to that region. Finally, the spliced target image is uploaded to a background server, so that the server can return the target image in response to a request from the user side for review and approval.
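The fusing step can be sketched in simplified form. The real pipeline described here uses feature matching and bundle adjustment to place the images; the sketch below replaces that with offsets known from the grid identifiers and averages pixels in the overlap bands, purely to illustrate how oversized area images are composited onto one canvas (all names and the grayscale 2-D image assumption are illustrative):

```python
import numpy as np

def stitch_by_identifier(region_images, step):
    """Paste each area image onto a shared canvas at the offset implied
    by its (row, col) identifier, averaging pixels where the slightly
    oversized region images overlap.

    region_images: dict {(row, col): 2-D float array}; each image may be
    larger than step x step so that neighbours share an overlap band.
    """
    rows = max(r for r, _ in region_images)
    cols = max(c for _, c in region_images)
    ih, iw = next(iter(region_images.values())).shape
    acc = np.zeros(((rows - 1) * step + ih, (cols - 1) * step + iw))
    cnt = np.zeros_like(acc)
    for (r, c), img in region_images.items():
        y, x = (r - 1) * step, (c - 1) * step
        acc[y:y + ih, x:x + iw] += img   # accumulate pixel values
        cnt[y:y + ih, x:x + iw] += 1     # count contributing images
    return acc / np.maximum(cnt, 1)      # average in overlap bands
```

Splicing by identifier rather than by acquisition order mirrors the patent's point that re-captured sub-regions still land in the correct position.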
In an embodiment, fig. 2 is a schematic view of a target image area of an answering scene provided in an embodiment of the present application. Referring to fig. 2, the image stitching method is applied to a scene in which an answer sheet needs to be uploaded during homework answering or an examination on the family education machine. Take the answer sheet being an A3 test sheet as an example: the user opens the camera of the family education machine in advance and places the A3 answer sheet in the area facing the front camera, so that the image obtained by shooting the A3 answer sheet is the target image, i.e. the image of the complete A3 answer sheet. Images are acquired through the front camera, and the region boundary of the complete A3 answer sheet image is identified to obtain its image region. The complete A3 answer sheet image region is then divided according to a preset region division frame to obtain at least two sub-regions; the number of divided regions can be set according to the actual situation.
For example, fig. 3 is a schematic diagram of region division provided in the embodiment of the present application. Referring to fig. 3, the front camera of the family education machine first captures a complete image of the A3 answer sheet as the target image, which is then divided into K x K sub-regions, where K may be set according to actual conditions and the coordinates of each sub-region can be calculated accurately. For example, dividing the A3 answer-sheet target image into 25 equal sub-regions (K = 5) gives an ideal splicing effect.
In one embodiment, referring to fig. 2, the front camera is used for image acquisition, and the region boundary of the target image is identified, so as to obtain the region boundary coordinate identifications (0,0), (a,0), (a, b) and (0, b). According to the region boundary of the target image and a preset division rule, assuming that the preset division rule is equal-area division, dividing the target region into K x K sub-regions, wherein K can be specifically set according to the actual situation. And each sub-region can accurately calculate the coordinates to obtain the identifier of the sub-region. The sub-region identifier may be a symbol identifier, such as (K1, K1), or a coordinate identifier, such as (0, b-K1), (K1, b-K1), (K1, b), and (0, b). For example, the coordinate designations of the sub-regions (K1, K1) are found as (0, b-K1), (K1, b-K1), (K1, b), and (0, b). And when splicing subsequent images, splicing in sequence can be carried out according to the corresponding identifiers. For example, sequentially according to the order of (K1, K1), (K1, K2), or sequentially according to the coordinate designation.
In an embodiment, fig. 4 is a schematic diagram of collecting an area image according to an embodiment of the present application, and referring to fig. 3 and fig. 4, by moving a zoom camera in an area image of divided sub-areas, whether a blocking object exists in each sub-area is detected in real time, where the blocking object is a hand or a pen. When a user answers a question, the user usually enters a corresponding area with hands and a pen to answer the question and write the question, and then leaves the area with hands and the pen to reach an area corresponding to a next question to answer the question after the question is answered. Therefore, when the hand and the pen leave a certain subarea, the subarea is considered to be completely answered, and the image acquisition is carried out on the subarea to acquire an image after answering. When the condition that the hands and the pens of the user leave a certain sub-area is detected, the focal length is adjusted through the zoom camera, so that the focal point is aligned with the sub-area to acquire an image, a clear view picture of the sub-area is obtained, and a corresponding area image is obtained. For example, it is detected that the hand and the pen of the user answer in the sub-area (K1, K1), when it is detected that the hand and the pen of the user leave the sub-area (K1, K1), the focal length is adjusted by the zoom camera so as to focus on the sub-area (K1, K1), and image acquisition is performed, so that an area image with a larger area of the visual field picture (1,1) than the corresponding sub-area (K1, K1) is obtained. 
Similarly, when the user's hand and pen are detected leaving sub-area (K1, K2), the focal length is adjusted by the zoom camera so that the focus is aligned with sub-area (K1, K2) and an image is acquired, obtaining an area image whose field-of-view picture (1,2) is larger than the corresponding sub-area (K1, K2). When the user's hand and pen move back into a sub-area that has already been captured, for example back into (K1, K1), then once they are detected leaving that sub-area again, the focal length is adjusted to focus on it and the image is re-acquired, obtaining an area image whose field-of-view picture is larger than the corresponding sub-area; the newly acquired area image of sub-area (K1, K1) replaces the previously acquired one, and the stored area image is updated. After the user finishes answering, images of all divided sub-areas have been acquired, yielding all area images, which are then spliced. During splicing, the images are stitched in order of their corresponding identifiers, for example in the order (K1, K1), (K1, K2), or in order of the coordinate identifiers, so that even when some sub-areas were captured multiple times, the images are spliced according to region position rather than acquisition order, avoiding splicing errors and improving the efficiency of image splicing. The zoom camera may be a zoom camera mounted on an external pan-tilt device or the front-facing camera itself; it can capture the far-end text area and ensure that the acquired image is clear.
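The capture-on-leave and re-capture-on-return behaviour described above amounts to a small per-sub-region state machine. A sketch follows; the class, method names, and event interface are illustrative assumptions, not part of the patent:

```python
class SubRegionCapture:
    """Track occlusion state per sub-region and keep only the latest
    image captured after the blocker (hand/pen) left that region."""

    def __init__(self):
        self.images = {}      # identifier -> latest stored area image
        self.occluded = set() # identifiers currently blocked

    def on_blocker_enter(self, region_id):
        self.occluded.add(region_id)

    def on_blocker_leave(self, region_id, capture_fn):
        # Capture (or re-capture) once the region is clear; a later
        # capture replaces the stored image for the same identifier.
        self.occluded.discard(region_id)
        self.images[region_id] = capture_fn(region_id)

    def ready_to_stitch(self, expected_ids):
        # Stitch only when nothing is occluded and every sub-region
        # has at least one stored image.
        return not self.occluded and set(self.images) == set(expected_ids)
```

Because a revisit simply overwrites the stored image under the same identifier, the later identifier-ordered stitching step is unaffected by how many times a sub-region was answered.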
In an embodiment, fig. 5 is a schematic diagram of stitching two images provided in an embodiment of the present application. Referring to fig. 5, two adjacent area images are read, the area images are identified to obtain image features, and the obtained image features are subjected to feature matching to obtain feature matching points. The feature extraction and matching may be performed based on the SIFT feature method or the SURF feature method. Two or more area images may be read. Fig. 6 is a schematic diagram of stitching a plurality of images provided in an embodiment of the present application. Referring to fig. 6, camera parameter optimization and area image mapping processing are performed according to the matching result: the camera parameters of the area images to be stitched are calibrated according to the feature matching result, the camera parameters are optimized by a bundle adjustment method, and the area images to be stitched are mapped according to the optimized camera parameters to obtain the mapped area images. A seam line is determined according to the feature matching points, and the mapped area images are fused and stitched along the seam line. Fig. 7 is a schematic diagram of seam line optimization after image stitching according to an embodiment of the present application. Referring to fig. 7, a region of a preset width is reserved at the seam line and subjected to seam optimization processing, which may be seam line fusion-degree optimization or seam line smoothness optimization. The stitched target image is uploaded to a background server, so that the background server can retrieve the target image in response to a request from a user side for the user to review and approve.
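The feature-matching step can be sketched as nearest-neighbour descriptor matching with a ratio test, as is commonly done with SIFT descriptors. The descriptors below are toy 2-D vectors and the function name is an assumption; in practice a library such as OpenCV would supply SIFT/SURF detection, bundle adjustment, and seam finding.

```python
import math

def match_features(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour descriptor matching with a ratio test.

    desc_a, desc_b: lists of (keypoint_id, descriptor_vector).
    Returns a list of (id_a, id_b) feature matching points."""
    def dist(u, v):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

    matches = []
    for id_a, da in desc_a:
        ranked = sorted(desc_b, key=lambda kv: dist(da, kv[1]))
        if len(ranked) >= 2:
            best, second = ranked[0], ranked[1]
            # Accept only when the best match is clearly better than the
            # runner-up; this suppresses ambiguous matches.
            if dist(da, best[1]) < ratio * dist(da, second[1]):
                matches.append((id_a, best[0]))
    return matches

a = [("p1", (0.0, 1.0)), ("p2", (5.0, 5.0))]
b = [("q1", (0.1, 1.0)), ("q2", (9.0, 9.0)), ("q3", (0.0, 4.0))]
print(match_features(a, b))
```

Here `p1` matches `q1` unambiguously, while `p2` is rejected because its two nearest candidates are too close in distance.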
The image stitching method provided by this embodiment can be applied to students' examinations or homework. After the student finishes answering, image stitching is completed automatically at the background server to obtain a complete image of the answered test paper, without any additional interactive prompt. When a user (a teacher or a parent) needs to review or approve the answered test paper, the background server retrieves the stitched complete image of the answered test paper according to the request from the user side, so that the user can review or approve it at the client side, which greatly improves the user experience.
In an embodiment, referring to fig. 5, the area images of two adjacent sub-area units are read, the area images of the sub-area units are identified to obtain image features, and the obtained image features are subjected to feature matching to obtain feature matching points. The regions (K1, K1) and (K1, K2) represent sub-areas, and the regions (1,1) and (1,2) represent sub-area units. The feature extraction and matching may be performed based on the SIFT feature method or the SURF feature method. Two or more sub-area unit area images may be read. When the area images of two adjacent sub-area units are acquired, the area of the acquired view picture needs to be larger than the area of the pre-divided sub-area, so that an intersection exists between the area images of two adjacent sub-area units and matched feature points can be found between the two adjacent images during stitching. Referring to fig. 6, camera parameter optimization and area image mapping processing are performed according to the matching result: the camera parameters of the area images to be stitched are calibrated according to the feature matching result, the camera parameters are optimized by a bundle adjustment method, and the area images to be stitched are mapped according to the optimized camera parameters to obtain the mapped area images. A seam line is determined according to the feature matching points, and the mapped area images are fused and stitched along the seam line. Referring to fig. 7, a region of a preset width is reserved at the seam line and subjected to seam optimization processing, which may be seam line fusion-degree optimization or seam line smoothness optimization.
The stitched target image is uploaded to a background server, so that the background server can retrieve the target image in response to a request from a user side for the user to review and approve. The image stitching method provided by this embodiment can be applied to students' examinations or homework: after the student finishes answering, image stitching is completed automatically at the background server to obtain a complete image of the answered test paper, without any additional interactive prompt. When a user (a teacher or a parent) needs to review or approve the answered test paper, the background server retrieves the stitched complete image according to the request from the user side, so that the user can review or approve it at the client side, which greatly improves the user experience.
In an embodiment, when the area image of a sub-area unit is acquired, the area of the acquired view picture needs to be larger than the area of the pre-divided sub-area, so that an intersection exists between the area images of two adjacent sub-area units and matched feature points can be found between the two adjacent images during stitching. Preferably, the overlapping part between the area images of two adjacent sub-area units accounts for about 25%-50% of each area image, so that an overlap exists between two adjacent pictures during stitching and matched feature points exist between them. In theory, the larger the overlapping part between the area images of two adjacent sub-area units, the higher the accuracy of feature point matching, but the larger the overlapping part, the slower the feature matching operation. Therefore, as a trade-off between operation speed and accuracy, setting the overlapping part to about 25%-50% of the area image improves the operation speed of feature point matching while guaranteeing accuracy.
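The 25%-50% overlap constraint can be checked with simple arithmetic. The regular-grid layout and the function name below are illustrative assumptions: sub-regions are taken to lie on a grid with pitch `sub_width`, and each capture window has width `view_width` larger than the pitch.

```python
def overlap_fraction(sub_width, view_width):
    """Fraction of an area image overlapped by the adjacent capture,
    for sub-regions laid out on a regular grid with pitch `sub_width`
    and a view picture of width `view_width` > `sub_width`."""
    # Adjacent view windows are centred `sub_width` apart, so they
    # share `view_width - sub_width` of width.
    return max(view_width - sub_width, 0) / view_width

# A view 1/3 wider than the sub-region gives 25% overlap;
# a view twice the sub-region width gives 50%.
lo = overlap_fraction(sub_width=300, view_width=400)
hi = overlap_fraction(sub_width=300, view_width=600)
print(lo, hi)
```

So the preferred 25%-50% range corresponds to a view picture roughly 1.33x to 2x the sub-region pitch under this assumed layout.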
In summary, at least two sub-areas are obtained by dividing the target area; image acquisition is performed on each sub-area through a zoom camera to obtain an area image of each sub-area, where different sub-areas correspond to different focal lengths; and the area images are stitched to obtain a target image. With this technical means, the target area is divided into sub-areas and each sub-area is imaged by the zoom camera, so clear area images at different positions can be captured and the definition of the acquired images is improved. Moreover, since the target image is stitched from the clear area images obtained by the zoom camera, the overall definition of the picture is improved, and the viewing experience of the user is improved when the stitched target image is checked.
On the basis of the foregoing embodiment, fig. 8 is a schematic structural diagram of an image stitching device according to an embodiment of the present application. Referring to fig. 8, the image stitching apparatus provided in this embodiment specifically includes: an area dividing unit 21, an area image acquiring unit 22, and an image stitching unit 23.
The region dividing unit 21 is configured to divide a target region to obtain at least two sub-regions;
the area image acquiring unit 22 is configured to acquire an image of each sub-area through a zoom camera, so as to obtain an area image of each sub-area, where different sub-areas correspond to different focal lengths;
and the image splicing unit 23 is configured to splice the area images to obtain a target image.
Further, the area image acquiring unit 22 is further configured to detect whether an obstruction exists in each of the sub-areas;
and when the blocking object is detected to leave the sub-area, acquiring an image of the corresponding sub-area through the zooming camera to obtain a corresponding area image.
Further, the area image acquiring unit 22 is further configured to detect whether finger and/or pen occlusion exists in each of the sub-areas;
and if the finger and/or pen is blocked, acquiring the image of the sub-area through the zoom camera after the finger and/or pen is detected to leave the sub-area, and obtaining a corresponding area image.
Further, the area image acquiring unit 22 is further configured to perform image acquisition on each sub-area unit by adjusting a focal length through the zoom camera, so as to obtain an area image of each sub-area unit, where a view frame area of the area image of each sub-area unit is larger than an area of a corresponding sub-area.
Further, the area image acquiring unit 22 is further configured to determine a focal length of the zoom camera according to the position of the sub-area to be acquired, where the farther the sub-area to be acquired is from the zoom camera, the larger the focal length, and the closer it is, the smaller the focal length;
and controlling the zoom camera to shoot the corresponding area image according to the determined focal length.
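The distance-to-focal-length rule can be sketched as a monotone mapping. The linear form and the constants (`f_min`, `f_max`, `d_min`, `d_max`) are assumptions for illustration, since the description specifies only that farther sub-areas receive longer focal lengths.

```python
def focal_length_mm(distance_cm, f_min=4.0, f_max=12.0,
                    d_min=20.0, d_max=60.0):
    """Map a sub-area's distance from the zoom camera to a focal length.
    Farther sub-areas get longer focal lengths, clamped to the lens range."""
    d = min(max(distance_cm, d_min), d_max)   # clamp to supported distances
    t = (d - d_min) / (d_max - d_min)         # 0.0 (nearest) .. 1.0 (farthest)
    return f_min + t * (f_max - f_min)

near = focal_length_mm(20.0)   # nearest row of sub-areas
far = focal_length_mm(60.0)    # farthest row
print(near, far)
```

Any monotonically increasing mapping (or a per-sub-area lookup table calibrated once) would satisfy the rule; the linear interpolation is just the simplest choice.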
Further, the image stitching unit 23 is further configured to read at least two area images, identify the area images, and obtain image features;
performing feature matching on the acquired image features to obtain feature matching points;
performing camera parameter optimization and area image mapping processing according to the matching result, and determining a seam line according to the feature matching point;
and fusing and splicing the mapped area images according to the seam lines.
Further, the image stitching unit 23 is further configured to calibrate a camera parameter of the area image to be stitched according to the feature matching result;
optimizing the camera parameters of the area images to be spliced according to a bundle adjustment method;
and mapping the area images to be spliced according to the optimized camera parameters to obtain the area images after mapping.
Further, the device further comprises an optimization unit;
the optimizing unit is used for reserving a region with a preset distance at the seam line and optimizing the seam of the region with the preset distance.
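The seam optimization step, blending within a band of preset width around the seam line, can be sketched with a linear feather. Pure-Python lists stand in for pixel rows, and the band width and linear weights are illustrative assumptions; the patent leaves the exact fusion-degree or smoothness optimization unspecified.

```python
def feather_row(left_row, right_row, seam, band=2):
    """Blend two overlapping pixel rows across a band of +/- `band`
    pixels around the seam column, taking `left_row` to the left of
    the band and `right_row` to the right of it."""
    out = []
    for x in range(len(left_row)):
        if x < seam - band:
            out.append(left_row[x])
        elif x > seam + band:
            out.append(right_row[x])
        else:
            # Weight shifts linearly from left to right across the band.
            w = (x - (seam - band)) / (2 * band)
            out.append((1 - w) * left_row[x] + w * right_row[x])
    return out

left = [100] * 8    # intensities from the left area image
right = [200] * 8   # intensities from the right area image
print(feather_row(left, right, seam=4, band=2))
```

Within the reserved band the output ramps smoothly from the left image's value to the right image's, removing the hard intensity step at the seam.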
Further, the area image acquiring unit 22 is further configured to, for a sub-area that has already undergone image acquisition, if the blocking object is detected again, acquire the image of the corresponding sub-area again through the zoom camera after the blocking object is detected to leave the sub-area, and update the corresponding area image.
Further, the device also comprises an image area acquisition unit;
the image area acquisition unit is used for acquiring images through a front camera and identifying the area boundary of a target image;
the region dividing unit is further configured to divide the target region into at least two sub-regions according to the region boundary of the target image and a preset dividing rule, so as to obtain an identifier of each sub-region.
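The region-division step, splitting the recognized page boundary into a grid of identified sub-areas, can be sketched as below. The rows-by-columns grid rule and the (row, col) identifiers are assumptions patterned on the (K1, K1)/(K1, K2) labels used in the description; the actual preset dividing rule is not specified by the patent.

```python
def divide_region(boundary, rows, cols):
    """Divide a target region (x0, y0, x1, y1) into rows*cols sub-areas.
    Returns {(row, col): (x0, y0, x1, y1)}, so that stitching can later
    proceed in identifier order rather than acquisition order."""
    x0, y0, x1, y1 = boundary
    w = (x1 - x0) / cols
    h = (y1 - y0) / rows
    return {
        (r, c): (x0 + c * w, y0 + r * h,
                 x0 + (c + 1) * w, y0 + (r + 1) * h)
        for r in range(rows)
        for c in range(cols)
    }

# Divide a 600x800 page boundary (from the front camera) into a 4x2 grid.
subs = divide_region((0, 0, 600, 800), rows=4, cols=2)
print(len(subs), subs[(0, 0)], subs[(3, 1)])
```

Sorting the captured area images by these (row, col) identifiers reproduces the position-ordered stitching described earlier, regardless of the order in which sub-areas were captured or recaptured.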
Further, the device also comprises an image uploading unit;
the image uploading unit is used for uploading the spliced target image to a background server so that the background server can call the target image in response to a request of a user side and the user can examine and approve the target image.
In the above, at least two sub-regions are obtained by dividing the target region; respectively carrying out image acquisition on each sub-area through a zoom camera to obtain an area image of each sub-area, wherein different sub-areas correspond to different focal lengths; and splicing the area images to obtain a target image. By adopting the technical means, the target area is divided into the sub-areas, the sub-areas are subjected to image acquisition through the zoom camera, clear area images at different positions can be captured, and the definition of the acquired images is improved. Meanwhile, the spliced target images are spliced by the clear area images obtained by the zooming camera, so that the overall definition of the picture is improved, the watching experience of a user is improved when the spliced target images are checked by the user, and the experience of the user is improved.
The image stitching device provided by the embodiment of the application can be used for executing the image stitching method provided by the embodiment, and has corresponding functions and beneficial effects.
An embodiment of the present application provides an image stitching device, and with reference to fig. 9, the image stitching device includes: a processor 31, a memory 32, a communication module 33, an input device 34, and an output device 35. The number of processors in the image stitching device may be one or more, and the number of memories in the image stitching device may be one or more. The processor, the memory, the communication module, the input device and the output device of the image stitching device can be connected through a bus or other modes.
The memory 32 is used as a computer-readable storage medium for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the image stitching method according to any of the embodiments of the present application (for example, the region dividing unit, the region image acquiring unit, and the image stitching unit in the image stitching device). The memory can mainly comprise a program storage area and a data storage area, wherein the program storage area can store an operating system and an application program required by at least one function; the storage data area may store data created according to use of the device, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory may further include memory located remotely from the processor, and these remote memories may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The communication module 33 is used for data transmission.
The processor 31 executes various functional applications of the device and data processing by running software programs, instructions and modules stored in the memory, that is, implements the image stitching method described above.
The input device 34 may be used to receive entered numeric or character information and to generate key signal inputs relating to user settings and function controls of the apparatus. The output device 35 may include a display device such as a display screen.
The image stitching equipment provided by the embodiment can be used for executing the image stitching method provided by the embodiment, and has corresponding functions and beneficial effects.
Embodiments of the present application also provide a storage medium storing computer-executable instructions, which when executed by a computer processor, are configured to perform an image stitching method, the image stitching method including: dividing a target area to obtain at least two sub-areas; respectively carrying out image acquisition on each sub-area through a zoom camera to obtain an area image of each sub-area, wherein different sub-areas correspond to different focal lengths; and splicing the area images to obtain a target image.
Storage medium - any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROM, floppy disks, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory, magnetic media (e.g., a hard disk), or optical storage; registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or in a different second computer system connected to the first computer system through a network (such as the internet). The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media residing in different locations, e.g., in different computer systems connected by a network. The storage medium may store program instructions (e.g., embodied as a computer program) that are executable by one or more processors.
Of course, the storage medium storing the computer-executable instructions provided in the embodiments of the present application is not limited to the image stitching method described above, and may also perform related operations in the image stitching method provided in any embodiment of the present application.
The image stitching device, the storage medium, and the image stitching apparatus provided in the above embodiments may execute the image stitching method provided in any embodiment of the present application, and reference may be made to the image stitching method provided in any embodiment of the present application without detailed technical details described in the above embodiments.
The foregoing is considered as illustrative of the preferred embodiments of the invention and the technical principles employed. The present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the claims.

Claims (14)

1. An image stitching method, comprising:
dividing a target area to obtain at least two sub-areas;
respectively carrying out image acquisition on each sub-area through a zoom camera to obtain an area image of each sub-area, wherein different sub-areas correspond to different focal lengths;
and splicing the area images to obtain a target image.
2. The image stitching method according to claim 1, wherein the acquiring the image of each sub-region by the zoom camera to obtain the area image of each sub-region comprises:
detecting whether an obstruction exists in each of the sub-regions;
and when the blocking object is detected to leave the sub-area, acquiring an image of the corresponding sub-area through the zooming camera to obtain a corresponding area image.
3. The image stitching method according to claim 1, wherein the acquiring the image of each sub-region by the zoom camera to obtain the area image of each sub-region comprises:
detecting whether finger and/or pen occlusion exists in each sub-region;
and if the finger and/or pen is blocked, acquiring the image of the sub-area through the zoom camera after the finger and/or pen is detected to leave the sub-area, and obtaining a corresponding area image.
4. The image stitching method according to claim 1, wherein the acquiring the image of each sub-region by the zoom camera to obtain the area image of each sub-region comprises:
and adjusting the focal length through the zoom camera to acquire an image of each sub-area unit, so as to obtain an area image of each sub-area unit, wherein the area of a view picture of the area image of each sub-area unit is larger than that of the corresponding sub-area.
5. The image stitching method according to claim 4, wherein the adjusting the focal length by the zoom camera performs image acquisition on each sub-area unit, and comprises:
determining the focal length of the zoom camera according to the position of the sub-area to be acquired, wherein the farther the sub-area to be acquired is from the zoom camera, the larger the focal length, and the closer it is, the smaller the focal length;
and controlling the zoom camera to shoot the corresponding area image according to the determined focal length.
6. The image stitching method according to claim 1, wherein the stitching the area images comprises:
reading at least two area images, identifying the area images, and acquiring image characteristics;
performing feature matching on the acquired image features to obtain feature matching points;
performing camera parameter optimization and area image mapping processing according to the matching result, and determining a seam line according to the characteristic matching point;
and fusing and splicing the mapped area images according to the seam lines.
7. The image stitching method according to claim 6, wherein the performing of the camera parameter optimization and the region image mapping processing according to the matching result comprises:
calibrating the camera parameters of the area images to be spliced according to the feature matching result;
optimizing camera parameters of the area images to be spliced according to a bundle adjustment method;
and mapping the area images to be spliced according to the optimized camera parameters to obtain the area images subjected to mapping.
8. The image stitching method according to claim 6, further comprising, after the fusing and stitching of the mapped region images according to the seam line:
reserving a preset distance area at the seam line, and carrying out seam optimization treatment on the preset distance area.
9. The image stitching method according to claim 2, wherein before the stitching the area images to obtain the target image, the method further comprises:
and if the existence of the blocking object is detected again in the sub-area which is subjected to image acquisition, acquiring the image of the corresponding sub-area again through the zoom camera after the blocking object is detected to leave the sub-area, and updating to obtain the corresponding area image.
10. The image stitching method according to claim 1, wherein the dividing the target region into at least two sub-regions comprises:
acquiring an image through a front camera, and identifying the region boundary of a target image;
and dividing the target area into at least two sub-areas according to the area boundary of the target image and a preset division rule to obtain the identifier of each sub-area.
11. The image stitching method according to claim 1, wherein after the stitching the area images to obtain the target image, the method further comprises:
and uploading the spliced target image to a background server, so that the background server can call the target image in response to a request of a user side, and the user can examine and approve the target image.
12. An image stitching device, comprising:
the region dividing unit is used for dividing the target region to obtain at least two sub-regions;
the area image acquisition unit is used for respectively acquiring images of the sub-areas through the zoom camera to obtain area images of the sub-areas, wherein different sub-areas correspond to different focal lengths;
and the image splicing unit is used for splicing the area images to obtain a target image.
13. An image stitching device, characterized by comprising:
a memory and one or more processors;
the memory for storing one or more programs;
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-11.
14. A storage medium storing computer-executable instructions for performing the method of any one of claims 1-11 when executed by a computer processor.
CN202210359649.7A 2022-04-06 2022-04-06 Image splicing method and device Pending CN114727019A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210359649.7A CN114727019A (en) 2022-04-06 2022-04-06 Image splicing method and device


Publications (1)

Publication Number Publication Date
CN114727019A true CN114727019A (en) 2022-07-08

Family

ID=82242158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210359649.7A Pending CN114727019A (en) 2022-04-06 2022-04-06 Image splicing method and device

Country Status (1)

Country Link
CN (1) CN114727019A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103491298A (en) * 2013-09-13 2014-01-01 惠州Tcl移动通信有限公司 Multi-region real-time synthesis photographing method and touch control terminal
CN106572305A (en) * 2016-11-03 2017-04-19 乐视控股(北京)有限公司 Image shooting method, image processing method, apparatuses and electronic device
US20180063426A1 (en) * 2016-08-31 2018-03-01 Nokia Technologies Oy Method, apparatus and computer program product for indicating a seam of an image in a corresponding area of a scene
CN111027354A (en) * 2019-02-27 2020-04-17 广东小天才科技有限公司 Learning content acquisition method and learning equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination