CN114119370A - Image processing method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN114119370A
CN114119370A (application CN202111408896.3A)
Authority
CN
China
Prior art keywords
image
picture
determining
occluded
spliced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111408896.3A
Other languages
Chinese (zh)
Inventor
陈明珠
谢南菊
王松
蒋玉平
郑雪红
陈腊荣
潘霄凌
颜江南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202111408896.3A priority Critical patent/CN114119370A/en
Publication of CN114119370A publication Critical patent/CN114119370A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

An embodiment of the invention provides an image processing method and device, a storage medium, and an electronic device. The method includes: acquiring a first image of a target area through a first camera device; when the first image contains an occluded picture, acquiring, through a second camera device, a second image of the occlusion region corresponding to the occluded picture, where the shooting magnifications of the first and second camera devices are linked; stitching the first image and the second image according to first image feature points in the first image and second image feature points in the second image to determine a stitched image; and performing fusion processing on the stitched image to determine a target image. The invention thereby addresses the image-stitching problem in the related art, reduces scene fragmentation, improves the visualization effect, and improves video processing capability in intelligent scenes.

Description

Image processing method and device, storage medium and electronic device
Technical Field
The embodiment of the invention relates to the field of images, in particular to an image processing method and device, a storage medium and an electronic device.
Background
Video surveillance is limited by camera field-of-view angles and occlusion, so multiple camera channels are often networked for monitoring. However, multiple channels require multiple screen displays, which are hard for observers to follow and lack visual continuity. On the premise of intelligent analysis, this fragmented presentation of information also hinders behavior analysis and judgment.
For the problem of image stitching, no effective solution has been proposed in the related art.
Disclosure of Invention
The embodiment of the invention provides an image processing method and device, a storage medium and an electronic device, which are used for at least solving the problem of image splicing in the related technology.
According to an embodiment of the present invention, there is provided an image processing method including: acquiring a first image of a target area through a first camera device; when the first image contains an occluded picture, acquiring, through a second camera device, a second image of the occlusion region corresponding to the occluded picture, where the shooting magnifications of the first camera device and the second camera device are linked; stitching the first image and the second image according to first image feature points in the first image and second image feature points in the second image to determine a stitched image; and performing fusion processing on the stitched image to determine a target image.
According to another embodiment of the present invention, there is provided an image processing apparatus including: the first acquisition module is used for acquiring a first image in the target area through first camera equipment; a second obtaining module, configured to obtain, by a second image capturing device, a second image of a blocked area corresponding to a blocked picture when the first image includes the blocked picture, where a shooting magnification of the first image capturing device and a shooting magnification of the second image capturing device are linked with each other; the first splicing module is used for splicing the first image and the second image according to a first image characteristic point in the first image and a second image characteristic point in the second image to determine a spliced image; and the first fusion module is used for carrying out fusion processing on the spliced images and determining a target image.
In an exemplary embodiment, the second obtaining module includes: a first determining unit configured to determine an occlusion object in the occlusion region corresponding to an occluded picture when the occluded picture is included in the first image; a first setting unit configured to set the second image pickup apparatus on an opposite surface of the blocking object; and a first acquisition unit configured to acquire the second image by capturing an image of an opposite surface of the shielding object by the second image capturing apparatus at a same capturing magnification as that of the first image capturing apparatus.
In an exemplary embodiment, the first splicing module includes: a first conversion unit, configured to convert the first image feature point and the second image feature point into a preset coordinate system; a first calibration unit configured to calibrate a matching point between the first image feature point and the second image feature point; and the first splicing unit is used for splicing the first image and the second image according to the matching points and determining the spliced image.
In an exemplary embodiment, the first calibration unit includes: the first setting unit is used for setting a first preset graph in a blocked picture in the first image; a second determining unit, configured to determine a first pixel coordinate corresponding to the first preset pattern in the first image; a second setting unit configured to set a second preset pattern in the second image; a third determining unit, configured to determine a second pixel coordinate corresponding to the second preset pattern in the second image; and the second calibration unit is used for calibrating the matching points between the first image characteristic points and the second image characteristic points according to the first pixel point coordinates and the second pixel point coordinates.
In an exemplary embodiment, the first splicing unit includes: a first determining subunit, configured to determine a mapping relationship between the matching points; and the first splicing subunit is used for splicing the first image and the second image based on the mapping relation and determining the spliced image.
In an exemplary embodiment, the first fusion module includes: a fourth determining unit, configured to determine an image fusion region in the stitched image; and the first fusion unit is used for synthesizing pixel points in the image fusion area according to a preset weight value and determining the target image.
In an exemplary embodiment, the apparatus further includes: a first determining module, configured to determine, when the first image includes a blocked image, a shooting magnification of the first image and a shooting magnification of a second image after acquiring, by a second imaging device, the second image of a blocked area corresponding to the blocked image; and the first calibration module is used for calibrating the first image and the second image according to the linkage relation between the shooting magnification of the first image and the shooting magnification of the second image.
In an exemplary embodiment, the apparatus further includes: a second determining module, configured to determine object information included in the first image and the second image after acquiring, by a second imaging device, the second image of an occlusion region corresponding to the occluded picture when the first image includes the occluded picture; and a third determining module, configured to determine a display ratio of the first image and the second image according to the object information.
According to a further embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, a first image in a target area is acquired through a first camera device; under the condition that the first image comprises a blocked picture, a second image of a blocking area corresponding to the blocked picture is obtained through second camera equipment, wherein the shooting magnification of the first camera equipment is linked with that of the second camera equipment; splicing the first image and the second image according to the first image characteristic point in the first image and the second image characteristic point in the second image to determine a spliced image; and performing fusion processing on the spliced image to determine a target image. The purpose of splicing the multiple shot images is achieved. Therefore, the problem of image splicing in the related technology can be solved, fragmentation of scenes is reduced, the visualization effect is improved, and the video processing capacity under an intelligent scene is improved.
Drawings
Fig. 1 is a block diagram of a hardware configuration of a mobile terminal of an image processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a video surveillance networking according to an embodiment of the invention;
FIG. 4 is a schematic diagram of image stitching according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a magnification interlock control according to an embodiment of the invention;
FIG. 6 is a schematic diagram of a video surveillance networking system according to an embodiment of the invention;
FIG. 7 is a flow diagram of image stitching according to an embodiment of the present invention;
FIG. 8 is a schematic illustration of image overlay fusion according to an embodiment of the present invention;
fig. 9 is a block diagram of the structure of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking the mobile terminal as an example, fig. 1 is a block diagram of a hardware structure of the mobile terminal of an image processing method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), and a memory 104 for storing data, wherein the mobile terminal may further include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program of an application software and a module, such as a computer program corresponding to the image processing method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In the present embodiment, an image processing method is provided, and fig. 2 is a flowchart of an image processing method according to an embodiment of the present invention, and as shown in fig. 2, the flowchart includes the steps of:
step S202, acquiring a first image in a target area through first camera equipment;
step S204, under the condition that the first image comprises a shielded picture, acquiring a second image of a shielded area corresponding to the shielded picture through second camera equipment, wherein the shooting magnification of the first camera equipment is linked with the shooting magnification of the second camera equipment;
in the present embodiment, the magnifications of the first image pickup apparatus and the second image pickup apparatus are linked with each other by a unified magnification control method. For example, when the magnification of the first image pickup device is changed, the magnification of the second image pickup device is updated synchronously to ensure the same object magnification of the spliced multi-channel video.
Step S206, splicing the first image and the second image according to the first image characteristic point in the first image and the second image characteristic point in the second image, and determining a spliced image;
in this embodiment, the first image Feature point and the second image Feature point are extracted by using a common Scale-invariant Feature Transform (SIFT for short), Speeded Up Robust Features (SURF for short), Harris corner points, and a Feature extraction algorithm (ORB for short) to perform Feature point matching.
And S208, fusing the spliced images to determine a target image.
The execution subject of the above steps may be a terminal, but is not limited thereto.
The present embodiment includes, but is not limited to, application in a scene in which a target area is monitored. As shown in fig. 3, a plurality of zoom cameras are arranged at different positions, so as to realize 360-degree range monitoring and form a video monitoring networking.
Through the steps, a first image in the target area is obtained through the first camera device; under the condition that the first image comprises a blocked picture, a second image of a blocking area corresponding to the blocked picture is obtained through second camera equipment, wherein the shooting magnification of the first camera equipment is linked with that of the second camera equipment; splicing the first image and the second image according to the first image characteristic point in the first image and the second image characteristic point in the second image to determine a spliced image; and performing fusion processing on the spliced image to determine a target image. The purpose of splicing the multiple shot images is achieved. Therefore, the problem of image splicing in the related technology can be solved, fragmentation of scenes is reduced, the visualization effect is improved, and the video processing capacity under an intelligent scene is improved.
In an exemplary embodiment, in a case where the first image includes an occluded picture, acquiring, by the second image capturing apparatus, a second image of an occlusion region corresponding to the occluded picture includes:
s1, when the first image includes the occluded picture, determining an occlusion object in an occlusion area corresponding to the occluded picture;
s2, arranging a second camera on the opposite surface of the shielding object;
and S3, shooting the opposite surface of the shielding object by the second image pickup device according to the same shooting magnification as that of the first image pickup device, and acquiring a second image.
In this embodiment, when, for example, a building or other obstruction appears within a camera's monitoring range, that camera alone cannot acquire complete information about the area. A camera is therefore installed on the far side of the obstruction (i.e., the face opposite the occluding object), so that its monitoring picture can be overlaid onto the panoramic video according to specific requirements.
In an exemplary embodiment, stitching the first image and the second image according to the first image feature point in the first image and the second image feature point in the second image, and determining a stitched image includes:
s1, converting the first image feature point and the second image feature point into a preset coordinate system;
s2, calibrating the matching points between the first image characteristic points and the second image characteristic points;
and S3, splicing the first image and the second image according to the matching points, and determining a spliced image.
In this embodiment, converting the first image feature points and the second image feature points into the preset coordinate system means transforming the first image and the second image into the same coordinate system. The RANSAC algorithm can be used to screen for the matching points with minimum error and thereby determine the projective transformation matrix between the first image and the second image.
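The projective transformation (homography) fitting at the core of this step can be sketched with the Direct Linear Transform. This is only the model-fitting piece; a full RANSAC loop would repeatedly fit on random 4-point subsets and keep the homography with the most inliers.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Direct Linear Transform: fit H (3x3) so that dst ~ H @ src.

    src_pts, dst_pts: arrays of shape (N, 2), N >= 4 correspondences.
    """
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # each correspondence contributes two rows of the DLT system A h = 0
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the null vector of A (smallest singular vector) holds the 9 entries of H
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pt):
    """Map a 2D point through H using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

Once H is known, every pixel of the second image can be mapped into the first image's coordinate system for stitching.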
In one exemplary embodiment, calibrating the matching points between the first image feature points and the second image feature points comprises:
s1, setting a first preset graph in the blocked picture in the first image;
s2, determining the corresponding first pixel point coordinate of the first preset graph in the first image;
s3, setting a second preset graph in a second image;
s4, determining the corresponding second pixel point coordinate of the second preset graph in the second image;
and S5, calibrating the matching point between the first image characteristic point and the second image characteristic point according to the first pixel point coordinate and the second pixel point coordinate.
In this embodiment, as shown in fig. 4, a checkerboard is first placed in the pictures to be stitched, the SIFT algorithm is used to extract the checkerboard's corner features, and the pixel coordinates of the corresponding checkerboard points are obtained. The pictures are then stitched using the mapping relationship between these corresponding checkerboard pixel coordinates.
In one exemplary embodiment, stitching the first image and the second image according to the matching points and determining a stitched image comprises:
s1, determining the mapping relation between the matching points;
and S2, splicing the first image and the second image based on the mapping relation, and determining a spliced image.
In this embodiment, the pixels corresponding to the checkerboard are spliced by using the mapping relationship between the pixels.
In an exemplary embodiment, the fusion processing of the spliced images and the determination of the target image comprises:
s1, determining an image fusion area in the spliced image;
and S2, synthesizing pixel points in the image fusion area according to the preset weight value, and determining the target image.
In this embodiment, a visible seam or brightness difference may remain at the boundary between the two stitched images, making the transition unnatural. A weighted fusion method is therefore adopted: pixel values in the overlapping region of the images are added according to preset weights to synthesize a new image, eliminating the influence of stitching seams on the result.
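A common form of this weighted fusion is a linear ramp across the overlap, sketched below under the assumption of single-channel images; the specific weighting scheme is not fixed by the text.

```python
import numpy as np

def blend_overlap(left, right):
    """Weighted fusion of the overlap region of two stitched images.

    left, right: same-shape 2D float arrays covering the overlap region.
    The left image's weight falls linearly from 1 to 0 across the overlap
    width while the right image's rises from 0 to 1, hiding the seam.
    """
    h, w = left.shape
    alpha = np.linspace(1.0, 0.0, w)  # per-column weight for the left image
    return alpha * left + (1.0 - alpha) * right
```

Color images would apply the same per-column weights to each channel.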
In an exemplary embodiment, in a case that the first image includes the occluded picture, after acquiring, by the second image capturing apparatus, a second image of an occlusion region corresponding to the occluded picture, the method further includes:
s1, determining the shooting magnification of the first image and the shooting magnification of the second image;
and S2, calibrating the first image and the second image according to the linkage relation between the shooting magnification of the first image and the shooting magnification of the second image.
In this embodiment, a unified magnification control method is adopted: when the magnification of one device changes, the magnifications of the other devices are updated synchronously, ensuring that objects in the stitched multi-channel videos keep the same magnification ratio. In-picture magnification combines real (optical) magnification with digital magnification to quantize the magnification range. Three real-magnification steps are set: large, medium, and small. A continuous magnification range can thus be quantized into three viewing angles, and only three groups of data need to be calibrated when calibrating the stitching, reducing the calibration workload. Between the three steps, digital magnification with corresponding interpolation is used; the digital magnification fD is calculated as fD = F / fA, where F denotes the target magnification and fA denotes the real magnification, as shown in fig. 5.
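The split into a calibrated real step plus a digital top-up can be sketched as below. The three step values are illustrative assumptions; only the relation fD = F / fA comes from the text.

```python
def split_magnification(target, real_steps=(2.0, 8.0, 32.0)):
    """Quantize a continuous target magnification F onto calibrated real
    (optical) steps, making up the difference digitally: fD = F / fA.

    real_steps stands in for the small/medium/large calibrated steps.
    Returns (fA, fD): the chosen real step and the digital magnification.
    """
    # choose the largest calibrated real magnification not above the target
    usable = [fa for fa in real_steps if fa <= target]
    fa = max(usable) if usable else min(real_steps)
    fd = target / fa  # digital magnification tops up the real one
    return fa, fd
```

Because only the three real steps need stitching calibration, any in-between target magnification is reached by interpolating digitally on top of the nearest calibrated step.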
In an exemplary embodiment, in a case that the first image includes the occluded picture, after acquiring, by the second image capturing apparatus, a second image of an occlusion region corresponding to the occluded picture, the method further includes:
s1, determining object information included in the first image and the second image;
s2, the display scale of the first image and the second image is determined according to the object information.
In this embodiment, the size of the detection area can be adjusted flexibly according to the monitoring target. For example, when faces are to be detected, the picture can be scaled according to the displayed face size so that the face is always shown at the most suitable size: when the detected face is too small, the camera magnification is increased to enlarge the picture until the detected face size is appropriate; when the face is too large, the picture is reduced until it is appropriate.
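A simple proportional control rule for this face-size-driven zoom is sketched below; the target ratio and tolerance are illustrative assumptions, not values from the patent.

```python
def adjust_magnification(current, face_height_px, frame_height_px,
                         target_ratio=0.2, tolerance=0.05):
    """Nudge the camera magnification so a detected face occupies roughly
    target_ratio of the frame height."""
    ratio = face_height_px / frame_height_px
    if abs(ratio - target_ratio) <= tolerance:
        return current  # face already at a comfortable size
    # scale proportionally: zoom in when the face is too small,
    # zoom out when it is too large
    return current * (target_ratio / ratio)
```

A real controller would also clamp the result to the camera's magnification range and damp the adjustment to avoid oscillation.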
The invention is illustrated below with reference to specific examples:
the present embodiment takes a scene of image stitching in a video surveillance networking as an example for explanation. As shown in fig. 6, the multi-camera directly adopts a magnification linkage mode to realize the zooming function of the monitored video, and utilizes an image splicing algorithm to realize the fusion of video data of multiple pictures and output panoramic video monitoring; when a special scene is met, for example, when a building or other shelters appear in the monitoring range of a certain camera, the monitoring of the area cannot acquire complete information, and then the camera is installed on the back of the shelter, so that the monitoring picture of the camera can be overlaid and displayed in the panoramic video according to different specific requirements.
The flow chart of the multi-camera panoramic stitching is shown in fig. 7, and comprises the following steps:
s701, extracting the feature points of the images to be spliced, extracting feature factors by adopting commonly used algorithms such as SIFT, SURF, Harris corner points, ORB and the like, and matching the feature points;
s702, according to the obtained feature point matching point sets of the two images to be spliced, image registration is performed, namely the two images are converted into the same coordinate, and the RANSAC algorithm is selected to screen the matching point with the minimum error, so that an accurate projection transformation matrix between the two images is obtained.
S703, a gap or a ray reason may be left at the junction of the two mosaic images to make the transition between the two mosaic images unnatural, and a weighted fusion method can be adopted, i.e. pixel values of the overlapping area of the images are added according to a certain weight to synthesize a new image, thereby eliminating the influence of the mosaic gap and the like on the images.
In this embodiment, when an occluded area exists in the picture of a key monitoring region, or multiple faces of the region must be monitored, a camera is networked behind the region; the corresponding positions of that camera's video picture and the stitched picture are then calibrated, and the corresponding ROI (region of interest covering the monitoring focus) is calibrated and extracted for video overlay and fusion. Display uses a virtual-real combination: the corresponding areas of the two pictures are fused in different proportions to achieve the virtual-real effect, and priorities are set according to the requirements of the actual monitoring scene to adjust the degree of blending. For example, if portraits are given high priority in the actual monitoring scene, then when a portrait is detected in the picture the display ratio of the portrait layer is set to 0.7, while in a portrait-free scene the foreground image layer is used for display, as shown in fig. 8.
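The priority-driven blending above can be sketched as follows, assuming single-channel layers; which layer plays the "foreground" role in the portrait-free case is an assumption, and only the 0.7 weight comes from the text's example.

```python
import numpy as np

def overlay_with_priority(portrait_layer, panorama_roi, portrait_detected,
                          portrait_weight=0.7):
    """Virtual-real fusion of the rear-camera (portrait) layer over the
    panorama's ROI.

    When a portrait is detected, the portrait layer is blended in at
    weight 0.7 (per the example); otherwise only the panorama ROI is kept.
    """
    if not portrait_detected:
        return panorama_roi.copy()
    w = portrait_weight
    return w * portrait_layer + (1.0 - w) * panorama_roi
```

Different monitoring targets could be handled by mapping each priority class to its own blend weight.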
In conclusion, this embodiment stitches images captured by multiple cameras, reducing scene fragmentation, improving the visualization effect, and improving video processing capability in intelligent scenes. Moreover, through reasonable camera layout for occluded areas in the panoramic video, information can be overlaid and fused onto occlusion regions appearing in the picture, the displayed picture can be judged and switched according to the scene, and the maximum amount of useful information is retained. With the zoom scheme, the field of view and the size of the detection area can be adjusted freely, and the stitched and occluded parts update their content synchronously.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, an image processing apparatus is further provided. The apparatus is used to implement the foregoing embodiments and preferred implementations; what has already been described is not repeated here. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 9 is a structural block diagram of an image processing apparatus according to an embodiment of the present invention. As shown in fig. 9, the apparatus includes:
a first acquisition module 92, configured to acquire a first image of the target area through a first image capturing apparatus;
a second acquisition module 94, configured to acquire, through a second image capturing apparatus, a second image of the occlusion area corresponding to an occluded picture when the first image contains the occluded picture, where the shooting magnification of the first image capturing apparatus and that of the second image capturing apparatus are linked with each other;
a first stitching module 96, configured to stitch the first image and the second image according to first image feature points in the first image and second image feature points in the second image, and determine a stitched image;
and a first fusion module 98, configured to perform fusion processing on the stitched image to determine a target image.
In an exemplary embodiment, the second acquisition module includes:
a first determining unit, configured to determine the occlusion object of the occlusion area corresponding to the occluded picture when the first image contains the occluded picture;
a first setting unit, configured to arrange the second image capturing apparatus on the side opposite the occlusion object;
and a first acquisition unit, configured to obtain the second image by shooting the opposite side of the occlusion object with the second image capturing apparatus at the same shooting magnification as the first image capturing apparatus.
In an exemplary embodiment, the first stitching module includes:
a first conversion unit, configured to convert the first image feature points and the second image feature points into a preset coordinate system;
a first calibration unit, configured to calibrate matching points between the first image feature points and the second image feature points;
and a first stitching unit, configured to stitch the first image and the second image according to the matching points and determine the stitched image.
In an exemplary embodiment, the first calibration unit includes:
a first setting unit, configured to set a first preset pattern in the occluded picture in the first image;
a second determining unit, configured to determine first pixel coordinates corresponding to the first preset pattern in the first image;
a second setting unit, configured to set a second preset pattern in the second image;
a third determining unit, configured to determine second pixel coordinates corresponding to the second preset pattern in the second image;
and a second calibration unit, configured to calibrate the matching points between the first image feature points and the second image feature points according to the first pixel coordinates and the second pixel coordinates.
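As a toy illustration of this calibration step, the corner coordinates of the two preset patterns can be paired in a fixed corner order to yield matching points. The function name and data layout below are assumptions for illustration, not details from the patent:

```python
def calibrate_matches(first_coords, second_coords):
    """Pair preset-pattern corner coordinates from the first and second
    images, taken in the same corner order, into matching points."""
    if len(first_coords) != len(second_coords):
        raise ValueError("pattern corner counts must agree")
    # Each matching point is a (first-image coord, second-image coord) pair.
    return list(zip(first_coords, second_coords))
```

In practice the corner coordinates would come from detecting the preset patterns in each picture rather than being supplied by hand.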
In an exemplary embodiment, the first stitching unit includes:
a first determining subunit, configured to determine a mapping relation between the matching points;
and a first stitching subunit, configured to stitch the first image and the second image based on the mapping relation and determine the stitched image.
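A mapping relation between matching points is commonly represented as a 3x3 homography applied in homogeneous coordinates. A pure-Python sketch of applying such a mapping follows; the matrix shown is an illustrative pure translation, not a value from the patent:

```python
def apply_homography(h, point):
    """Map a point (x, y) through the 3x3 homography h in homogeneous
    coordinates: (x, y, 1) -> (xh, yh, w), then divide by w."""
    x, y = point
    xh = h[0][0] * x + h[0][1] * y + h[0][2]
    yh = h[1][0] * x + h[1][1] * y + h[1][2]
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return (xh / w, yh / w)

# A pure translation by (100, 50): maps second-image coordinates into
# the first image's preset coordinate system.
H = [[1.0, 0.0, 100.0],
     [0.0, 1.0, 50.0],
     [0.0, 0.0, 1.0]]
```

In a real pipeline the matrix would be estimated from the calibrated matching points (e.g., by a least-squares or RANSAC fit) and then used to warp the second image before stitching.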
In an exemplary embodiment, the first fusion module includes:
a fourth determining unit, configured to determine an image fusion area in the stitched image;
and a first fusion unit, configured to synthesize pixels in the image fusion area according to preset weights and determine the target image.
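A common form of this weighted pixel synthesis is linear feathering across the fusion area, where the preset weight of one image falls from 1 to 0 column by column. A grayscale sketch under that assumption (the ramp-shaped weight and the names are illustrative, not specified by the patent):

```python
def feather_blend_row(row_a, row_b):
    """Blend two overlapping rows of grayscale pixels: the weight of
    row_a ramps linearly from 1 down to 0 across the fusion area,
    while row_b's weight rises correspondingly."""
    n = len(row_a)
    out = []
    for i, (a, b) in enumerate(zip(row_a, row_b)):
        w = 1.0 - i / (n - 1) if n > 1 else 0.5  # preset per-column weight
        out.append(round(w * a + (1.0 - w) * b))
    return out
```

This makes the seam between the stitched parts fade gradually instead of showing a hard edge.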
In an exemplary embodiment, the apparatus further includes:
a first determining module, configured to, when the first image contains an occluded picture and after the second image of the occlusion area corresponding to the occluded picture has been acquired through the second image capturing apparatus, determine the shooting magnification of the first image and the shooting magnification of the second image;
and a first calibration module, configured to calibrate the first image and the second image according to the linkage relation between the shooting magnification of the first image and that of the second image.
In an exemplary embodiment, the apparatus further includes:
a second determining module, configured to, when the first image contains an occluded picture and after the second image of the occlusion area corresponding to the occluded picture has been acquired through the second image capturing apparatus, determine the object information contained in the first image and the second image;
and a third determining module, configured to determine the display ratio of the first image and the second image according to the object information.
It should be noted that the above modules may be implemented by software or by hardware; the latter may be implemented in, but is not limited to, the following forms: the modules are all located in the same processor, or the modules are located, in any combination, in different processors.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
In the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for executing the above steps.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing a computer program.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
In an exemplary embodiment, the processor may be configured to execute the above steps by a computer program.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary embodiments, and details of this embodiment are not repeated herein.
It will be apparent to those skilled in the art that the modules or steps of the invention described above may be implemented with a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices. Optionally, they may be implemented with program code executable by computing devices, so that they may be stored in a storage device and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that described here. Alternatively, they may be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. An image processing method, comprising:
acquiring a first image of a target area through a first image capturing apparatus;
when the first image contains an occluded picture, acquiring, through a second image capturing apparatus, a second image of the occlusion area corresponding to the occluded picture, wherein the shooting magnification of the first image capturing apparatus is linked with that of the second image capturing apparatus;
stitching the first image and the second image according to first image feature points in the first image and second image feature points in the second image to determine a stitched image;
and performing fusion processing on the stitched image to determine a target image.
2. The method according to claim 1, wherein, when the first image contains an occluded picture, acquiring, through a second image capturing apparatus, a second image of the occlusion area corresponding to the occluded picture comprises:
when the first image contains the occluded picture, determining the occlusion object of the occlusion area corresponding to the occluded picture;
arranging the second image capturing apparatus on the side opposite the occlusion object;
and shooting the opposite side of the occlusion object with the second image capturing apparatus at the same shooting magnification as the first image capturing apparatus to obtain the second image.
3. The method according to claim 1, wherein stitching the first image and the second image according to first image feature points in the first image and second image feature points in the second image to determine a stitched image comprises:
converting the first image feature points and the second image feature points into a preset coordinate system;
calibrating matching points between the first image feature points and the second image feature points;
and stitching the first image and the second image according to the matching points to determine the stitched image.
4. The method according to claim 3, wherein calibrating matching points between the first image feature points and the second image feature points comprises:
setting a first preset pattern in the occluded picture in the first image;
determining first pixel coordinates corresponding to the first preset pattern in the first image;
setting a second preset pattern in the second image;
determining second pixel coordinates corresponding to the second preset pattern in the second image;
and calibrating the matching points between the first image feature points and the second image feature points according to the first pixel coordinates and the second pixel coordinates.
5. The method according to claim 4, wherein stitching the first image and the second image according to the matching points and determining the stitched image comprises:
determining a mapping relation between the matching points;
and stitching the first image and the second image based on the mapping relation to determine the stitched image.
6. The method according to claim 1, wherein performing fusion processing on the stitched image to determine a target image comprises:
determining an image fusion area in the stitched image;
and synthesizing pixels in the image fusion area according to preset weights to determine the target image.
7. The method according to claim 1, wherein, when the first image contains an occluded picture, after acquiring the second image of the occlusion area corresponding to the occluded picture through the second image capturing apparatus, the method further comprises:
determining the shooting magnification of the first image and the shooting magnification of the second image;
and calibrating the first image and the second image according to the linkage relation between the shooting magnification of the first image and that of the second image.
8. The method according to claim 1, wherein, when the first image contains an occluded picture, after acquiring the second image of the occlusion area corresponding to the occluded picture through the second image capturing apparatus, the method further comprises:
determining object information contained in the first image and the second image;
and determining the display ratio of the first image and the second image according to the object information.
9. An image processing apparatus characterized by comprising:
a first acquisition module, configured to acquire a first image of a target area through a first image capturing apparatus;
a second acquisition module, configured to acquire, through a second image capturing apparatus, a second image of the occlusion area corresponding to an occluded picture when the first image contains the occluded picture, wherein the shooting magnification of the first image capturing apparatus is linked with that of the second image capturing apparatus;
a first stitching module, configured to stitch the first image and the second image according to first image feature points in the first image and second image feature points in the second image to determine a stitched image;
and a first fusion module, configured to perform fusion processing on the stitched image to determine a target image.
10. A computer-readable storage medium, in which a computer program is stored, which computer program, when being executed by a processor, carries out the method of any one of claims 1 to 8.
11. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 8.
CN202111408896.3A 2021-11-24 2021-11-24 Image processing method and device, storage medium and electronic device Pending CN114119370A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111408896.3A CN114119370A (en) 2021-11-24 2021-11-24 Image processing method and device, storage medium and electronic device


Publications (1)

Publication Number Publication Date
CN114119370A true CN114119370A (en) 2022-03-01

Family

ID=80372674


Country Status (1)

Country Link
CN (1) CN114119370A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115272085A (en) * 2022-09-28 2022-11-01 北京闪马智建科技有限公司 Panoramic image determination method and device, storage medium and electronic device
CN115272085B (en) * 2022-09-28 2023-09-22 北京闪马智建科技有限公司 Panoramic image determining method and device, storage medium and electronic device

Similar Documents

Publication Publication Date Title
CN109474780B (en) Method and device for image processing
CN108833785B (en) Fusion method and device of multi-view images, computer equipment and storage medium
US9313400B2 (en) Linking-up photographing system and control method for linked-up cameras thereof
WO2017016050A1 (en) Image preview method, apparatus and terminal
CN106920221B (en) Take into account the exposure fusion method that Luminance Distribution and details are presented
CN109040596B (en) Method for adjusting camera, mobile terminal and storage medium
CN111770273B (en) Image shooting method and device, electronic equipment and readable storage medium
Popovic et al. Multi-camera platform for panoramic real-time HDR video construction and rendering
CN111935398B (en) Image processing method and device, electronic equipment and computer readable medium
CN106709894B (en) Image real-time splicing method and system
CN111866523B (en) Panoramic video synthesis method and device, electronic equipment and computer storage medium
CN114040169A (en) Information processing apparatus, information processing method, and storage medium
CN113496474A (en) Image processing method, device, all-round viewing system, automobile and storage medium
CN113159229B (en) Image fusion method, electronic equipment and related products
CN114119370A (en) Image processing method and device, storage medium and electronic device
CN113079369B (en) Method and device for determining image pickup equipment, storage medium and electronic device
CN103179333A (en) Method and device for surrounding browsing of images of 360-degree panoramic camera
CN112437232A (en) Shooting method, shooting device, electronic equipment and readable storage medium
Brzeszcz et al. Real‐time construction and visualisation of drift‐free video mosaics from unconstrained camera motion
CN112037127A (en) Privacy shielding method and device for video monitoring, storage medium and electronic device
WO2020259444A1 (en) Image processing method and related device
CN116055891A (en) Image processing method and device
CN110602410A (en) Image processing method and device, aerial camera and storage medium
KR20190064540A (en) Apparatus and method for generating panorama image
CN113965687A (en) Shooting method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination