WO2022141512A1 - Image stitching method and apparatus, and computer-readable medium - Google Patents

Image stitching method and apparatus, and computer-readable medium Download PDF

Info

Publication number
WO2022141512A1
WO2022141512A1 · PCT/CN2020/142401 · CN2020142401W
Authority
WO
WIPO (PCT)
Prior art keywords
picture
camera
exposure compensation
parameter
projection transformation
Prior art date
Application number
PCT/CN2020/142401
Other languages
English (en)
French (fr)
Inventor
WANG, Bowen (王博文)
Original Assignee
Siemens Aktiengesellschaft (西门子股份公司)
Siemens Ltd., China (西门子(中国)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Aktiengesellschaft and Siemens Ltd., China
Priority to EP20967850.7A priority Critical patent/EP4273790A1/en
Priority to PCT/CN2020/142401 priority patent/WO2022141512A1/zh
Priority to CN202080107492.8A priority patent/CN116490894A/zh
Priority to US18/259,342 priority patent/US20240064265A1/en
Publication of WO2022141512A1 publication Critical patent/WO2022141512A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/71 Circuitry for evaluating the brightness variation
    • H04N23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248 Analysis of motion using feature-based methods involving reference images or patches
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Image registration using feature-based methods
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance
    • G06T2207/30244 Camera pose

Definitions

  • Embodiments of the present invention relate to the technical field of image processing, and in particular to an image stitching method, apparatus, and computer-readable medium.
  • A closed-circuit television (CCTV) system is a video data transmission system in which video data is transmitted within a fixed, closed loop: cameras, monitors, and recording equipment are directly connected.
  • A CCTV system is essentially a security camera system, which is widely used in settings such as retail stores, banks, and government organizations, and even in home environments.
  • FIG. 3 shows the flow of a current image stitching method
  • FIG. 4 shows the results of image processing at each stage.
  • The process mainly includes two stages of processing: an image registration stage 10 and an image composition stage 20.
  • The image registration stage 10 mainly includes step S101 feature point calculation, step S102 feature point matching, and step S103 homography estimation; the obtained homography matrix is used as the projection transformation parameter in the image composition stage.
  • The image composition stage 20 mainly includes step S201 exposure estimation and step S202 image fusion.
  • The projection transformation parameters calculated in the image registration stage 10 are used to perform projection transformation on the pictures to be stitched.
  • The flow shown in FIG. 3 requires a large amount of computation; in particular, the image registration stage 10 has the highest computational complexity and the longest computation time, which cannot meet the needs of real-time video processing.
  • the embodiments of the present invention provide an image stitching method, device, and computer-readable medium, which improve the current image stitching process, and can greatly shorten the calculation time to meet the needs of real-time video processing.
  • In a first aspect, an image stitching method is provided, which can be performed by an edge processing device connected to both a first camera and a second camera. The method may include: acquiring a first picture of the current frame captured by the first camera and a second picture of the current frame captured by the second camera, wherein the areas captured in the first picture and the second picture overlap; determining whether a first condition is satisfied, the first condition including that the relative positional relationship between the first camera and the second camera remained unchanged when the current frame and the previous frame were captured; and, if the first condition is satisfied, acquiring the first projection transformation parameter used when projectively transforming the previous-frame picture captured by the first camera and the second projection transformation parameter used when projectively transforming the previous-frame picture captured by the second camera.
  • The first picture and the second picture are then fused, wherein the first picture is projectively transformed using the first projection transformation parameter and the second picture is projectively transformed using the second projection transformation parameter.
  • an image stitching apparatus including modules for performing the steps in the method provided in the first aspect.
  • A third aspect provides an image stitching apparatus, comprising: at least one memory configured to store computer-readable code; and at least one processor configured to invoke the computer-readable code to perform the steps of the method provided in the first aspect.
  • A fourth aspect provides a computer-readable medium storing computer-readable instructions which, when executed by a processor, cause the processor to perform the steps of the method provided in the first aspect.
  • the projection transformation parameters of the previous frame can be reused, which avoids the delay caused by recalculation, and greatly reduces the calculation complexity.
  • the pictures taken by different cameras with overlapping areas can be stitched into a picture with a wider field of view, and the multi-camera tracking problem can be converted into a relatively mature single-camera tracking problem, which reduces the technical difficulty.
  • the pictures taken by the cameras in the same group can be spliced together in real time to obtain a single spliced picture or video stream, which can reduce the number of input channels and reduce the workload of manual monitoring.
  • Whether the first condition is satisfied may be determined as follows: if the positions of the first camera and the second camera both remain unchanged when the current frame and the previous frame are captured, it is determined that the relative positional relationship between the first camera and the second camera remains unchanged. This provides a simple way to determine that the positional relationship between the cameras is unchanged.
  • Optionally, the number of feature points whose gradient direction changes in the first picture relative to the third picture is determined as a first proportion of all feature points; if the first proportion is less than a preset first proportion threshold, it is determined that the position of the first camera remained unchanged when the current frame and the previous frame were captured. Likewise, the number of feature points whose gradient direction changes in the second picture relative to the fourth picture is determined as a second proportion of all feature points; if the second proportion is less than a preset second proportion threshold, it is determined that the position of the second camera remained unchanged when the current frame and the previous frame were captured.
  • Alternatively, the number of feature points that moved in the first picture relative to the third picture is determined as a third proportion of all feature points; if the third proportion is less than a preset third proportion threshold, it is determined that the position of the first camera remained unchanged when the current frame and the previous frame were captured. Likewise, the number of feature points that moved in the second picture relative to the fourth picture is determined as a fourth proportion of all feature points; if the fourth proportion is less than a preset fourth proportion threshold, it is determined that the position of the second camera remained unchanged when the current frame and the previous frame were captured.
  • Optionally, it is further determined whether a second condition is satisfied. The second condition includes: the brightness change of the first picture relative to the third picture is less than a preset first brightness change threshold, and the brightness change of the second picture relative to the fourth picture is less than a preset second brightness change threshold. If the second condition is satisfied, a first exposure compensation parameter used when performing exposure compensation on the third picture and a second exposure compensation parameter used when performing exposure compensation on the fourth picture are acquired; when fusing the first picture and the second picture, exposure compensation is performed on the first picture using the first exposure compensation parameter and on the second picture using the second exposure compensation parameter.
  • the exposure compensation parameters of the previous frame can be reused, which further reduces the duration of image stitching.
  • If the first condition is not satisfied, a third projection transformation parameter to be used for projectively transforming the first picture and a fourth projection transformation parameter to be used for projectively transforming the second picture are calculated. When fusing the first picture and the second picture, the first picture is projectively transformed using the third projection transformation parameter and the second picture is projectively transformed using the fourth projection transformation parameter.
  • If the second condition is not satisfied, a third exposure compensation parameter to be used for exposure compensation of the first picture and a fourth exposure compensation parameter to be used for exposure compensation of the second picture are calculated. When fusing the first picture and the second picture, exposure compensation may be performed on the first picture using the third exposure compensation parameter and on the second picture using the fourth exposure compensation parameter. The steps of calculating the third and fourth projection transformation parameters and the steps of calculating the third and fourth exposure compensation parameters are performed in parallel.
  • In current methods, exposure estimation and the calculation of projection transformation parameters are serial; here the two are executed in parallel, so that even when the parameters of the previous frame cannot be reused, the execution time of the entire image stitching method can be shortened.
  • Figure 1 is a schematic diagram of a monitoring wall in a CCTV system.
  • FIG. 2 is a schematic diagram of stitching together pictures taken by different cameras.
  • FIG. 3 shows the flow of a current image stitching method.
  • FIG. 4 shows a picture obtained by stitching with the image stitching method shown in FIG. 3.
  • FIG. 5 is a flowchart of an image stitching method provided by an embodiment of the present invention.
  • FIG. 6 shows the connection relationship between the camera and the edge processing device when the image stitching method provided by the embodiment of the present invention is adopted.
  • FIG. 7 is a schematic structural diagram of an image stitching apparatus provided by an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of an embodiment of the present invention applied to a CCTV system.
  • FIG. 9 is a schematic diagram of an embodiment of the present invention applied to a traffic monitoring system.
  • FIGS. 10A to 10C illustrate deployment manners of the cameras in embodiments of the present invention.
  • the term “including” and variations thereof represent open-ended terms meaning “including but not limited to”.
  • the term “based on” means “based at least in part on”.
  • the terms “one embodiment” and “an embodiment” mean “at least one embodiment.”
  • the term “another embodiment” means “at least one other embodiment.”
  • the terms “first”, “second”, etc. may refer to different or the same objects. Other definitions, whether explicit or implicit, may be included below. The definition of a term is consistent throughout the specification unless the context clearly dictates otherwise.
  • the image registration stage takes the longest time due to the high computational complexity.
  • the following table is a comparison of the computational complexity of each stage in the image stitching process:
  • If the computational complexity of the image registration stage can be reduced, the time required for that stage is reduced, thereby shortening the duration of the entire image stitching process.
  • Embodiments of the present invention consider a scenario in which the relative positional relationship between the cameras shooting the pictures remains unchanged between the previous frame and the current frame. In that case, between the two frames, the angle at which the same camera views the photographed scene has not changed, so the projection transformation parameters can be considered unchanged.
  • Therefore, the projection transformation parameters of the previous frame can be reused to process the picture of the current frame without recalculation, which greatly reduces computational complexity and shortens processing delay.
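The parameter-reuse logic described above can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; the class name, the threshold value, and the recompute callback are assumptions:

```python
import numpy as np

class ProjectionParameterCache:
    """Sketch of the reuse logic: keep the previous frame's projection
    transformation parameters (homography) and return them unchanged as
    long as the cameras' relative positions have not changed."""

    def __init__(self, moved_ratio_threshold=0.1):
        self.moved_ratio_threshold = moved_ratio_threshold  # preset proportion threshold
        self.cached = None  # parameters computed for the previous frame

    def cameras_static(self, moved_points, total_points):
        # First condition: the fraction of feature points that moved between
        # the previous frame and the current frame is below the threshold.
        return total_points > 0 and moved_points / total_points < self.moved_ratio_threshold

    def get(self, moved_points, total_points, recompute):
        # Reuse the cached parameters when possible; otherwise recompute
        # them for the current frame and refresh the cache.
        if self.cached is not None and self.cameras_static(moved_points, total_points):
            return self.cached
        self.cached = recompute()
        return self.cached
```

In steady state the expensive `recompute` callback (the full registration stage) runs only when the cameras actually move, which is the source of the speed-up claimed later in the text.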
  • The embodiments of the present invention are described in detail below. For clarity, the stitching of two pictures is taken as an example; the principle for stitching multiple pictures is the same.
  • the image stitching method 50 may include the following steps:
  • S500 Acquire two pictures of the current frame to be spliced: a first picture and a second picture.
  • the first picture is shot by the first camera
  • the second picture is shot by the second camera.
  • If the positions of the two cameras remain unchanged during continuous shooting and their fields of view overlap, it can be considered that the areas captured in the two pictures obtained when the two cameras shoot the same frame overlap.
  • S501 Determine whether the first condition is satisfied.
  • The first condition includes: when taking pictures of the current frame and the previous frame, the relative positional relationship between the first camera and the second camera remains unchanged. If the first condition is satisfied, execute step S502; otherwise, execute step S506.
  • When judging whether the first condition is satisfied in step S501, it can be considered that, if the positions of the first camera and the second camera remain unchanged when the current frame and the previous frame are captured, the relative positional relationship between the first camera and the second camera is determined to remain unchanged.
  • various solutions including the following optional solution 1 and optional solution 2 can be adopted.
  • In optional scheme 1, the number of feature points whose gradient direction changes in the first picture relative to the third picture is determined as a first proportion of all feature points. If the first proportion is less than the preset first proportion threshold, it is determined that the position of the first camera remained unchanged between the current frame and the previous frame. Similarly, the number of feature points whose gradient direction changes in the second picture relative to the fourth picture is determined as a second proportion of all feature points. If the second proportion is less than the preset second proportion threshold, it is determined that the position of the second camera remained unchanged between the current frame and the previous frame. Optionally, the first proportion threshold and the second proportion threshold are equal.
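Optional scheme 1 (comparing gradient directions at feature points) might be sketched like this; the grayscale frame representation, feature-point list, and both threshold values are illustrative assumptions, not values from the patent:

```python
import numpy as np

def position_unchanged(prev_gray, cur_gray, feature_points,
                       angle_tolerance=0.2, ratio_threshold=0.1):
    """Decide that a camera has not moved: the fraction of feature points
    whose gradient direction changed between the previous frame and the
    current frame must stay below a preset proportion threshold."""
    gy_p, gx_p = np.gradient(prev_gray.astype(float))
    gy_c, gx_c = np.gradient(cur_gray.astype(float))
    changed = 0
    for y, x in feature_points:
        a_prev = np.arctan2(gy_p[y, x], gx_p[y, x])
        a_cur = np.arctan2(gy_c[y, x], gx_c[y, x])
        # Wrap the angular difference into (-pi, pi] before comparing.
        diff = np.angle(np.exp(1j * (a_cur - a_prev)))
        if abs(diff) > angle_tolerance:
            changed += 1
    return changed / len(feature_points) < ratio_threshold
```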
  • S502: Acquire the first projection transformation parameter used when projectively transforming the previous-frame picture captured by the first camera (the "third picture") and the second projection transformation parameter used when projectively transforming the previous-frame picture captured by the second camera (the "fourth picture").
  • the projection transformation parameter can be a homography matrix.
  • S506: Calculate a third projection transformation parameter to be used for projectively transforming the first picture and a fourth projection transformation parameter to be used for projectively transforming the second picture. Since the projection transformation parameters of the previous frame cannot be reused, the parameters for the current frame must be recalculated. Optionally, the overlapping area between the two pictures and the spatial mapping relationship between them can be found first, and then the relationship between the group of pictures and the stitched picture can be obtained. The feature points of each picture in the group are calculated separately; usable methods include, but are not limited to, Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), and Oriented FAST and Rotated BRIEF (ORB).
  • These feature points are independent of the size and angle of the picture and can be used as reference points in the picture. The feature points of the pictures are matched to find the overlapping area between them and a set of corresponding points, and these corresponding points are used to calculate the homography matrix between the two pictures.
  • the calculated homography matrix is used as a projection transformation parameter to perform projection transformation.
  • A homography is a projective transformation used in projective geometry; it describes the change in the perceived position of an observed object when the observer's viewing angle changes.
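For illustration, the homography estimation step can be sketched with a direct linear transform (DLT) over matched point pairs. This minimal numpy sketch omits the RANSAC-style outlier rejection a production pipeline (e.g. OpenCV's `findHomography`) would add:

```python
import numpy as np

def homography_dlt(src_pts, dst_pts):
    """Estimate the 3x3 homography H (with dst ~ H @ src in homogeneous
    coordinates) from at least four matched feature-point pairs: stack two
    linear constraints per pair and take the null vector via SVD."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale
```

For example, four corners mapped by a pure translation of (2, 3) recover the translation homography exactly.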
  • When performing image fusion, exposure compensation optionally also needs to be performed. The exposure compensation parameters can optionally be obtained through the following steps.
  • Next, it is determined whether the second condition is satisfied. The second condition is used to determine whether the brightness change between the previous frame and the current frame is sufficiently small; if so, the current frame can reuse the exposure compensation parameters of the previous frame.
  • the second condition may include: the brightness change of the first picture relative to the third picture is less than a preset first brightness change threshold, and the brightness change of the second picture relative to the fourth picture is less than a preset second brightness change threshold.
  • the first brightness change threshold is equal to the second brightness change threshold.
  • Specifically, the brightness of each frame in a video stream is compared at the positions of the previously recorded feature points. If the degree of brightness change exceeds the threshold, it indicates that the exposure of the camera shooting the video stream may have changed, and the exposure compensation parameters need to be re-estimated.
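The brightness comparison described above might look like the following sketch; the mean-absolute-difference metric over feature-point positions and the threshold value are assumptions for illustration:

```python
import numpy as np

def exposure_unchanged(prev_frame, cur_frame, feature_points,
                       brightness_threshold=10.0):
    """Second condition: compare brightness at the previously recorded
    feature-point positions; when the mean absolute change stays below the
    threshold, the previous frame's exposure compensation parameters can be
    reused, otherwise they must be re-estimated."""
    ys, xs = zip(*feature_points)
    prev_vals = prev_frame[list(ys), list(xs)].astype(float)
    cur_vals = cur_frame[list(ys), list(xs)].astype(float)
    return float(np.mean(np.abs(cur_vals - prev_vals))) < brightness_threshold
```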
  • If the second condition is satisfied, step S505 is performed; otherwise, step S507 is performed.
  • S505 Acquire a first exposure compensation parameter used when performing exposure compensation on the third picture of the previous frame and a second exposure compensation parameter used when performing exposure compensation on the fourth picture of the previous frame.
  • S507 Calculate a third exposure compensation parameter that needs to be used for exposure compensation for the first picture and a fourth exposure compensation parameter that needs to be used for exposure compensation to the second picture. Since the exposure compensation parameters of the previous frame cannot be used, only the exposure compensation parameters of the current frame can be recalculated.
  • the first image and the second image are fused through the following steps.
  • The projection transformation parameters obtained in the preceding steps are used to projectively transform the two pictures, and the exposure compensation parameters obtained in the preceding steps are used to perform exposure compensation on them. Then, the first picture and the second picture, after projective transformation and exposure compensation, are fused: the two pictures are stitched into a larger, complete picture.
  • A method such as multi-band blending can be used to perform the image fusion; exposure compensation is an optional processing step.
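As a simplified stand-in for the multi-band blending mentioned above, fusing two pictures already warped onto a common canvas can be illustrated with a weight-normalized average, which gives a smooth transition in the overlap region (the weight maps would typically fall off toward each picture's border; here they are arbitrary inputs):

```python
import numpy as np

def weighted_blend(img_a, img_b, weight_a, weight_b):
    """Fuse two pictures on a common canvas: each output pixel is the
    weight-normalized average of the inputs, so pixels covered by only one
    camera keep that camera's value and overlapping pixels are averaged."""
    total = weight_a + weight_b
    total[total == 0] = 1.0  # avoid division by zero outside both pictures
    return (img_a * weight_a + img_b * weight_b) / total
```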
  • the steps of acquiring the projection transformation parameters and acquiring the exposure compensation parameters are performed in parallel.
  • The current frame picture can be directly processed using the projection transformation parameters of the previous frame. The computational complexity of the whole process is thus greatly reduced, and the processing time is shortened.
  • The researchers of the present invention found that the calculation of the projection transformation parameters and the calculation of the exposure compensation parameters are relatively independent, so the two processes are executed in parallel. Even if the aforementioned first condition is not satisfied, the step of calculating the projection transformation parameters and the step of obtaining the exposure compensation parameters can still be performed in parallel, which shortens the overall processing time of the image stitching process compared with the serial processing of previous methods.
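The parallel execution of the two independent parameter computations can be sketched with a thread pool; the function names and argument shapes are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def compute_stitch_parameters(picture_pair, compute_projection, compute_exposure):
    """Run the registration step (projection transformation parameters) and
    the exposure estimation step concurrently: the two computations are
    independent, so the total latency is roughly the longer of the two
    instead of their sum."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        projection_future = pool.submit(compute_projection, picture_pair)
        exposure_future = pool.submit(compute_exposure, picture_pair)
        return projection_future.result(), exposure_future.result()
```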
  • In summary, the image stitching method provided by the embodiment of the present invention includes three processing steps: image registration, exposure estimation, and image fusion. Usually, all three steps need to be performed only for the first frame; subsequently, only image fusion needs to be performed, unless the positional relationship between the cameras or the exposure compensation parameters changes, in which case the corresponding parameters need to be recalculated.
  • The image stitching method provided by the embodiment of the present invention can achieve a computing speed about 60 times higher than that of the traditional image stitching method: running the provided method on an edge processing device takes only 30 ms or less, whereas image stitching with the traditional method on the same edge processing device takes about 2000 ms.
  • In an optional implementation shown in FIG. 6, each camera 61, 62, ... is connected to an edge processing device, which performs the image stitching method 50 to stitch the pictures from the cameras.
  • the method provided by the embodiment of the present invention can be applied to various scenarios where image stitching needs to be performed. The following are respectively explained:
  • In a Multi-Target Multi-Camera Tracking (MTMC) system, the technical difficulty is how to find the same target in the views of different cameras. Due to different shooting angles and camera parameters, the features of the same target captured by different cameras are not the same, whereas the technology for tracking multiple targets in pictures captured by a single camera is relatively mature.
  • The image stitching method 50 provided by the embodiment of the present invention performs real-time stitching of pictures taken by multiple cameras at the same location, expanding the cameras' field of view and thus overcoming the technical difficulty of cross-camera target tracking. As shown in FIG. 2, pictures with overlapping areas taken by different cameras can be stitched into a picture with a wider field of view; the frame-by-frame pictures collected in real time then form a video with a wider field of view. By arranging the position of each camera reasonably, the entire monitoring area is covered, and the multi-camera tracking problem is transformed into the single-camera tracking problem, for which the technology is relatively mature.
  • pictures taken by cameras in adjacent areas can be stitched together.
  • the surveillance cameras can be grouped according to regions, and the surveillance regions of the cameras in the same group are adjacent and overlap each other.
  • Pictures taken by cameras in the same group are stitched together in real time to obtain a more natural stitched picture or video data stream.
  • In FIG. 8, several cameras 61, 62, ... in a CCTV system are connected to the edge processing device. The stitched video stream is sent (optionally through a gateway) to the router 80 and the Digital Video Recorder (DVR) 90, and the DVR 90 finally presents the video stream to the user or displays it on the monitoring wall.
  • In this way, the number of input channels can be reduced; the spatial positional relationship between cameras is integrated, so that the areas captured by multiple cameras are combined into one complete area, which is suitable for target recognition and tracking applications; in addition, the workload of manual monitoring is also greatly reduced.
  • With the method provided by the embodiment of the present invention, cameras are assembled into groups through picture stitching and picture fusion, which is compatible with cameras of different types and different fields of view, achieving monitoring effects that traditional camera equipment cannot. As shown in FIG. 10A, FIG. 10B and FIG. 10C, cameras can be deployed in various ways to meet the requirements of different application scenarios.
  • Cameras facing the same direction can be deployed in the horizontal direction, the vertical direction, or both, which expands the width and height of the monitoring picture and realizes monitoring over a larger viewing angle.
  • The monitoring area can be covered from multiple angles in the manner shown in FIG. 10B to reduce the influence of blind spots.
  • The same function as a panoramic camera or a wide-angle camera can be implemented in the manner shown in FIG. 10C.
  • A wider field of view can be achieved by incorporating existing cameras according to actual needs.
  • the image stitching method 50 provided by the embodiment of the present invention can flexibly adjust the angle and direction of the cameras according to actual requirements, as long as the areas captured by the cameras in the same group overlap.
  • Pictures from video streams with different angles are stitched together, and the shape of the stitched picture is determined by the fields of view of the combined cameras, so that surveillance pictures with special shapes can be generated.
  • The functions of a wide-angle or even panoramic camera can thus be realized using existing cameras; camera deployment is flexible, and the viewing angle desired by the user can be obtained instead of a fixed angle.
  • the angle of the camera is fixed, and different application scenarios require different numbers of cameras.
  • With the image stitching method 50 provided by the embodiment of the present invention, the integration of video streams can be realized and the flexibility of applications improved.
  • the existing "one camera, one screen” approach can be transformed into a "multiple cameras, one screen” approach.
  • the embodiment of the present invention further provides an image stitching apparatus 70, which can execute the aforementioned image stitching method 50.
  • the image stitching apparatus 70 may be implemented as a network of computer processors to perform processing on the image stitching apparatus 70 side in the image stitching method 50 in the embodiment of the present invention.
  • the image stitching device 70 can also be a single computer, a single-chip microcomputer or a processing chip as shown in FIG. 3 , and can be deployed in the aforementioned edge processing device 70 .
  • Image stitching device 70 may include at least one memory 7001, which includes a computer-readable medium, such as random access memory (RAM).
  • Image stitching apparatus 70 also includes at least one processor 7002 coupled with at least one memory 7001 .
  • Computer-executable instructions are stored in at least one memory 7001 and, when executed by at least one processor 7002, can cause at least one processor 7002 to perform the steps described herein.
  • the at least one memory 7001 shown in FIG. 3 may contain an image stitching program 701, so that the at least one processor 7002 executes the processing on the image stitching apparatus 70 side in the image stitching method 50 described in the embodiment of the present invention.
  • Image stitching procedure 701 may include:
  • a picture acquisition module 7011 configured to acquire a first picture of the current frame taken by the first camera and a second picture of the current frame taken by the second camera, wherein the areas captured in the first picture and the second picture overlap;
  • a first judgment module 7012 configured to judge whether a first condition is satisfied, the first condition including: when the pictures of the current frame and the previous frame are taken, the relative positional relationship between the first camera and the second camera remains unchanged;
  • a first parameter acquisition module 7013 configured to acquire, if the first condition is satisfied, the first projection transformation parameter used when performing projection transformation on the third picture of the previous frame captured by the first camera, and the second projection transformation parameter used when performing projection transformation on the fourth picture of the previous frame captured by the second camera;
  • a picture processing module 7014 configured to fuse the first picture and the second picture, wherein the first picture is projectively transformed using the first projection transformation parameter and the second picture is projectively transformed using the second projection transformation parameter.
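As a rough illustration of what the picture processing module does, the sketch below applies a 3×3 homography (the "projection transformation parameter") to pixel coordinates and feather-blends two overlapping strips with NumPy. All function names here are hypothetical; a real implementation would warp whole images (e.g. with OpenCV's `warpPerspective`) rather than individual points.

```python
import numpy as np

def project_point(H, pt):
    """Apply a 3x3 homography H to a 2D point (projective transformation)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

def feather_blend(a, b, mask):
    """Blend two equally sized overlap regions: mask in [0,1] weights image a."""
    return mask * a + (1.0 - mask) * b

# A homography that is a pure translation by (5, -2)
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
print(project_point(H, (10.0, 10.0)))   # -> [15.  8.]

# Linear feathering across a 1x4 overlap strip
a = np.full((1, 4), 100.0)
b = np.full((1, 4), 200.0)
mask = np.linspace(1.0, 0.0, 4).reshape(1, 4)
print(feather_blend(a, b, mask))        # ramps from 100 to 200
```

The same two operations, applied to full image arrays, are what steps S502/S503 perform on each frame.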
  • the first judgment module 7012 is specifically configured to: if the positions of the first camera and the second camera both remain unchanged when the pictures of the current frame and the previous frame are taken, determine that the relative positional relationship between the first camera and the second camera remains unchanged.
  • the first judgment module 7012 is specifically configured to: determine a first proportion of the number of feature points whose gradient direction changed in the first picture relative to the third picture to the number of all feature points; if the first proportion is less than a preset first proportion threshold, determine that the position of the first camera remained unchanged when the current frame and the previous frame were captured; determine a second proportion of the number of feature points whose gradient direction changed in the second picture relative to the fourth picture to the number of all feature points; if the second proportion is less than a preset second proportion threshold, determine that the position of the second camera remained unchanged when the current frame and the previous frame were captured.
  • the first judgment module 7012 is specifically configured to: determine a third proportion of the number of feature points that moved in the first picture relative to the third picture to the number of all feature points; if the third proportion is less than a preset third proportion threshold, determine that the position of the first camera remained unchanged when the current frame and the previous frame were captured; determine a fourth proportion of the number of feature points that moved in the second picture relative to the fourth picture to the number of all feature points; if the fourth proportion is less than a preset fourth proportion threshold, determine that the position of the second camera remained unchanged when the current frame and the previous frame were captured.
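The "proportion of moved feature points" test can be sketched as follows. This is a simplified, hypothetical stand-in (the thresholds and the movement tolerance `move_eps` are illustrative, not values from the patent):

```python
import numpy as np

def camera_moved(prev_pts, curr_pts, move_eps=1.5, ratio_threshold=0.2):
    """Report camera movement if the fraction of tracked feature points
    that shifted by more than move_eps pixels reaches ratio_threshold."""
    shifts = np.linalg.norm(curr_pts - prev_pts, axis=1)
    moved_ratio = float(np.mean(shifts > move_eps))
    return bool(moved_ratio >= ratio_threshold)

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(50, 2))
static = camera_moved(pts, pts + 0.1)                    # sub-pixel jitter only
panned = camera_moved(pts, pts + np.array([8.0, 0.0]))   # whole scene shifted
print(static, panned)   # -> False True
```

When `camera_moved` returns False for both cameras, the first condition holds and the previous frame's projection transformation parameters can be reused.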
  • the image stitching apparatus 70 further includes a second judgment module 7015, configured to judge whether a second condition is satisfied, the second condition including: the brightness change of the first picture relative to the third picture is less than a preset first brightness change threshold, and the brightness change of the second picture relative to the fourth picture is less than a preset second brightness change threshold; and further includes a second parameter acquisition module 7016, configured to acquire, if the second condition is satisfied, the first exposure compensation parameter used when performing exposure compensation on the third picture and the second exposure compensation parameter used when performing exposure compensation on the fourth picture.
  • the picture processing module 7014 is specifically configured to: fuse the first picture and the second picture, wherein the first exposure compensation parameter is used to perform exposure compensation on the first picture and the second exposure compensation parameter is used to perform exposure compensation on the second picture.
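A minimal sketch of what an exposure compensation parameter could look like and how it would be applied: here a single scalar gain estimated from the overlap region. This is an assumption for illustration; the patent does not specify the form of the parameter.

```python
import numpy as np

def estimate_gain(overlap_a, overlap_b):
    """Estimate a scalar gain mapping picture B's overlap brightness onto
    picture A's (a minimal stand-in for exposure estimation, step S201)."""
    return float(np.mean(overlap_a)) / float(np.mean(overlap_b))

def apply_gain(img, gain):
    """Exposure-compensate an 8-bit image with a scalar gain."""
    return np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)

a = np.full((4, 4), 120, dtype=np.uint8)   # overlap as seen by camera 1
b = np.full((4, 4), 60, dtype=np.uint8)    # same overlap, darker exposure
g = estimate_gain(a, b)                    # -> 2.0
compensated = apply_gain(b, g)             # brightness now matches picture A
print(g, compensated[0, 0])                # -> 2.0 120
```

When the second condition holds, `g` from the previous frame is simply reused rather than re-estimated.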
  • the image stitching apparatus 70 further includes a first parameter calculation module 7017, configured to calculate, if the first condition is not satisfied, the third projection transformation parameter needed to perform projection transformation on the first picture and the fourth projection transformation parameter needed to perform projection transformation on the second picture; the picture processing module 7014 is specifically configured to fuse the first picture and the second picture, wherein the third projection transformation parameter is used to perform projection transformation on the first picture and the fourth projection transformation parameter is used to perform projection transformation on the second picture;
  • the image stitching apparatus 70 further includes a second parameter calculation module 7018, configured to calculate, if the second condition is not satisfied, the third exposure compensation parameter needed to perform exposure compensation on the first picture and the fourth exposure compensation parameter needed to perform exposure compensation on the second picture; the picture processing module 7014 is specifically configured to fuse the first picture and the second picture, wherein the third exposure compensation parameter is used to perform exposure compensation on the first picture and the fourth exposure compensation parameter is used to perform exposure compensation on the second picture.
  • the operation of the first parameter calculation module 7017 calculating the third projection transformation parameter and the fourth projection transformation parameter and the operation of the second parameter calculation module 7018 calculating the third exposure compensation parameter and the fourth exposure compensation parameter are performed in parallel.
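The parallel execution of the two parameter computations can be sketched with Python's standard thread pool. The two compute functions below are stubs standing in for modules 7017 and 7018; their contents are placeholders, not the patent's algorithms.

```python
from concurrent.futures import ThreadPoolExecutor

def compute_projection_params(frame_pair):
    # stub for feature detection, matching and homography estimation (S506)
    return {"H1": "homography-1", "H2": "homography-2"}

def compute_exposure_params(frame_pair):
    # stub for exposure estimation (S507)
    return {"g1": 1.0, "g2": 1.1}

def register_and_estimate(frame_pair):
    """Run the projection-parameter and exposure-parameter computations
    concurrently instead of serially."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        f_proj = pool.submit(compute_projection_params, frame_pair)
        f_expo = pool.submit(compute_exposure_params, frame_pair)
        return f_proj.result(), f_expo.result()

proj, expo = register_and_estimate(("pic1", "pic2"))
print(proj, expo)
```

Running the two independent computations concurrently means the wall-clock cost of this stage is roughly the slower of the two, rather than their sum as in a serial pipeline.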
  • the above modules can also be regarded as functional modules implemented by hardware, used to realize the functions involved when the image stitching apparatus 70 executes the image stitching method; for example, the control logic of each process involved in the method is burned in advance into a chip such as a Field-Programmable Gate Array (FPGA) or a Complex Programmable Logic Device (CPLD), and these chips or devices perform the functions of the above modules; the specific implementation may depend on engineering practice.
  • the image stitching apparatus 70 may further include a communication module 7003, which is used for communication between the image stitching apparatus 70 and other devices, such as communication with a camera.
  • embodiments of the present invention may include apparatuses having architectures different from those shown in FIG. 7 .
  • the above structure is only exemplary, and is used to explain the image stitching method 50 provided by the embodiment of the present invention.
  • the at least one processor 7002 described above may include a microprocessor, an application specific integrated circuit (ASIC), a digital signal processor (DSP), a central processing unit (CPU), a graphics processing unit (GPU), a state machine, and the like.
  • Examples of computer-readable media include, but are not limited to, floppy disks, CD-ROMs, magnetic disks, memory chips, ROM, RAM, ASICs, configured processors, optical media, magnetic tape or other magnetic media, or any other medium from which a computer processor can read instructions.
  • various other forms of computer-readable media can transmit or carry instructions to a computer, including routers, private or public networks, or other wired and wireless transmission devices or channels. Instructions may include code in any computer programming language, including C, C++, Visual Basic, Java, and JavaScript.
  • the embodiments of the present invention further provide a computer-readable medium, where computer-readable instructions are stored on the computer-readable medium, and when the computer-readable instructions are executed by the processor, the processor executes the foregoing image stitching method.
  • Examples of computer-readable media include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), magnetic tape, non-volatile memory cards and ROMs.
  • the computer readable instructions may be downloaded from a server computer or cloud over a communications network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present invention relate to image processing technology, and in particular to an image stitching method, apparatus and computer-readable medium. The method includes: acquiring a first picture of the current frame captured by a first camera and a second picture of the current frame captured by a second camera, wherein the areas captured in the first picture and the second picture overlap; judging whether a first condition is satisfied: the relative positional relationship between the first camera and the second camera remains unchanged when the pictures of the current frame and the previous frame are taken; if so, acquiring the first projection transformation parameter used when performing projection transformation on the third picture of the previous frame captured by the first camera and the second projection transformation parameter used when performing projection transformation on the fourth picture of the previous frame captured by the second camera; and fusing the first picture and the second picture, wherein the first picture is projectively transformed using the first projection transformation parameter and the second picture is projectively transformed using the second projection transformation parameter.

Description

Image Stitching Method, Apparatus and Computer-Readable Medium — Technical Field
Embodiments of the present invention relate to the technical field of image processing, and in particular to an image stitching method, apparatus and computer-readable medium.
Background
A Closed Circuit Television (CCTV) system is a video data transmission system in which video data is transmitted over a fixed loop. In a CCTV system, the cameras, monitors and recording devices are directly connected. One of the most common applications of CCTV systems is the security camera system, which is widely used in retail stores, banks and government organizations, and even in home environments.
However, in a CCTV system, multiple video channels are displayed on a monitoring wall at the same time (as shown in FIG. 1), and considerable manpower is required to watch each monitoring screen one by one. Therefore, if the images from different channels can be stitched together (as shown in FIG. 2) for easier monitoring, the manpower investment can be greatly reduced.
Some current image stitching methods can stitch different pictures taken by the same camera into a complete picture with a larger field of view, but they are usually time-consuming and cannot meet the needs of real-time video stream processing. FIG. 3 shows the flow of a current image stitching method, and FIG. 4 shows the image processing results of each stage. The flow mainly includes two stages: an image registration stage 10 and an image composition stage 20. The image registration stage 10 mainly includes step S101 feature point calculation, step S102 feature point matching and step S103 homography estimation; the obtained homography matrix is used as the projection transformation parameter in the image composition stage. The image composition stage 20 mainly includes step S201 exposure estimation and step S202 picture fusion; during picture fusion, the projection transformation parameters calculated in the image registration stage 10 are used to perform projection transformation on the pictures to be stitched. The flow shown in FIG. 3 requires a large amount of computation; in particular, the image registration stage 10 has the highest computational complexity and takes the longest time, and cannot meet the needs of real-time video processing.
Summary of the Invention
Embodiments of the present invention provide an image stitching method, apparatus and computer-readable medium, which improve the current image stitching flow and can greatly shorten the computation time to meet the needs of real-time video processing.
In a first aspect, an image stitching method is provided, which may be executed by an edge processing device connected to both a first camera and a second camera. The method may include: acquiring a first picture of the current frame captured by the first camera and a second picture of the current frame captured by the second camera, wherein the areas captured in the first picture and the second picture overlap; judging whether a first condition is satisfied, the first condition including: when the pictures of the current frame and the previous frame are taken, the relative positional relationship between the first camera and the second camera remains unchanged; if the first condition is satisfied, acquiring the first projection transformation parameter used when performing projection transformation on the third picture of the previous frame captured by the first camera and the second projection transformation parameter used when performing projection transformation on the fourth picture of the previous frame captured by the second camera; and fusing the first picture and the second picture, wherein the first picture is projectively transformed using the first projection transformation parameter and the second picture is projectively transformed using the second projection transformation parameter.
In a second aspect, an image stitching apparatus is provided, including modules for executing the steps of the method provided in the first aspect.
In a third aspect, an image stitching apparatus is provided, including: at least one memory configured to store computer-readable code; and at least one processor configured to invoke the computer-readable code to execute the steps provided in the first aspect.
In a fourth aspect, a computer-readable medium is provided, on which computer-readable instructions are stored; when executed by a processor, the computer-readable instructions cause the processor to execute the steps provided in the first aspect.
With the embodiments of the present invention, when the positional relationship between the cameras is unchanged, the projection transformation parameters of the previous frame can be reused, which avoids the delay caused by recomputation and greatly reduces computational complexity. For target tracking scenarios, pictures captured by different cameras with mutually overlapping areas can be stitched into a picture with a wider field of view, converting the multi-camera tracking problem into the relatively mature single-camera tracking problem and reducing the technical difficulty. When applied in a CCTV system, the pictures taken by cameras in the same group can be stitched together in real time to obtain a single stitched picture or video stream, which reduces the number of input channels and the workload of manual monitoring.
For any of the above aspects, optionally, whether the first condition is satisfied may be judged as follows: if the positions of the first camera and the second camera both remain unchanged when the pictures of the current frame and the previous frame are taken, it is determined that the relative positional relationship between the first camera and the second camera remains unchanged. This provides a simple way to determine that the positional relationship between the cameras is unchanged.
When judging whether the first condition is satisfied, a first proportion of the number of feature points whose gradient direction changed in the first picture relative to the third picture to the number of all feature points may be determined; if the first proportion is less than a preset first proportion threshold, it is determined that the position of the first camera remained unchanged when the current frame and the previous frame were captured. A second proportion of the number of feature points whose gradient direction changed in the second picture relative to the fourth picture to the number of all feature points is determined; if the second proportion is less than a preset second proportion threshold, it is determined that the position of the second camera remained unchanged when the current frame and the previous frame were captured. Alternatively, a third proportion of the number of feature points that moved in the first picture relative to the third picture to the number of all feature points may be determined; if the third proportion is less than a preset third proportion threshold, it is determined that the position of the first camera remained unchanged when the current frame and the previous frame were captured. A fourth proportion of the number of feature points that moved in the second picture relative to the fourth picture to the number of all feature points is determined; if the fourth proportion is less than a preset fourth proportion threshold, it is determined that the position of the second camera remained unchanged when the current frame and the previous frame were captured.
For any of the above aspects, optionally, whether a second condition is satisfied may also be judged, the second condition including: the brightness change of the first picture relative to the third picture is less than a preset first brightness change threshold, and the brightness change of the second picture relative to the fourth picture is less than a preset second brightness change threshold. If the second condition is satisfied, the first exposure compensation parameter used when performing exposure compensation on the third picture and the second exposure compensation parameter used when performing exposure compensation on the fourth picture are acquired. When fusing the first picture and the second picture, exposure compensation is performed on the first picture using the first exposure compensation parameter and on the second picture using the second exposure compensation parameter. When the exposure is unchanged, the exposure compensation parameters of the previous frame can be reused, further reducing the duration of picture stitching.
For any of the above aspects, optionally, if the first condition is not satisfied, the third projection transformation parameter needed to perform projection transformation on the first picture and the fourth projection transformation parameter needed to perform projection transformation on the second picture are calculated; when fusing the first picture and the second picture, projection transformation is performed on the first picture using the third projection transformation parameter and on the second picture using the fourth projection transformation parameter. Optionally, if the second condition is not satisfied, the third exposure compensation parameter needed to perform exposure compensation on the first picture and the fourth exposure compensation parameter needed to perform exposure compensation on the second picture are calculated; when fusing the first picture and the second picture, exposure compensation is performed on the first picture using the third exposure compensation parameter and on the second picture using the fourth exposure compensation parameter. The step of calculating the third and fourth projection transformation parameters and the step of calculating the third and fourth exposure compensation parameters are executed in parallel. In previous image stitching methods, exposure estimation and projection transformation parameter calculation were serial; here the two are executed in parallel, so even when the previous frame's parameters cannot be reused, the overall execution of the image stitching method is shortened.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a monitoring wall in a CCTV system.
FIG. 2 is a schematic diagram of pictures taken by different cameras stitched together.
FIG. 3 shows the flow of a current image stitching method.
FIG. 4 shows pictures stitched by the image stitching method shown in FIG. 3.
FIG. 5 is a flowchart of the image stitching method provided by an embodiment of the present invention.
FIG. 6 shows the connection relationship between the cameras and the edge processing device when the image stitching method provided by an embodiment of the present invention is adopted.
FIG. 7 is a schematic structural diagram of the image stitching apparatus provided by an embodiment of the present invention.
FIG. 8 is a schematic diagram of an embodiment of the present invention applied in a CCTV system.
FIG. 9 is a schematic diagram of an embodiment of the present invention applied in a traffic monitoring system.
FIG. 10A to FIG. 10C show camera deployment manners in embodiments of the present invention.
List of reference numerals:
10: image registration stage
S101: feature point calculation
S102: feature point matching
S103: homography estimation
20: image composition stage
S201: exposure estimation
S202: picture fusion
50: image stitching method provided by an embodiment of the present invention
S500–S507, S5071–S5073: method steps
61, 62, …, 6n: cameras
70: edge processing device
7001: memory
7002: processor
701: image stitching program
7011–7018: program modules
7003: communication module
80: router
90: DVR
Detailed Description
The subject matter described herein will now be discussed with reference to example embodiments. It should be understood that these embodiments are discussed only to enable those skilled in the art to better understand and implement the subject matter described herein, and are not limitations on the scope of protection, applicability or examples set forth in the claims. The functions and arrangement of the elements discussed may be changed without departing from the scope of the embodiments of the present invention. Various examples may omit, substitute or add various processes or components as needed. For example, the described method may be performed in an order different from that described, and various steps may be added, omitted or combined. In addition, features described with respect to some examples may also be combined in other examples.
As used herein, the term "include" and its variants are open terms meaning "including but not limited to". The term "based on" means "based at least in part on". The terms "one embodiment" and "an embodiment" mean "at least one embodiment". The term "another embodiment" means "at least one other embodiment". The terms "first", "second", etc. may refer to different or the same objects. Other definitions, whether explicit or implicit, may be included below. Unless clearly indicated by the context, the definition of a term is consistent throughout the specification.
As mentioned above, in current image stitching methods, the image registration stage takes the longest time due to its high computational complexity. The following table compares the computational complexity of each stage of the image stitching process:
Figure PCTCN2020142401-appb-000001
Therefore, if the computational complexity of the image registration stage can be reduced, the time required for that stage can be reduced, and in turn the duration of the entire image stitching process. The embodiments of the present invention consider a scenario in which the relative positional relationship of the cameras remains unchanged between the capture of the previous frame and the current frame. In that case, between the two frames, the angle of the same camera toward the photographed objects has not changed, so the projection transformation parameters can be regarded as unchanged, and the projection transformation parameters of the previous frame can be reused to process the current frame's pictures without recomputation, which greatly reduces computational complexity and shortens the processing delay. The embodiments of the present invention are described in detail below; for clarity of description, the stitching of two pictures is taken as an example, and the principle is the same for stitching multiple pictures.
As shown in FIG. 5, the image stitching method 50 provided by an embodiment of the present invention may include the following steps:
S500: Acquire the two pictures of the current frame to be stitched: a first picture and a second picture.
The areas captured in the first picture and the second picture overlap; the first picture is taken by a first camera and the second picture is taken by a second camera. For example, if the positions of two cameras remain unchanged while continuously capturing pictures and their fields of view overlap, then the areas captured in the two pictures obtained by the two cameras for the same frame can be regarded as overlapping.
S501: Judge whether a first condition is satisfied. The first condition includes: when the pictures of the current frame and the previous frame are taken, the relative positional relationship between the first camera and the second camera remains unchanged. If the first condition is satisfied, step S502 is executed; otherwise, step S506 is executed.
When judging in step S501 whether the first condition is satisfied, it may be considered that if the positions of the first camera and the second camera both remain unchanged when the pictures of the current frame and the previous frame are taken, the relative positional relationship between the first camera and the second camera can be determined to remain unchanged. In specific implementations, various schemes can be adopted, including Option 1 and Option 2 below.
Option 1
First, determine the first proportion of the number of feature points whose gradient direction changed in the first picture relative to the third picture to the number of all feature points; if the first proportion is less than a preset first proportion threshold, determine that the position of the first camera remained unchanged when the current frame and the previous frame were captured. Similarly, determine the second proportion of the number of feature points whose gradient direction changed in the second picture relative to the fourth picture to the number of all feature points; if the second proportion is less than a preset second proportion threshold, determine that the position of the second camera remained unchanged when the current frame and the previous frame were captured. Optionally, the first proportion threshold and the second proportion threshold are equal. Specifically, to judge quickly and effectively whether a camera's position has changed, for each video stream, the feature points of the first frame of the stream are calculated and their positions recorded; for each subsequent frame, the gradient direction of the picture is compared at these feature points. If the amount of change is within a preset range (for example, the aforementioned first proportion of feature points with gradient direction changes is less than the preset first proportion threshold), it is determined that the picture has not undergone a position change or rotation.
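Option 1 can be sketched with NumPy as follows. This is a hypothetical, simplified check (the angle tolerance and ratio threshold are illustrative), not the patent's actual implementation:

```python
import numpy as np

def gradient_direction(img, pts):
    """Gradient orientation (radians) of a grayscale image at feature points."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.array([np.arctan2(gy[y, x], gx[y, x]) for x, y in pts])

def position_unchanged(curr_img, prev_img, pts, angle_eps=0.3, ratio_threshold=0.2):
    """First-condition check (Option 1): the camera is treated as static if
    the fraction of feature points whose gradient direction changed stays
    below ratio_threshold."""
    d = np.abs(gradient_direction(curr_img, pts) - gradient_direction(prev_img, pts))
    changed_ratio = float(np.mean(d > angle_eps))
    return bool(changed_ratio < ratio_threshold)

# A fixed horizontal ramp: same scene in both frames -> directions unchanged
ramp = np.tile(np.arange(16, dtype=np.float64), (16, 1))
pts = [(4, 4), (8, 8), (12, 3)]
print(position_unchanged(ramp, ramp, pts))    # -> True
print(position_unchanged(ramp, ramp.T, pts))  # rotated scene -> False
```

The feature points themselves would come from the first frame of the stream (e.g. via SIFT/SURF/ORB) and be reused for every later frame, which keeps this check far cheaper than full re-registration.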
Option 2
First, determine the third proportion of the number of feature points that moved in the first picture relative to the third picture to the number of all feature points; if the third proportion is less than a preset third proportion threshold, determine that the position of the first camera remained unchanged when the current frame and the previous frame were captured. Similarly, determine the fourth proportion of the number of feature points that moved in the second picture relative to the fourth picture to the number of all feature points; if the fourth proportion is less than a preset fourth proportion threshold, determine that the position of the second camera remained unchanged when the current frame and the previous frame were captured. Optionally, the third proportion threshold and the fourth proportion threshold are equal.
S502: Acquire the projection transformation parameter (denoted the "first projection transformation parameter") used when performing projection transformation on the picture of the previous frame taken by the first camera (denoted the "third picture"), and the projection transformation parameter (denoted the "second projection transformation parameter") used when performing projection transformation on the picture of the previous frame taken by the second camera (denoted the "fourth picture"). Here the projection transformation parameters may be homography matrices.
S506: Calculate the third projection transformation parameter needed to perform projection transformation on the first picture and the fourth projection transformation parameter needed to perform projection transformation on the second picture. Since the projection transformation parameters of the previous frame cannot be used, the projection transformation parameters of the current frame must be recomputed. Optionally, the overlapping area between the two pictures and the spatial mapping relationship between them can first be found, and then the relationship between this group of pictures and the stitched picture can be obtained. The feature points of each picture in the group are calculated; usable methods include, but are not limited to, Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF) and Oriented FAST and Rotated BRIEF (ORB). These feature points are independent of the size and angle of the picture and can serve as reference points in the picture. The feature points of the pictures are matched to find the overlapping area between the pictures and a set of corresponding points, and these corresponding points are used to calculate the homography matrix between the two pictures. The calculated homography matrix is used as the projection transformation parameter to perform the projection transformation. A homography is a projective transformation used in projective geometry that describes the change in the perceived position of an observed object when the observer's viewpoint changes.
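The homography estimation at the heart of step S506 can be illustrated with a minimal Direct Linear Transform (DLT) from four point correspondences. A real pipeline would obtain the correspondences from SIFT/SURF/ORB matching and use robust estimation such as RANSAC (e.g. OpenCV's `findHomography`); this NumPy-only sketch assumes exact correspondences:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography mapping src
    points to dst points (at least 4 correspondences required)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix,
    # i.e. the right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=np.float64))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

H_true = np.array([[1.2, 0.1, 5.0],
                   [0.05, 0.9, -3.0],
                   [1e-3, 5e-4, 1.0]])
src = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
dst_h = (H_true @ np.c_[src, np.ones(4)].T).T
dst = dst_h[:, :2] / dst_h[:, 2:]
H_est = estimate_homography(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))   # -> True
```

Because this estimation is the most expensive part of registration, the method's core optimization is to skip it entirely (step S502) whenever the first condition shows the cached homography is still valid.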
When fusing pictures, exposure compensation may optionally also be performed. Therefore, optionally, the exposure compensation parameters can be obtained through the following steps.
S504: Judge whether a second condition is satisfied. The second condition is used to judge whether the brightness change between the two frames is sufficiently small; if so, the current frame can reuse the exposure compensation parameters of the previous frame. Specifically, the second condition may include: the brightness change of the first picture relative to the third picture is less than a preset first brightness change threshold, and the brightness change of the second picture relative to the fourth picture is less than a preset second brightness change threshold. Optionally, the first brightness change threshold and the second brightness change threshold are equal.
Specifically, similar to the judgment of the first condition, each frame of a video stream is compared in brightness at the previously recorded feature point positions. If the degree of brightness change exceeds the threshold, the exposure of the camera capturing the video stream may have changed, and the exposure compensation parameters need to be re-estimated.
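The brightness comparison at recorded feature points can be sketched as follows; the threshold value and the use of mean absolute difference are illustrative assumptions, not values from the patent:

```python
import numpy as np

def exposure_unchanged(curr_img, prev_img, pts, brightness_threshold=10.0):
    """Second-condition check: reuse the previous frame's exposure
    compensation parameters only if the mean brightness change at the
    recorded feature points stays below the preset threshold."""
    curr = np.array([float(curr_img[y, x]) for x, y in pts])
    prev = np.array([float(prev_img[y, x]) for x, y in pts])
    return bool(np.mean(np.abs(curr - prev)) < brightness_threshold)

prev = np.full((8, 8), 100, dtype=np.uint8)
pts = [(2, 2), (5, 3), (6, 6)]
print(exposure_unchanged(prev + 2, prev, pts))    # small drift -> True
print(exposure_unchanged(prev + 40, prev, pts))   # exposure jump -> False
```

Sampling only at the stored feature points keeps this check cheap enough to run on every frame of a live stream.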
If the second condition is satisfied, step S505 is executed; otherwise, step S507 is executed.
S505: Acquire the first exposure compensation parameter used when performing exposure compensation on the third picture of the previous frame and the second exposure compensation parameter used when performing exposure compensation on the fourth picture of the previous frame.
S507: Calculate the third exposure compensation parameter needed to perform exposure compensation on the first picture and the fourth exposure compensation parameter needed to perform exposure compensation on the second picture. Since the exposure compensation parameters of the previous frame cannot be used, the exposure compensation parameters of the current frame must be recomputed.
After the projection transformation parameters and exposure compensation parameters have been obtained through the preceding steps, the first picture and the second picture are fused through the following step.
S503: Fuse the first picture and the second picture.
The projection transformation parameters obtained in the preceding steps are used to perform projection transformation on the two pictures respectively, and the exposure compensation parameters obtained in the preceding steps are used to perform exposure compensation on the two pictures respectively. Then, the first picture and the second picture after projection transformation and exposure compensation are fused, and the two pictures are stitched into a larger complete picture. Methods such as multi-band blending can be used for picture fusion. Exposure compensation is an optional processing step.
In the foregoing process, as shown in FIG. 5, the step of obtaining the projection transformation parameters and the step of obtaining the exposure compensation parameters are executed in parallel. On the one hand, if the aforementioned first condition is satisfied, the projection transformation parameters of the previous frame can be used directly to process the current frame's pictures; since the projection transformation parameters of the current frame need not be calculated, the computational complexity of the overall image stitching process is greatly reduced and the processing time is shortened. On the other hand, through extensive study of and experiments on the projection transformation process and the exposure compensation process, the inventors found that the calculation of the projection transformation parameters and the calculation of the exposure compensation parameters are relatively independent, and therefore executed the two processes in parallel. That is, even if the aforementioned first condition is not satisfied, the step of calculating the projection transformation parameters and the step of obtaining the exposure compensation parameters can still be executed in parallel, which also partially shortens the processing time of the entire image stitching process compared with the serial processing of previous picture stitching methods.
In summary, the picture stitching method provided by the embodiments of the present invention includes three processing steps: image registration, exposure estimation and picture fusion. Usually, all three steps need to be executed only for the first frame; subsequent frames need only picture fusion, unless the positional relationship between the cameras or the exposure compensation parameters change, in which case recomputation is required.
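The reuse logic summarized above — pay the full registration/estimation cost only when a condition fails, otherwise fuse with cached parameters — can be sketched as a small caching pattern. All names and the stub compute functions are hypothetical:

```python
class StitcherState:
    """Cache projection and exposure parameters across frames; recompute
    them only when the corresponding condition fails."""

    def __init__(self):
        self.H = None       # cached projection transformation parameters
        self.gains = None   # cached exposure compensation parameters

    def params_for_frame(self, cameras_static, exposure_stable,
                         compute_h, compute_gains):
        if self.H is None or not cameras_static:
            self.H = compute_h()           # image registration (expensive)
        if self.gains is None or not exposure_stable:
            self.gains = compute_gains()   # exposure estimation
        return self.H, self.gains

calls = {"h": 0, "g": 0}
state = StitcherState()
for frame in range(5):
    state.params_for_frame(
        cameras_static=True, exposure_stable=True,
        compute_h=lambda: calls.__setitem__("h", calls["h"] + 1) or "H",
        compute_gains=lambda: calls.__setitem__("g", calls["g"] + 1) or "G")
print(calls)   # -> {'h': 1, 'g': 1}: only the first frame pays the full cost
```

Under ideal conditions every frame after the first takes only the fusion path, which is consistent with the large speedups reported below.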
Experiments show that under ideal conditions (i.e., when the positional relationship between the cameras and the exposure compensation parameters are unchanged), the image stitching method provided by the embodiments of the present invention can be 60 times faster than traditional image stitching methods. Running the method on an edge processing device takes only 30 ms or less, whereas image stitching with traditional methods on the same edge processing device takes approximately 2000 ms. An optional implementation is shown in FIG. 6: cameras 61, 62, …, 6n are all connected to the same edge processing device 70; each camera captures pictures and transmits them to the edge processing device 70, which executes the above image stitching method 50 to stitch the pictures from the cameras.
It should be noted that the above method is illustrated with two cameras; in practical applications, pictures taken by more than two cameras may be stitched together. The principle is the same as the above method and is not repeated here.
The method provided by the embodiments of the present invention can be applied in various scenarios requiring image stitching, described below:
1. Multi-Target Multi-Camera Tracking (MTMC)
The technical difficulty of an MTMC system lies in finding the same target in the views of different cameras. Due to different shooting angles and camera parameters, the features of the same target captured by different cameras are not identical, whereas the techniques for tracking multiple targets in pictures taken by the same camera are relatively mature. The image stitching method 50 provided by the embodiments of the present invention performs real-time image stitching: the pictures taken by multiple cameras at the same location are stitched, the field of view of the cameras is extended, and the technical difficulty of cross-camera target tracking is thereby overcome. As shown in FIG. 2, the image stitching method 50 can stitch pictures taken by different cameras, each with a certain field of view and mutually overlapping areas, into a picture with a wider field of view; the video formed from such pictures collected frame by frame in real time thus has a wider field of view. By properly setting the positions of the cameras, the entire monitored area can be covered, converting the multi-camera tracking problem into the relatively mature single-camera tracking problem.
2. CCTV systems
With the image stitching method 50 provided by the embodiments of the present invention, pictures taken by cameras in adjacent areas can be stitched together. Specifically, the surveillance cameras can be grouped by area; the areas monitored by cameras in the same group are adjacent and mutually overlapping. The pictures taken by cameras in the same group are stitched together in real time to obtain a more natural stitched picture or video data stream. As shown in FIG. 8, several cameras 61, 62, …, 6n are connected to the same edge processing device 70, which can directly obtain the video streams of the cameras, stitch the pictures, and combine multiple video streams into one. The stitched video stream is sent (possibly via a gateway) to a router 80 and a Digital Video Recorder (DVR) 90, and the DVR 90 finally presents the video stream to the user or displays it on a monitoring wall. On the one hand, the number of input channels can be reduced; moreover, the spatial positional relationships between the cameras are integrated, and the areas captured by multiple cameras are merged into one complete area, which suits target recognition and tracking applications; in addition, the workload of manual monitoring is greatly reduced.
3. Traffic monitoring
As shown in FIG. 9, in a traffic monitoring system, to monitor and fully cover a road section, a large number of cameras must be deployed along the road, each monitoring only a small part of the section. For example, to detect parked vehicles, pictures of a vehicle may be captured from multiple angles for vehicle recognition and license plate capture. With the image stitching method 50 provided by the embodiments of the present invention, on the one hand, pictures taken from different angles can capture a vehicle's license plate more effectively, reducing the difficulty of vehicle detection; on the other hand, stitching the pictures taken by multiple cameras converts the multi-camera target tracking problem into a single-camera target tracking problem, which greatly reduces the technical difficulty and the computational complexity of vehicle tracking.
Whether in the aforementioned MTMC system, CCTV system, traffic monitoring system, or other systems requiring video surveillance, with the method provided by the embodiments of the present invention, cameras are assembled into groups through picture stitching and picture fusion; a group is compatible with cameras of different types and different fields of view, achieving monitoring effects that traditional camera equipment cannot achieve. As shown in FIG. 10A, FIG. 10B and FIG. 10C, various methods can be used to deploy cameras to meet the requirements of different application scenarios.
For example, as shown in FIG. 10A, cameras facing the same direction can be deployed horizontally, vertically, or in both directions, which expands the width and height of the monitoring picture and realizes coverage of a larger viewing angle.
For another example, for scenes with occlusions or blind spots in the field of view, the monitored area can be covered from multiple angles in the manner shown in FIG. 6B to reduce the influence of blind spots and dead angles.
For yet another example, the same functions as a panoramic camera or wide-angle camera can be realized in the manner shown in FIG. 6C. The desired field of view can be achieved by combining existing cameras according to actual needs.
In practical applications, the image stitching method 50 provided by the embodiments of the present invention allows the angles and directions of the cameras to be adjusted flexibly according to actual requirements, as long as the areas captured by the cameras in the same group overlap. Pictures from video streams with different angles are stitched together; the shape of the stitched picture depends on the fields of view of the combined cameras, so surveillance pictures with special shapes can be generated. The functions of a wide-angle or even panoramic camera can be realized with existing cameras, camera deployment is flexible, and the angle desired by the user can be obtained instead of a fixed angle.
In the vast majority of current systems, the angle of each camera is fixed, and different application scenarios require different numbers of cameras; the more cameras are deployed, the more monitoring pictures must be displayed. However, with the picture stitching method 50 provided by the embodiments of the present invention, the integration of video streams can be realized and the flexibility of applications improved. By deploying software or an application implementing the method, the existing "one camera, one screen" approach can be transformed into a "multiple cameras, one screen" approach.
An embodiment of the present invention further provides an image stitching apparatus 70 that can execute the aforementioned image stitching method 50. The image stitching apparatus 70 may be implemented as a network of computer processors to perform the processing on the image stitching apparatus 70 side of the image stitching method 50 in the embodiments of the present invention. The image stitching apparatus 70 may also be a single computer, a single-chip microcomputer or a processing chip as shown in FIG. 3, and may be deployed in the aforementioned edge processing device 70. The image stitching apparatus 70 may include at least one memory 7001, which includes a computer-readable medium, such as random access memory (RAM). The image stitching apparatus 70 also includes at least one processor 7002 coupled with the at least one memory 7001. Computer-executable instructions are stored in the at least one memory 7001 and, when executed by the at least one processor 7002, can cause the at least one processor 7002 to perform the steps described herein.
The at least one memory 7001 shown in FIG. 3 may contain an image stitching program 701, so that the at least one processor 7002 performs the processing on the image stitching apparatus 70 side of the image stitching method 50 described in the embodiments of the present invention. The image stitching program 701 may include:
- a picture acquisition module 7011, configured to acquire a first picture of the current frame taken by the first camera and a second picture of the current frame taken by the second camera, wherein the areas captured in the first picture and the second picture overlap;
- a first judgment module 7012, configured to judge whether a first condition is satisfied, the first condition including: when the pictures of the current frame and the previous frame are taken, the relative positional relationship between the first camera and the second camera remains unchanged;
- a first parameter acquisition module 7013, configured to acquire, if the first condition is satisfied, the first projection transformation parameter used when performing projection transformation on the third picture of the previous frame captured by the first camera and the second projection transformation parameter used when performing projection transformation on the fourth picture of the previous frame captured by the second camera;
- a picture processing module 7014, configured to fuse the first picture and the second picture, wherein the first picture is projectively transformed using the first projection transformation parameter and the second picture is projectively transformed using the second projection transformation parameter.
Optionally, the first judgment module 7012 is specifically configured to: if the positions of the first camera and the second camera both remain unchanged when the pictures of the current frame and the previous frame are taken, determine that the relative positional relationship between the first camera and the second camera remains unchanged.
Optionally, the first judgment module 7012 is specifically configured to: determine a first proportion of the number of feature points whose gradient direction changed in the first picture relative to the third picture to the number of all feature points; if the first proportion is less than a preset first proportion threshold, determine that the position of the first camera remained unchanged when the current frame and the previous frame were captured; determine a second proportion of the number of feature points whose gradient direction changed in the second picture relative to the fourth picture to the number of all feature points; if the second proportion is less than a preset second proportion threshold, determine that the position of the second camera remained unchanged when the current frame and the previous frame were captured.
Optionally, the first judgment module 7012 is specifically configured to: determine a third proportion of the number of feature points that moved in the first picture relative to the third picture to the number of all feature points; if the third proportion is less than a preset third proportion threshold, determine that the position of the first camera remained unchanged when the current frame and the previous frame were captured; determine a fourth proportion of the number of feature points that moved in the second picture relative to the fourth picture to the number of all feature points; if the fourth proportion is less than a preset fourth proportion threshold, determine that the position of the second camera remained unchanged when the current frame and the previous frame were captured.
Optionally, the image stitching apparatus 70 further includes a second judgment module 7015, configured to judge whether a second condition is satisfied, the second condition including: the brightness change of the first picture relative to the third picture is less than a preset first brightness change threshold, and the brightness change of the second picture relative to the fourth picture is less than a preset second brightness change threshold; and further includes a second parameter acquisition module 7016, configured to acquire, if the second condition is satisfied, the first exposure compensation parameter used when performing exposure compensation on the third picture and the second exposure compensation parameter used when performing exposure compensation on the fourth picture. In this case, the picture processing module 7014 is specifically configured to fuse the first picture and the second picture, wherein exposure compensation is performed on the first picture using the first exposure compensation parameter and on the second picture using the second exposure compensation parameter.
Optionally, the image stitching apparatus 70 further includes a first parameter calculation module 7017, configured to calculate, if the first condition is not satisfied, the third projection transformation parameter needed to perform projection transformation on the first picture and the fourth projection transformation parameter needed to perform projection transformation on the second picture; the picture processing module 7014 is specifically configured to fuse the first picture and the second picture, wherein projection transformation is performed on the first picture using the third projection transformation parameter and on the second picture using the fourth projection transformation parameter. The image stitching apparatus 70 further includes a second parameter calculation module 7018, configured to calculate, if the second condition is not satisfied, the third exposure compensation parameter needed to perform exposure compensation on the first picture and the fourth exposure compensation parameter needed to perform exposure compensation on the second picture; the picture processing module 7014 is specifically configured to fuse the first picture and the second picture, wherein exposure compensation is performed on the first picture using the third exposure compensation parameter and on the second picture using the fourth exposure compensation parameter. The operation of the first parameter calculation module 7017 calculating the third and fourth projection transformation parameters and the operation of the second parameter calculation module 7018 calculating the third and fourth exposure compensation parameters are executed in parallel.
In addition, the above modules can also be regarded as functional modules implemented by hardware, used to realize the functions involved when the image stitching apparatus 70 executes the image stitching method; for example, the control logic of each process involved in the method is burned in advance into a chip such as a Field-Programmable Gate Array (FPGA) or a Complex Programmable Logic Device (CPLD), and these chips or devices perform the functions of the above modules; the specific implementation may depend on engineering practice.
In addition, the image stitching apparatus 70 may further include a communication module 7003 for communication between the image stitching apparatus 70 and other devices, such as communication with a camera.
It should be mentioned that embodiments of the present invention may include apparatuses having architectures different from that shown in FIG. 7. The above architecture is merely exemplary and serves to explain the image stitching method 50 provided by the embodiments of the present invention.
The at least one processor 7002 may include a microprocessor, an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a central processing unit (CPU), a graphics processing unit (GPU), a state machine, and the like. Examples of computer-readable media include, but are not limited to, floppy disks, CD-ROMs, magnetic disks, memory chips, ROM, RAM, ASICs, configured processors, optical media, magnetic tape or other magnetic media, or any other medium from which a computer processor can read instructions. In addition, various other forms of computer-readable media can transmit or carry instructions to a computer, including routers, private or public networks, or other wired and wireless transmission devices or channels. Instructions may include code in any computer programming language, including C, C++, Visual Basic, Java, and JavaScript.
In addition, embodiments of the present invention further provide a computer-readable medium on which computer-readable instructions are stored; when executed by a processor, the computer-readable instructions cause the processor to execute the aforementioned image stitching method. Examples of computer-readable media include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), magnetic tape, non-volatile memory cards and ROMs. Optionally, the computer-readable instructions may be downloaded from a server computer or cloud over a communications network.
It should be noted that not all steps and modules in the above flows and system structure diagrams are necessary; some steps or modules can be omitted according to actual needs. The execution order of the steps is not fixed and can be adjusted as needed. The system structures described in the above embodiments may be physical structures or logical structures; that is, some modules may be implemented by the same physical entity, some modules may be implemented by multiple physical entities, or some modules may be implemented jointly by certain components in multiple independent devices.

Claims (16)

  1. An image stitching method (50), characterized by comprising:
    - acquiring (S500) a first picture of the current frame captured by a first camera and a second picture of the current frame captured by a second camera, wherein the areas captured in the first picture and the second picture overlap;
    - judging (S501) whether a first condition is satisfied, the first condition comprising: when the pictures of the current frame and the previous frame are taken, the relative positional relationship between the first camera and the second camera remains unchanged;
    - if the first condition is satisfied, acquiring (S502) a first projection transformation parameter used when performing projection transformation on a third picture of the previous frame captured by the first camera and a second projection transformation parameter used when performing projection transformation on a fourth picture of the previous frame captured by the second camera;
    - fusing (S503) the first picture and the second picture, wherein the first picture is projectively transformed using the first projection transformation parameter and the second picture is projectively transformed using the second projection transformation parameter.
  2. The method according to claim 1, characterized in that judging (S501) whether the first condition is satisfied comprises: if the positions of the first camera and the second camera both remain unchanged when the pictures of the current frame and the previous frame are taken, determining that the relative positional relationship between the first camera and the second camera remains unchanged.
  3. The method according to claim 2, characterized in that judging (S501) whether the first condition is satisfied comprises:
    - determining a first proportion of the number of feature points whose gradient direction changed in the first picture relative to the third picture to the number of all feature points;
    - if the first proportion is less than a preset first proportion threshold, determining that the position of the first camera remained unchanged when the current frame and the previous frame were captured;
    - determining a second proportion of the number of feature points whose gradient direction changed in the second picture relative to the fourth picture to the number of all feature points;
    - if the second proportion is less than a preset second proportion threshold, determining that the position of the second camera remained unchanged when the current frame and the previous frame were captured.
  4. The method according to claim 2, characterized in that judging (S501) whether the first condition is satisfied comprises:
    - determining a third proportion of the number of feature points that moved in the first picture relative to the third picture to the number of all feature points;
    - if the third proportion is less than a preset third proportion threshold, determining that the position of the first camera remained unchanged when the current frame and the previous frame were captured;
    - determining a fourth proportion of the number of feature points that moved in the second picture relative to the fourth picture to the number of all feature points;
    - if the fourth proportion is less than a preset fourth proportion threshold, determining that the position of the second camera remained unchanged when the current frame and the previous frame were captured.
  5. The method according to claim 1, characterized by
    - further comprising:
    - judging (S504) whether a second condition is satisfied, the second condition comprising: the brightness change of the first picture relative to the third picture is less than a preset first brightness change threshold, and the brightness change of the second picture relative to the fourth picture is less than a preset second brightness change threshold;
    - if the second condition is satisfied, acquiring (S505) a first exposure compensation parameter used when performing exposure compensation on the third picture and a second exposure compensation parameter used when performing exposure compensation on the fourth picture;
    - wherein fusing (S503) the first picture and the second picture comprises:
    - fusing the first picture and the second picture, wherein exposure compensation is performed on the first picture using the first exposure compensation parameter and on the second picture using the second exposure compensation parameter.
  6. The method according to claim 5, characterized by further comprising:
    if the first condition is not satisfied, calculating (S506) a third projection transformation parameter needed to perform projection transformation on the first picture and a fourth projection transformation parameter needed to perform projection transformation on the second picture; fusing (S503) the first picture and the second picture comprises: fusing the first picture and the second picture, wherein projection transformation is performed on the first picture using the third projection transformation parameter and on the second picture using the fourth projection transformation parameter;
    if the second condition is not satisfied, calculating (S507) a third exposure compensation parameter needed to perform exposure compensation on the first picture and a fourth exposure compensation parameter needed to perform exposure compensation on the second picture; fusing (S503) the first picture and the second picture comprises: fusing the first picture and the second picture, wherein exposure compensation is performed on the first picture using the third exposure compensation parameter and on the second picture using the fourth exposure compensation parameter;
    wherein the step of calculating (S506) the third projection transformation parameter and the fourth projection transformation parameter and the step of calculating (S507) the third exposure compensation parameter and the fourth exposure compensation parameter are executed in parallel.
  7. The method according to claim 1, characterized in that
    the first camera and the second camera are cameras in a closed-circuit television (CCTV) system, and the method is executed by the same edge processing device to which the first camera and the second camera are connected; or
    the first camera and the second camera are cameras in a roadside parking system, and the method is executed by the same edge processing device to which the first camera and the second camera are connected.
  8. An image stitching apparatus (70), characterized by comprising:
    - a picture acquisition module (7011), configured to acquire a first picture of the current frame captured by a first camera and a second picture of the current frame captured by a second camera, wherein the areas captured in the first picture and the second picture overlap;
    - a first judgment module (7012), configured to judge whether a first condition is satisfied, the first condition comprising: when the pictures of the current frame and the previous frame are taken, the relative positional relationship between the first camera and the second camera remains unchanged;
    - a first parameter acquisition module (7013), configured to acquire, if the first condition is satisfied, a first projection transformation parameter used when performing projection transformation on a third picture of the previous frame captured by the first camera and a second projection transformation parameter used when performing projection transformation on a fourth picture of the previous frame captured by the second camera;
    - a picture processing module (7014), configured to fuse the first picture and the second picture, wherein the first picture is projectively transformed using the first projection transformation parameter and the second picture is projectively transformed using the second projection transformation parameter.
  9. The apparatus according to claim 8, characterized in that the first judgment module (7012) is specifically configured to: if the positions of the first camera and the second camera both remain unchanged when the pictures of the current frame and the previous frame are taken, determine that the relative positional relationship between the first camera and the second camera remains unchanged.
  10. The apparatus according to claim 9, characterized in that the first judgment module (7012) is specifically configured to:
    - determine a first proportion of the number of feature points whose gradient direction changed in the first picture relative to the third picture to the number of all feature points;
    - if the first proportion is less than a preset first proportion threshold, determine that the position of the first camera remained unchanged when the current frame and the previous frame were captured;
    - determine a second proportion of the number of feature points whose gradient direction changed in the second picture relative to the fourth picture to the number of all feature points;
    - if the second proportion is less than a preset second proportion threshold, determine that the position of the second camera remained unchanged when the current frame and the previous frame were captured.
  11. The apparatus according to claim 9, characterized in that the first judgment module (7012) is specifically configured to:
    - determine a third proportion of the number of feature points that moved in the first picture relative to the third picture to the number of all feature points;
    - if the third proportion is less than a preset third proportion threshold, determine that the position of the first camera remained unchanged when the current frame and the previous frame were captured;
    - determine a fourth proportion of the number of feature points that moved in the second picture relative to the fourth picture to the number of all feature points;
    - if the fourth proportion is less than a preset fourth proportion threshold, determine that the position of the second camera remained unchanged when the current frame and the previous frame were captured.
  12. The apparatus according to claim 8, characterized by
    - further comprising:
    - a second judgment module (7015), configured to judge whether a second condition is satisfied, the second condition comprising: the brightness change of the first picture relative to the third picture is less than a preset first brightness change threshold, and the brightness change of the second picture relative to the fourth picture is less than a preset second brightness change threshold;
    - a second parameter acquisition module (7016), configured to acquire, if the second condition is satisfied, a first exposure compensation parameter used when performing exposure compensation on the third picture and a second exposure compensation parameter used when performing exposure compensation on the fourth picture;
    - the picture processing module (7014) being specifically configured to fuse the first picture and the second picture, wherein exposure compensation is performed on the first picture using the first exposure compensation parameter and on the second picture using the second exposure compensation parameter.
  13. The apparatus according to claim 12, characterized by
    further comprising a first parameter calculation module (7017), configured to calculate, if the first condition is not satisfied, a third projection transformation parameter needed to perform projection transformation on the first picture and a fourth projection transformation parameter needed to perform projection transformation on the second picture; the picture processing module (7014) being specifically configured to fuse the first picture and the second picture, wherein projection transformation is performed on the first picture using the third projection transformation parameter and on the second picture using the fourth projection transformation parameter;
    further comprising a second parameter calculation module (7018), configured to calculate, if the second condition is not satisfied, a third exposure compensation parameter needed to perform exposure compensation on the first picture and a fourth exposure compensation parameter needed to perform exposure compensation on the second picture; the picture processing module (7014) being specifically configured to fuse the first picture and the second picture, wherein exposure compensation is performed on the first picture using the third exposure compensation parameter and on the second picture using the fourth exposure compensation parameter;
    wherein the operation of the first parameter calculation module (7017) calculating the third projection transformation parameter and the fourth projection transformation parameter and the operation of the second parameter calculation module (7018) calculating the third exposure compensation parameter and the fourth exposure compensation parameter are executed in parallel.
  14. The apparatus according to claim 8, characterized in that
    the first camera and the second camera are cameras in a closed-circuit television (CCTV) system, and the method is executed by the same edge processing device to which the first camera and the second camera are connected; or the first camera and the second camera are cameras in a roadside parking system, and the method is executed by the same edge processing device to which the first camera and the second camera are connected.
  15. An image stitching apparatus (70), characterized by comprising:
    at least one memory (7001), configured to store computer-readable code;
    at least one processor (7002), configured to invoke the computer-readable code to execute the method according to any one of claims 1 to 7.
  16. A computer-readable medium, characterized in that computer-readable instructions are stored on the computer-readable medium, and when executed by a processor, the computer-readable instructions cause the processor to execute the method according to any one of claims 1 to 7.
PCT/CN2020/142401 2020-12-31 2020-12-31 一种图像拼接方法、装置和计算机可读介质 WO2022141512A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP20967850.7A EP4273790A1 (en) 2020-12-31 2020-12-31 Image stitching method and apparatus, and computer-readable medium
PCT/CN2020/142401 WO2022141512A1 (zh) 2020-12-31 2020-12-31 一种图像拼接方法、装置和计算机可读介质
CN202080107492.8A CN116490894A (zh) 2020-12-31 2020-12-31 一种图像拼接方法、装置和计算机可读介质
US18/259,342 US20240064265A1 (en) 2020-12-31 2020-12-31 Image Stitching Method and Apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/142401 WO2022141512A1 (zh) 2020-12-31 2020-12-31 一种图像拼接方法、装置和计算机可读介质

Publications (1)

Publication Number Publication Date
WO2022141512A1 true WO2022141512A1 (zh) 2022-07-07

Family

ID=82260109

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/142401 WO2022141512A1 (zh) 2020-12-31 2020-12-31 一种图像拼接方法、装置和计算机可读介质

Country Status (4)

Country Link
US (1) US20240064265A1 (zh)
EP (1) EP4273790A1 (zh)
CN (1) CN116490894A (zh)
WO (1) WO2022141512A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851076A (zh) * 2015-05-27 2015-08-19 武汉理工大学 用于商用车的全景环视泊车辅助系统及摄像头安装方法
CN110796596A (zh) * 2019-08-30 2020-02-14 深圳市德赛微电子技术有限公司 图像拼接方法、成像装置及全景成像系统
CN111583110A (zh) * 2020-04-24 2020-08-25 华南理工大学 一种航拍图像的拼接方法
US20200374498A1 (en) * 2018-12-17 2020-11-26 Lightform, Inc. Method for augmenting surfaces in a space with visual content

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592124B (zh) * 2011-01-13 2013-11-27 汉王科技股份有限公司 文本图像的几何校正方法、装置和双目立体视觉系统
EP3032459A1 (en) * 2014-12-10 2016-06-15 Ricoh Company, Ltd. Realogram scene analysis of images: shelf and label finding
CN106709894B (zh) * 2015-08-17 2020-10-27 北京亿羽舜海科技有限公司 一种图像实时拼接方法及系统
CN110874817B (zh) * 2018-08-29 2022-02-01 上海商汤智能科技有限公司 图像拼接方法和装置、车载图像处理装置、设备、介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851076A (zh) * 2015-05-27 2015-08-19 武汉理工大学 用于商用车的全景环视泊车辅助系统及摄像头安装方法
US20200374498A1 (en) * 2018-12-17 2020-11-26 Lightform, Inc. Method for augmenting surfaces in a space with visual content
CN110796596A (zh) * 2019-08-30 2020-02-14 深圳市德赛微电子技术有限公司 图像拼接方法、成像装置及全景成像系统
CN111583110A (zh) * 2020-04-24 2020-08-25 华南理工大学 一种航拍图像的拼接方法

Also Published As

Publication number Publication date
CN116490894A (zh) 2023-07-25
US20240064265A1 (en) 2024-02-22
EP4273790A1 (en) 2023-11-08

Similar Documents

Publication Publication Date Title
KR101956151B1 (ko) 사용자 단말기에 이용되는 전경 영상 생성 방법 및 장치
US7583815B2 (en) Wide-area site-based video surveillance system
US10489885B2 (en) System and method for stitching images
CN111583116A (zh) 基于多摄像机交叉摄影的视频全景拼接融合方法及系统
JP2009193421A (ja) 画像処理装置、カメラ装置、画像処理方法、およびプログラム
WO2014023231A1 (zh) 宽视场超高分辨率光学成像系统及方法
TW201619910A (zh) 監控系統及其影像處理方法
US7224392B2 (en) Electronic imaging system having a sensor for correcting perspective projection distortion
CN106709894B (zh) 一种图像实时拼接方法及系统
JP7285791B2 (ja) 画像処理装置、および出力情報制御方法、並びにプログラム
WO2021184302A1 (zh) 图像处理方法、装置、成像设备、可移动载体及存储介质
WO2016184131A1 (zh) 基于双摄像头拍摄图像的方法、装置及计算机存储介质
US20160212410A1 (en) Depth triggered event feature
WO2020029877A1 (zh) 多相机拼接的亮度调整方法及便携式终端
KR20200064908A (ko) 제어장치, 촬상장치, 및 기억매체
WO2020259444A1 (zh) 一种图像处理方法及相关设备
WO2022141512A1 (zh) 一种图像拼接方法、装置和计算机可读介质
CN105210362B (zh) 图像调整设备、图像调整方法和图像捕获设备
KR102138333B1 (ko) 파노라마 영상 생성 장치 및 방법
JP7051365B2 (ja) 画像処理装置、画像処理方法、及びプログラム
CN114449130B (zh) 一种多摄像头的视频融合方法及系统
TW201410015A (zh) 多攝影機的整合處理系統及其方法
WO2015141185A1 (ja) 撮像制御装置、撮像制御方法および記録媒体
KR101132976B1 (ko) 복수 개의 카메라를 구비한 모바일 기기, 이를 이용한 디스플레이 표시방법
EP4060982A1 (en) Method and monitoring camera for handling video streams

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20967850

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202080107492.8

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 18259342

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020967850

Country of ref document: EP

Effective date: 20230731