WO2022141512A1 - Image stitching method, apparatus and computer-readable medium - Google Patents
Image stitching method, apparatus and computer-readable medium
- Publication number
- WO2022141512A1 (PCT/CN2020/142401)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- picture
- camera
- exposure compensation
- parameter
- projection transformation
- Prior art date
Classifications
- H04N5/265—Studio circuits: mixing
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G06T7/248—Analysis of motion using feature-based methods involving reference images or patches
- G06T7/33—Determination of transform parameters for the alignment of images (image registration) using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
- H04N23/71—Circuitry for evaluating the brightness variation
- H04N23/76—Circuitry for compensating brightness variation in the scene by influencing the image signals
- H04N7/181—Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
- G06T2207/10016—Video; image sequence
- G06T2207/20221—Image fusion; image merging
- G06T2207/30232—Surveillance
- G06T2207/30244—Camera pose
Definitions
- Embodiments of the present invention relate to the technical field of image processing, and in particular to an image stitching method, apparatus, and computer-readable medium.
- A closed-circuit television (Closed Circuit Television, CCTV) system is a video transmission system in which the video data is transmitted within a fixed, closed loop: the cameras, monitors and recording equipment are directly connected.
- A typical CCTV system is the security camera system, which is widely used in settings such as retail stores, banks and government organizations, and even in home environments.
- FIG. 3 shows the flow of a current image stitching method
- FIG. 4 shows the results of image processing at each stage.
- The process mainly comprises two stages: an image registration stage 10 and an image composition stage 20.
- The image registration stage 10 mainly includes step S101 feature point calculation, step S102 feature point matching, and step S103 homography estimation; the resulting homography matrix is used as the projection transformation parameter in the image composition stage.
- The image composition stage 20 mainly includes step S201 exposure estimation and step S202 image fusion.
- The projection transformation parameters calculated in the image registration stage 10 are used to perform projection transformation on the pictures to be stitched.
- The flow shown in FIG. 3 requires a large amount of computation; in particular, the image registration stage 10 has the highest computational complexity and the longest computation time, and therefore cannot meet the needs of real-time video processing.
- The embodiments of the present invention provide an image stitching method, apparatus, and computer-readable medium that improve the current image stitching process and can greatly shorten the computation time to meet the needs of real-time video processing.
- A first aspect provides an image stitching method, which can be performed by an edge processing device connected to both a first camera and a second camera. The method may include: acquiring a first picture of a current frame captured by the first camera and a second picture of the current frame captured by the second camera, wherein the areas captured in the first picture and the second picture overlap; determining whether a first condition is satisfied, the first condition including that the relative positional relationship between the first camera and the second camera remains unchanged between the capture of the current frame and the previous frame; and, if the first condition is satisfied, acquiring a first projection transformation parameter used when performing projection transformation on the picture of the previous frame captured by the first camera and a second projection transformation parameter used when performing projection transformation on the picture of the previous frame captured by the second camera, and fusing the first picture and the second picture, wherein the first picture is projectively transformed using the first projection transformation parameter and the second picture is projectively transformed using the second projection transformation parameter.
- A second aspect provides an image stitching apparatus including modules for performing the steps of the method provided in the first aspect.
- A third aspect provides an image stitching apparatus, comprising: at least one memory configured to store computer-readable code; and at least one processor configured to invoke the computer-readable code to perform the steps of the method provided in the first aspect.
- A fourth aspect provides a computer-readable medium storing computer-readable instructions which, when executed by a processor, cause the processor to perform the steps of the method provided in the first aspect.
- In this way, the projection transformation parameters of the previous frame can be reused, which avoids the delay caused by recalculation and greatly reduces the computational complexity.
- The pictures taken by different cameras with overlapping areas can be stitched into a picture with a wider field of view, and the multi-camera tracking problem can thus be converted into the relatively mature single-camera tracking problem, which reduces the technical difficulty.
- The pictures taken by the cameras in the same group can be stitched together in real time to obtain a single stitched picture or video stream, which reduces the number of input channels and the workload of manual monitoring.
- Optionally, whether the first condition is satisfied may be determined as follows: if the positions of the first camera and the second camera both remain unchanged when the pictures of the current frame and the previous frame are taken, it is determined that the relative positional relationship between the first camera and the second camera remains unchanged. This provides a simple way to determine that the positional relationship between the cameras has not changed.
- Optionally, the proportion of all feature points in the first picture whose gradient direction has changed relative to the third picture (a first proportion) is determined; if the first proportion is less than a preset first proportion threshold, it is determined that the position of the first camera remained unchanged between the capture of the current frame and the previous frame. Likewise, the proportion of all feature points in the second picture whose gradient direction has changed relative to the fourth picture (a second proportion) is determined; if the second proportion is less than a preset second proportion threshold, it is determined that the position of the second camera remained unchanged between the capture of the current frame and the previous frame.
- Optionally, the proportion of all feature points in the first picture that have moved relative to the third picture (a third proportion) is determined; if the third proportion is less than a preset third proportion threshold, it is determined that the position of the first camera remained unchanged between the capture of the current frame and the previous frame. Likewise, the proportion of all feature points in the second picture that have moved relative to the fourth picture (a fourth proportion) is determined; if the fourth proportion is less than a preset fourth proportion threshold, it is determined that the position of the second camera remained unchanged.
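The moved-feature-point check above can be sketched in a few lines of Python. This is only an illustration of the decision rule, not the claimed implementation; the function name, the pixel tolerance `move_eps`, and the threshold value are invented for the sketch.

```python
import math

def camera_is_static(prev_points, curr_points, move_eps=1.0, ratio_threshold=0.1):
    """Decide whether a camera stayed still between two frames by counting
    how many matched feature points moved farther than `move_eps` pixels.

    prev_points / curr_points: matched (x, y) feature coordinates from the
    previous and current frame, in the same order.
    """
    moved = 0
    for (x0, y0), (x1, y1) in zip(prev_points, curr_points):
        if math.hypot(x1 - x0, y1 - y0) > move_eps:
            moved += 1
    proportion = moved / len(prev_points)
    # The camera is considered static when the proportion of moved
    # feature points stays below the preset threshold.
    return proportion < ratio_threshold

# A static scene: every point jitters by less than one pixel.
prev_pts = [(10.0, 10.0), (50.0, 20.0), (30.0, 40.0), (80.0, 70.0)]
curr_pts = [(10.2, 10.1), (50.1, 19.9), (30.0, 40.3), (80.2, 70.1)]
print(camera_is_static(prev_pts, curr_pts))  # True
```

Running the same check per camera (first vs. third picture, second vs. fourth picture) yields the third and fourth proportions described above.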
- Optionally, it is determined whether a second condition is satisfied. The second condition includes: the brightness change of the first picture relative to the third picture is less than a preset first brightness change threshold, and the brightness change of the second picture relative to the fourth picture is less than a preset second brightness change threshold. If the second condition is satisfied, a first exposure compensation parameter used when performing exposure compensation on the third picture and a second exposure compensation parameter used when performing exposure compensation on the fourth picture are acquired; when fusing the first picture and the second picture, exposure compensation is performed on the first picture using the first exposure compensation parameter and on the second picture using the second exposure compensation parameter.
- In this way, the exposure compensation parameters of the previous frame can be reused, which further shortens the duration of image stitching.
- Optionally, if the first condition is not satisfied, a third projection transformation parameter to be used for performing projection transformation on the first picture and a fourth projection transformation parameter to be used for performing projection transformation on the second picture are calculated; the third projection transformation parameter is then used to perform projection transformation on the first picture, and the fourth projection transformation parameter is used to perform projection transformation on the second picture.
- Optionally, if the second condition is not satisfied, a third exposure compensation parameter to be used for performing exposure compensation on the first picture and a fourth exposure compensation parameter to be used for performing exposure compensation on the second picture are calculated; the third exposure compensation parameter is then used to perform exposure compensation on the first picture, and the fourth exposure compensation parameter is used to perform exposure compensation on the second picture. The steps of calculating the third and fourth projection transformation parameters and the steps of calculating the third and fourth exposure compensation parameters are performed in parallel.
- In current image stitching methods, exposure estimation and the calculation of projection transformation parameters are performed serially; here the two are executed in parallel, so that even when the parameters of the previous frame cannot be reused, the overall execution time of the image stitching method is shortened.
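The parallel execution of the two independent estimations can be sketched with Python's standard `concurrent.futures`. The two worker functions are placeholders (their names and return values are invented for illustration); in a real pipeline they would hold the homography estimation and exposure estimation respectively.

```python
from concurrent.futures import ThreadPoolExecutor

def compute_projection_params(pic_a, pic_b):
    # Placeholder for homography estimation (feature detection,
    # matching, and model fitting would go here).
    return "homographies"

def compute_exposure_params(pic_a, pic_b):
    # Placeholder for exposure/gain estimation over the overlap region.
    return "gains"

def register_and_estimate(pic_a, pic_b):
    # The two computations share no intermediate state, so they can be
    # dispatched concurrently instead of serially.
    with ThreadPoolExecutor(max_workers=2) as pool:
        proj_future = pool.submit(compute_projection_params, pic_a, pic_b)
        expo_future = pool.submit(compute_exposure_params, pic_a, pic_b)
        return proj_future.result(), expo_future.result()

print(register_and_estimate(None, None))  # ('homographies', 'gains')
```

With CPU-bound implementations a process pool (or releasing the GIL inside native code, as OpenCV does) would be the appropriate substitute for the thread pool.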
- Figure 1 is a schematic diagram of a monitoring wall in a CCTV system.
- FIG. 2 is a schematic diagram of stitching together pictures taken by different cameras.
- FIG. 3 shows the flow of a current image stitching method.
- FIG. 4 shows a picture obtained by stitching the image stitching method shown in FIG. 3 .
- FIG. 5 is a flowchart of an image stitching method provided by an embodiment of the present invention.
- FIG. 6 shows the connection relationship between the camera and the edge processing device when the image stitching method provided by the embodiment of the present invention is adopted.
- FIG. 7 is a schematic structural diagram of an image stitching apparatus provided by an embodiment of the present invention.
- FIG. 8 is a schematic diagram of an embodiment of the present invention applied to a CCTV system.
- FIG. 9 is a schematic diagram of an embodiment of the present invention applied to a traffic monitoring system.
- FIGS. 10A to 10C illustrate deployment manners of the cameras in the embodiment of the present invention.
- the term “including” and variations thereof represent open-ended terms meaning “including but not limited to”.
- the term “based on” means “based at least in part on”.
- the terms “one embodiment” and “an embodiment” mean “at least one embodiment.”
- the term “another embodiment” means “at least one other embodiment.”
- the terms “first”, “second”, etc. may refer to different or the same objects. Other definitions, whether explicit or implicit, may be included below. The definition of a term is consistent throughout the specification unless the context clearly dictates otherwise.
- the image registration stage takes the longest time due to the high computational complexity.
- the following table is a comparison of the computational complexity of each stage in the image stitching process:
- If the computational complexity of the image registration stage can be reduced, the time required for that stage decreases, thereby reducing the duration of the entire image stitching process.
- The embodiments of the present invention consider the following scenario: the relative positional relationship between the cameras remains unchanged between the capture of the previous frame and the current frame, so that, between the two frames, the angle at which the same camera views the photographed scene does not change. It can therefore be considered that the projection transformation parameters have not changed.
- In that case, the projection transformation parameters of the previous frame can be reused to process the picture of the current frame without recalculation, which greatly reduces the computational complexity and shortens the processing delay.
- The embodiments of the present invention are described in detail below. For the sake of clarity, the stitching of two pictures is taken as an example; the stitching principle for multiple pictures is the same.
- the image stitching method 50 may include the following steps:
- S500 Acquire the two pictures of the current frame to be stitched: a first picture and a second picture. The first picture is shot by the first camera and the second picture is shot by the second camera.
- Since the positions of the two cameras remain unchanged during continuous shooting, and the fields of view of the two cameras overlap, it can be considered that the areas captured in the two pictures obtained when the two cameras shoot the same frame overlap.
- S501 Determine whether the first condition is satisfied.
- The first condition includes: when taking the pictures of the current frame and the previous frame, the relative positional relationship between the first camera and the second camera remains unchanged. If the first condition is satisfied, step S502 is executed; otherwise, step S506 is executed.
- When judging whether the first condition is satisfied in step S501, it can be considered that, if the positions of the first camera and the second camera both remain unchanged when the pictures of the current frame and the previous frame are taken, the relative positional relationship between the first camera and the second camera remains unchanged.
- To determine whether the position of each camera remains unchanged, various schemes can be adopted, including the following optional scheme 1 and optional scheme 2.
- Optional scheme 1: the proportion of all feature points in the first picture whose gradient direction has changed relative to the third picture (a first proportion) is determined. If the first proportion is less than a preset first proportion threshold, it is determined that the position of the first camera remained unchanged between the capture of the current frame and the previous frame. Similarly, the proportion of all feature points in the second picture whose gradient direction has changed relative to the fourth picture (a second proportion) is determined; if the second proportion is less than a preset second proportion threshold, it is determined that the position of the second camera remained unchanged between the capture of the current frame and the previous frame. Optionally, the first proportion threshold and the second proportion threshold are equal.
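The gradient-direction variant of the check can be sketched as follows. Again this is an illustrative sketch only: the angular tolerance `angle_eps`, the threshold value, and the function name are invented, and how the gradient direction at each feature point is computed is left outside the sketch.

```python
import math

def gradient_directions_stable(prev_grads, curr_grads,
                               angle_eps=0.1, ratio_threshold=0.1):
    """Check camera stillness via feature-point gradient directions.

    prev_grads / curr_grads: gradient direction (radians) at each
    recorded feature point in the previous and current frame.
    """
    changed = 0
    for a0, a1 in zip(prev_grads, curr_grads):
        # Smallest angular difference, wrapped into [-pi, pi].
        diff = math.atan2(math.sin(a1 - a0), math.cos(a1 - a0))
        if abs(diff) > angle_eps:
            changed += 1
    # Stable when only a small fraction of directions changed.
    return changed / len(prev_grads) < ratio_threshold

print(gradient_directions_stable([0.0, 1.5, 3.0], [0.01, 1.5, 3.0]))  # True
```

The same routine is applied once per camera; both cameras passing the check means the first condition holds and step S502 can run.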
- S502 Acquire first projection transformation parameters used when performing projection transformation on the picture of the previous frame captured by the first camera (referred to as the "third picture") and second projection transformation parameters used when performing projection transformation on the picture of the previous frame captured by the second camera (referred to as the "fourth picture").
- the projection transformation parameter can be a homography matrix.
- S506 Calculate a third projection transformation parameter to be used for performing projection transformation on the first picture and a fourth projection transformation parameter to be used for performing projection transformation on the second picture. Since the projection transformation parameters of the previous frame cannot be reused, the projection transformation parameters of the current frame must be recalculated. Optionally, the overlapping area between the two pictures and the spatial mapping relationship between them can be found first, and then the relationship between the group of pictures and the stitched picture can be obtained. The feature points of each picture in the group are calculated separately; usable methods include, but are not limited to, Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), and Oriented FAST and Rotated BRIEF (ORB).
- These feature points are independent of the scale and angle of the picture and can be used as reference points. The feature points of the pictures are matched to find the overlapping area between the pictures and a set of corresponding point pairs, and these corresponding points are used to calculate the homography matrix between the two pictures.
- The calculated homography matrix is used as the projection transformation parameter to perform the projection transformation.
- A homography is a projective transformation used in projective geometry; it describes the change in the perceived position of an object when the observer's viewing angle changes.
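The role of the homography matrix can be shown with a small numeric sketch: a 3x3 matrix H maps a pixel (x, y) of one picture into the coordinate frame of the stitched picture via homogeneous coordinates. The example matrix below (a pure translation) is invented for illustration; real homographies estimated from matched feature points also encode rotation and perspective.

```python
def apply_homography(H, x, y):
    """Map point (x, y) through a 3x3 homography H (row-major nested lists)."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w  # divide out the homogeneous scale

# The simplest homography, a translation by (100, 0): e.g. shifting the
# second picture 100 px to the right before fusing it with the first.
H = [[1.0, 0.0, 100.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, 20.0, 30.0))  # (120.0, 30.0)
```

Applying this mapping to every pixel of a picture is exactly the projection transformation performed before fusion.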
- When performing image fusion, exposure compensation optionally also needs to be performed. Therefore, optionally, the exposure compensation parameters can be obtained through the following steps.
- The second condition is used to determine whether the brightness change between the two consecutive frames is sufficiently small; if so, the current frame can reuse the exposure compensation parameters of the previous frame.
- the second condition may include: the brightness change of the first picture relative to the third picture is less than a preset first brightness change threshold, and the brightness change of the second picture relative to the fourth picture is less than a preset second brightness change threshold.
- the first brightness change threshold is equal to the second brightness change threshold.
- Optionally, the brightness of each frame of a video stream is compared at the positions of the previously recorded feature points. If the degree of brightness change exceeds the threshold, the exposure of the camera shooting the video stream may have changed, and the exposure compensation parameters need to be re-estimated.
- If the second condition is satisfied, step S505 is performed; otherwise, step S507 is performed.
- S505 Acquire a first exposure compensation parameter used when performing exposure compensation on the third picture of the previous frame and a second exposure compensation parameter used when performing exposure compensation on the fourth picture of the previous frame.
- S507 Calculate a third exposure compensation parameter to be used for performing exposure compensation on the first picture and a fourth exposure compensation parameter to be used for performing exposure compensation on the second picture. Since the exposure compensation parameters of the previous frame cannot be reused, the exposure compensation parameters of the current frame must be recalculated.
- Finally, the first picture and the second picture are fused through the following steps.
- The projection transformation parameters obtained in the previous steps are used to perform projection transformation on the two pictures respectively, and the exposure compensation parameters obtained in the previous steps are used to perform exposure compensation on the two pictures respectively. The first picture and the second picture after projection transformation and exposure compensation are then fused, stitching the two pictures into a larger, complete picture.
- Optionally, a method such as multi-band blending can be used to perform the image fusion; exposure compensation remains an optional processing step.
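The effect of blending in the overlap region can be illustrated with a deliberately simplified stand-in: a single linear ramp across the seam of two overlapping scanlines. Multi-band blending, as named above, does this per frequency band rather than once; this sketch only conveys the weighting idea and its names are invented.

```python
def blend_overlap(row_a, row_b, overlap):
    """Fuse two horizontally overlapping scanlines (lists of gray values).

    The last `overlap` pixels of row_a coincide with the first `overlap`
    pixels of row_b; blend them with a linear ramp so the seam is smooth.
    """
    left = row_a[:-overlap]
    right = row_b[overlap:]
    seam = []
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)          # weight ramps from A toward B
        a = row_a[len(row_a) - overlap + i]
        b = row_b[i]
        seam.append((1 - w) * a + w * b)
    return left + seam + right

# Two 6-pixel scanlines sharing a 2-pixel overlap -> a 10-pixel stitched line
# whose seam values lie between the two source brightnesses.
out = blend_overlap([100] * 6, [50] * 6, 2)
print(len(out))  # 10
```

Extending the ramp to two dimensions and to multiple frequency bands yields the multi-band blending referred to in the text.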
- Optionally, the steps of acquiring the projection transformation parameters and acquiring the exposure compensation parameters are performed in parallel.
- When the first condition is satisfied, the picture of the current frame can be processed directly using the projection transformation parameters of the previous frame, so the computational complexity of the whole process is greatly reduced and the processing time is shortened.
- The researchers of the present invention found that the calculation of the projection transformation parameters and the calculation of the exposure compensation parameters are relatively independent, so the two processes are executed in parallel. Even if the aforementioned first condition is not satisfied, the step of calculating the projection transformation parameters and the step of obtaining the exposure compensation parameters can still be performed in parallel, which, compared with the serial processing of previous image stitching methods, partially shortens the processing time of the entire stitching process.
- In summary, the image stitching method provided by the embodiment of the present invention includes three processing steps: image registration, exposure estimation, and image fusion. Usually, all three steps need to be performed only for the first frame; subsequently, only image fusion needs to run, unless the positional relationship between the cameras or the exposure conditions change, in which case the corresponding parameters are recalculated.
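The per-frame control flow summarized above can be sketched as a small loop. The predicates stand in for the feature-point and brightness checks described earlier, and the string parameter values are placeholders; both are invented for the sketch.

```python
def stitch_stream(frames, cameras_static, brightness_stable):
    """Sketch of the per-frame control flow: registration and exposure
    estimation run once, then their parameters are reused while the
    respective conditions hold.

    frames: list of (first_picture, second_picture) pairs.
    cameras_static(i) / brightness_stable(i): per-frame predicates,
    stand-ins for the first and second conditions.
    """
    proj_params = expo_params = None
    log = []
    for i, (pic1, pic2) in enumerate(frames):
        if proj_params is None or not cameras_static(i):
            proj_params = "register"      # full homography estimation
            log.append(("register", i))
        if expo_params is None or not brightness_stable(i):
            expo_params = "estimate"      # exposure compensation estimation
            log.append(("exposure", i))
        # Image fusion runs every frame with whatever parameters are current.
        log.append(("fuse", i))
    return log

# Three frames with both conditions holding after frame 0: registration and
# exposure estimation each happen exactly once.
log = stitch_stream([(None, None)] * 3, lambda i: i > 0, lambda i: i > 0)
print(log)
```

The log shows the claimed behavior: the expensive stages run on the first frame only, and every later frame performs fusion alone.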
- It has been verified by experiments that the image stitching method provided by the embodiment of the present invention achieves a computing speed roughly 60 times higher than that of the traditional image stitching method: running the provided method on an edge processing device takes 30 ms or less, whereas performing image stitching with the traditional method on the same edge processing device takes about 2000 ms.
- An optional implementation is shown in FIG. 6: each camera 61, 62, ... is connected to the edge processing device, and the edge processing device executes the image stitching method 50 to stitch the pictures from the cameras.
- the method provided by the embodiment of the present invention can be applied to various scenarios where image stitching needs to be performed. The following are respectively explained:
- Multi-Target Multi-Camera Tracking (MTMC) system.
- The technical difficulty in an MTMC system is how to find the same target in the views of different cameras. Because of differing shooting angles and camera parameters, the features of the same target captured by different cameras are not the same, whereas the technology for tracking multiple targets in pictures captured by a single camera is relatively mature.
- The image stitching method 50 provided by the embodiment of the present invention performs real-time image stitching: pictures taken by multiple cameras at the same location are stitched together and the field of view is expanded, thereby overcoming the technical difficulty of tracking targets across cameras.
- As shown in FIG. 2, the image stitching method 50 can stitch pictures taken by different cameras, each with a certain field of view and overlapping areas, into a picture with a wider field of view; the frame-by-frame pictures collected in real time then form a video with a wider field of view.
- By arranging the position of each camera reasonably, the entire monitoring area is covered. The problem of multi-camera tracking is thereby transformed into the relatively mature problem of single-camera tracking.
- Pictures taken by cameras in adjacent areas can be stitched together.
- The surveillance cameras can be grouped by region, with the surveillance areas of the cameras in the same group adjacent and mutually overlapping.
- Pictures taken by cameras in the same group are stitched together in real time to obtain a more natural stitched picture or video data stream.
- As shown in FIG. 8, several cameras 61, 62, ... are connected to the edge processing device, which stitches their pictures into a single video stream.
- The stitched video stream is sent (optionally through a gateway) to the router 80 and the Digital Video Recorder (DVR) 90, and the DVR 90 finally presents the video stream to the user or displays it on the monitoring wall.
- In this way, the number of input channels can be reduced; the spatial positional relationship between cameras is integrated, and the areas captured by multiple cameras are merged into one complete area, which suits target recognition and tracking applications; in addition, the workload of manual monitoring is also greatly reduced.
- With the method provided by the embodiment of the present invention, cameras are assembled into groups through picture stitching and picture fusion; the approach is compatible with cameras of different types and different fields of view, and achieves monitoring effects that traditional camera equipment cannot. As shown in FIG. 10A, FIG. 10B and FIG. 10C, various methods can be used to deploy cameras to meet the requirements of different application scenarios.
- Cameras facing the same direction can be deployed in the horizontal direction, the vertical direction, or both, which expands the width and height of the monitoring picture and covers a larger viewing angle.
- The monitoring area can be covered from multiple angles in the manner shown in FIG. 10B to reduce the influence of blind spots.
- The same function as a panoramic camera or a wide-angle camera can be implemented in the manner shown in FIG. 10C.
- In this way, the desired field of view can be achieved by combining existing cameras according to actual needs.
- the image stitching method 50 provided by the embodiment of the present invention can flexibly adjust the angle and direction of the cameras according to actual requirements, as long as the areas captured by the cameras in the same group overlap.
- the pictures of video streams with different angles are spliced together, and the shape of the spliced pictures is determined according to the field of view of the combined cameras, so that respective surveillance pictures with special shapes can be generated.
- the functions of a wide-angle camera or even a panoramic camera can be realized by using the existing camera, and the deployment of the camera is flexible, and the angle desired by the user can be obtained instead of a fixed angle.
- the angle of the camera is fixed, and different application scenarios require different numbers of cameras.
- with the picture splicing method 50 provided by the embodiment of the present invention, the integration of video streams can be realized and the flexibility of the application can be improved.
- the existing "one camera, one screen" approach can be transformed into a "multiple cameras, one screen" approach.
- the embodiment of the present invention further provides an image stitching apparatus 70, which can execute the aforementioned image stitching method 50.
- the image stitching apparatus 70 may be implemented as a network of computer processors to perform processing on the image stitching apparatus 70 side in the image stitching method 50 in the embodiment of the present invention.
- the image stitching device 70 can also be a single computer, a single-chip microcomputer or a processing chip as shown in FIG. 3, and can be deployed in the aforementioned edge processing device 70.
- Image stitching device 70 may include at least one memory 7001, which includes a computer-readable medium, such as random access memory (RAM).
- Image stitching apparatus 70 also includes at least one processor 7002 coupled with at least one memory 7001.
- Computer-executable instructions are stored in at least one memory 7001 and, when executed by at least one processor 7002, can cause at least one processor 7002 to perform the steps described herein.
- the at least one memory 7001 shown in FIG. 3 may contain an image stitching program 701, so that the at least one processor 7002 executes the processing on the image stitching apparatus 70 side in the image stitching method 50 described in the embodiment of the present invention.
- Image stitching program 701 may include:
- a picture acquisition module 7011, configured to acquire a first picture of the current frame captured by the first camera and a second picture of the current frame captured by the second camera, wherein the areas captured in the first picture and the second picture overlap;
- a first judgment module 7012, configured to judge whether a first condition is satisfied, the first condition including: when the pictures of the current frame and the previous frame are captured, the relative positional relationship between the first camera and the second camera remains unchanged;
- a first parameter acquisition module 7013, configured to, if the first condition is satisfied, acquire the first projection transformation parameter used when performing projection transformation on the third picture of the previous frame captured by the first camera and the second projection transformation parameter used when performing projection transformation on the fourth picture of the previous frame captured by the second camera;
- a picture processing module 7014, configured to fuse the first picture and the second picture, wherein the first picture is projectively transformed using the first projection transformation parameter and the second picture is projectively transformed using the second projection transformation parameter.
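The caching behaviour of these modules can be illustrated with a minimal sketch (Python with NumPy; `estimate_params` and `warp_and_blend` are hypothetical stand-ins for the actual feature-matching/homography estimation and the fusion step, which the embodiment does not prescribe in this form):

```python
import numpy as np

def stitch(frame1, frame2, cache, condition_met, estimate_params, warp_and_blend):
    """Fuse two overlapping frames, reusing the previous frame's
    projection transformation parameters when the first condition holds
    (the cameras did not move relative to each other); otherwise the
    parameters are recomputed and cached for the next frame."""
    if condition_met and cache.get("params") is not None:
        H1, H2 = cache["params"]                      # reuse (S502)
    else:
        H1, H2 = estimate_params(frame1, frame2)      # recompute (S506)
        cache["params"] = (H1, H2)
    return warp_and_blend(frame1, frame2, H1, H2)     # fuse (S503)
```

Because the expensive estimation step is skipped on most frames of a fixed installation, the per-frame processing cost drops considerably.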
- the first judgment module 7012 is specifically configured to: if the positions of both the first camera and the second camera remain unchanged when the pictures of the current frame and the previous frame are captured, determine that the relative positional relationship between the first camera and the second camera remains unchanged.
- the first judgment module 7012 is specifically configured to: determine the first proportion that the number of feature points whose gradient direction changes in the first picture relative to the third picture accounts for among the number of all feature points; if the first proportion is less than a preset first proportion threshold, determine that the position of the first camera remained unchanged when the current frame and the previous frame were captured; determine the second proportion that the number of feature points whose gradient direction changes in the second picture relative to the fourth picture accounts for among the number of all feature points; and if the second proportion is less than a preset second proportion threshold, determine that the position of the second camera remained unchanged when the current frame and the previous frame were captured.
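The gradient-direction criterion can be sketched as follows (an assumed NumPy implementation; the angle tolerance `angle_tol` and the way feature points are sampled are illustrative choices not specified by the embodiment):

```python
import numpy as np

def gradient_directions(img, pts):
    """Gradient direction (radians) of img at feature-point
    coordinates pts, given as an (N, 2) integer array of (row, col)."""
    gy, gx = np.gradient(img.astype(float))
    r, c = pts[:, 0], pts[:, 1]
    return np.arctan2(gy[r, c], gx[r, c])

def camera_static_by_gradient(prev_img, curr_img, pts,
                              angle_tol=0.1, ratio_threshold=0.2):
    """A camera is judged static when the proportion of feature points
    whose gradient direction changed by more than angle_tol stays
    below the preset proportion threshold."""
    d_prev = gradient_directions(prev_img, pts)
    d_curr = gradient_directions(curr_img, pts)
    # wrap angle differences into [-pi, pi] before thresholding
    diff = np.abs(np.angle(np.exp(1j * (d_curr - d_prev))))
    return float(np.mean(diff > angle_tol)) < ratio_threshold
```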
- the first judgment module 7012 is specifically configured to: determine the third proportion that the number of feature points that moved in the first picture relative to the third picture accounts for among the number of all feature points; if the third proportion is less than a preset third proportion threshold, determine that the position of the first camera remained unchanged when the current frame and the previous frame were captured; determine the fourth proportion that the number of feature points that moved in the second picture relative to the fourth picture accounts for among the number of all feature points; and if the fourth proportion is less than a preset fourth proportion threshold, determine that the position of the second camera remained unchanged when the current frame and the previous frame were captured.
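The moved-feature-point criterion can be sketched similarly (assumed NumPy implementation; the pixel tolerance `move_tol` is an illustrative choice, and the matched point arrays are assumed to come from a prior feature-matching step):

```python
import numpy as np

def camera_is_static(prev_pts, curr_pts, move_tol=1.0, ratio_threshold=0.2):
    """prev_pts / curr_pts: (N, 2) arrays of matched feature-point
    coordinates from the previous and current frame of one camera.
    A point counts as moved when its displacement exceeds move_tol
    pixels; the camera is considered static when the proportion of
    moved points is below the preset proportion threshold."""
    displacement = np.linalg.norm(curr_pts - prev_pts, axis=1)
    return float(np.mean(displacement > move_tol)) < ratio_threshold

def first_condition(prev1, curr1, prev2, curr2):
    """The relative positional relationship is unchanged only when
    both cameras pass the per-camera check."""
    return camera_is_static(prev1, curr1) and camera_is_static(prev2, curr2)
```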
- the image stitching apparatus 70 further includes a second judgment module 7015, configured to judge whether a second condition is satisfied, the second condition including: the brightness change of the first picture relative to the third picture is smaller than a preset first brightness change threshold, and the brightness change of the second picture relative to the fourth picture is smaller than a preset second brightness change threshold; and a second parameter acquisition module 7016, configured to, if the second condition is satisfied, acquire the first exposure compensation parameter used when performing exposure compensation on the third picture and the second exposure compensation parameter used when performing exposure compensation on the fourth picture.
- the picture processing module 7014 is specifically configured to fuse the first picture and the second picture, wherein the first exposure compensation parameter is used to perform exposure compensation on the first picture and the second exposure compensation parameter is used to perform exposure compensation on the second picture.
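The reuse of exposure compensation parameters can be illustrated as follows (a sketch assuming a simple gain model that equalises mean brightness in the two pictures; this model and the thresholds are illustrative assumptions, not the patented computation):

```python
import numpy as np

def brightness_change(prev_img, curr_img):
    """Mean-brightness change between consecutive frames of one camera."""
    return abs(float(curr_img.mean()) - float(prev_img.mean()))

def exposure_params(img1, img2, prev1, prev2, cache,
                    thresh1=2.0, thresh2=2.0):
    """Second condition (S504/S505): when neither picture's brightness
    changed by more than its threshold, reuse the cached exposure
    compensation gains; otherwise recompute them (S507)."""
    if (brightness_change(prev1, img1) < thresh1
            and brightness_change(prev2, img2) < thresh2
            and "gains" in cache):
        return cache["gains"]                       # reuse (S505)
    target = (float(img1.mean()) + float(img2.mean())) / 2.0
    gains = (target / float(img1.mean()), target / float(img2.mean()))
    cache["gains"] = gains                          # store for the next frame
    return gains
```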
- the image stitching device 70 further includes a first parameter calculation module 7017, configured to calculate, if the first condition is not satisfied, the third projection transformation parameter that needs to be used to perform projection transformation on the first picture and the fourth projection transformation parameter that needs to be used to perform projection transformation on the second picture; the picture processing module 7014 is specifically configured to fuse the first picture and the second picture, wherein the third projection transformation parameter is used to perform projection transformation on the first picture and the fourth projection transformation parameter is used to perform projection transformation on the second picture;
- the image stitching device 70 further includes a second parameter calculation module 7018, configured to calculate, if the second condition is not satisfied, the third exposure compensation parameter that needs to be used to perform exposure compensation on the first picture and the fourth exposure compensation parameter that needs to be used to perform exposure compensation on the second picture; the picture processing module 7014 is specifically configured to fuse the first picture and the second picture, wherein the third exposure compensation parameter is used to perform exposure compensation on the first picture and the fourth exposure compensation parameter is used to perform exposure compensation on the second picture.
- the operation of the first parameter calculation module 7017 calculating the third projection transformation parameter and the fourth projection transformation parameter and the operation of the second parameter calculation module 7018 calculating the third exposure compensation parameter and the fourth exposure compensation parameter are performed in parallel.
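The parallel computation of the two parameter sets can be sketched with Python's standard `concurrent.futures` (the two estimator functions are hypothetical placeholders for the projection and exposure calculations):

```python
from concurrent.futures import ThreadPoolExecutor

def recompute_all(img1, img2, projection_fn, exposure_fn):
    """Run the projection transformation estimation (S506) and the
    exposure compensation estimation (S507) concurrently, so that the
    per-frame latency is bounded by the slower of the two calculations
    rather than by their sum."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        proj = pool.submit(projection_fn, img1, img2)
        expo = pool.submit(exposure_fn, img1, img2)
        return proj.result(), expo.result()
```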
- the above modules can also be regarded as functional modules implemented by hardware for realizing the functions involved when the image splicing apparatus 70 executes the image splicing method, for example by burning the control logic of each process involved in the method into a chip such as a Field-Programmable Gate Array (FPGA) or a Complex Programmable Logic Device (CPLD) in advance, with these chips or devices performing the functions of the above modules.
- the image stitching apparatus 70 may further include a communication module 7003, which is used for communication between the image stitching apparatus 70 and other devices, such as communication with a camera.
- embodiments of the present invention may include apparatuses having architectures different from those shown in FIG. 7 .
- the above structure is only exemplary, and is used to explain the image stitching method 50 provided by the embodiment of the present invention.
- the at least one processor 7002 described above may include a microprocessor, an application specific integrated circuit (ASIC), a digital signal processor (DSP), a central processing unit (CPU), a graphics processing unit (GPU), a state machine, and the like.
- Examples of computer-readable media include, but are not limited to, floppy disks, CD-ROMs, magnetic disks, memory chips, ROM, RAM, ASICs, configured processors, all optical media, all magnetic tapes or other magnetic media, or any other medium from which a computer processor can read instructions.
- various other forms of computer-readable media can transmit or carry instructions to a computer, including routers, private or public networks, or other wired and wireless transmission devices or channels. Instructions can include code in any computer programming language, including C, C++, C#, Visual Basic, Java, and JavaScript.
- the embodiments of the present invention further provide a computer-readable medium, where computer-readable instructions are stored on the computer-readable medium, and the computer-readable instructions, when executed by a processor, cause the processor to execute the foregoing image stitching method.
- Examples of computer-readable media include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), magnetic tapes, non-volatile memory cards and ROMs.
- the computer readable instructions may be downloaded from a server computer or cloud over a communications network.
Claims (16)
- An image stitching method (50), characterized by comprising: acquiring (S500) a first picture of a current frame captured by a first camera and a second picture of the current frame captured by a second camera, wherein the areas captured in the first picture and the second picture overlap; judging (S501) whether a first condition is satisfied, the first condition comprising: when the pictures of the current frame and the previous frame are captured, the relative positional relationship between the first camera and the second camera remains unchanged; if the first condition is satisfied, acquiring (S502) a first projection transformation parameter used when performing projection transformation on a third picture of the previous frame captured by the first camera and a second projection transformation parameter used when performing projection transformation on a fourth picture of the previous frame captured by the second camera; and fusing (S503) the first picture and the second picture, wherein projection transformation is performed on the first picture using the first projection transformation parameter and on the second picture using the second projection transformation parameter.
- The method according to claim 1, characterized in that judging (S501) whether the first condition is satisfied comprises: if the positions of both the first camera and the second camera remain unchanged when the pictures of the current frame and the previous frame are captured, determining that the relative positional relationship between the first camera and the second camera remains unchanged.
- The method according to claim 2, characterized in that judging (S501) whether the first condition is satisfied comprises: determining a first proportion that the number of feature points whose gradient direction changes in the first picture relative to the third picture accounts for among the number of all feature points; if the first proportion is less than a preset first proportion threshold, determining that the position of the first camera remained unchanged when the current frame and the previous frame were captured; determining a second proportion that the number of feature points whose gradient direction changes in the second picture relative to the fourth picture accounts for among the number of all feature points; and if the second proportion is less than a preset second proportion threshold, determining that the position of the second camera remained unchanged when the current frame and the previous frame were captured.
- The method according to claim 2, characterized in that judging (S501) whether the first condition is satisfied comprises: determining a third proportion that the number of feature points that moved in the first picture relative to the third picture accounts for among the number of all feature points; if the third proportion is less than a preset third proportion threshold, determining that the position of the first camera remained unchanged when the current frame and the previous frame were captured; determining a fourth proportion that the number of feature points that moved in the second picture relative to the fourth picture accounts for among the number of all feature points; and if the fourth proportion is less than a preset fourth proportion threshold, determining that the position of the second camera remained unchanged when the current frame and the previous frame were captured.
- The method according to claim 1, characterized by further comprising: judging (S504) whether a second condition is satisfied, the second condition comprising: the brightness change of the first picture relative to the third picture is smaller than a preset first brightness change threshold, and the brightness change of the second picture relative to the fourth picture is smaller than a preset second brightness change threshold; and if the second condition is satisfied, acquiring (S505) a first exposure compensation parameter used when performing exposure compensation on the third picture and a second exposure compensation parameter used when performing exposure compensation on the fourth picture; wherein fusing (S503) the first picture and the second picture comprises: fusing the first picture and the second picture, wherein exposure compensation is performed on the first picture using the first exposure compensation parameter and on the second picture using the second exposure compensation parameter.
- The method according to claim 5, characterized by further comprising: if the first condition is not satisfied, calculating (S506) a third projection transformation parameter that needs to be used to perform projection transformation on the first picture and a fourth projection transformation parameter that needs to be used to perform projection transformation on the second picture, in which case fusing (S503) the first picture and the second picture comprises: fusing the first picture and the second picture, wherein projection transformation is performed on the first picture using the third projection transformation parameter and on the second picture using the fourth projection transformation parameter; and if the second condition is not satisfied, calculating (S507) a third exposure compensation parameter that needs to be used to perform exposure compensation on the first picture and a fourth exposure compensation parameter that needs to be used to perform exposure compensation on the second picture, in which case fusing (S503) the first picture and the second picture comprises: fusing the first picture and the second picture, wherein exposure compensation is performed on the first picture using the third exposure compensation parameter and on the second picture using the fourth exposure compensation parameter; wherein the step (S506) of calculating the third projection transformation parameter and the fourth projection transformation parameter and the step (S507) of calculating the third exposure compensation parameter and the fourth exposure compensation parameter are executed in parallel.
- The method according to claim 1, characterized in that the first camera and the second camera are cameras in a closed-circuit television (CCTV) system and the method is executed by the same edge processing device to which the first camera and the second camera are connected; or the first camera and the second camera are cameras in a roadside parking system and the method is executed by the same edge processing device to which the first camera and the second camera are connected.
- An image stitching apparatus (70), characterized by comprising: a picture acquisition module (7011), configured to acquire a first picture of a current frame captured by a first camera and a second picture of the current frame captured by a second camera, wherein the areas captured in the first picture and the second picture overlap; a first judgment module (7012), configured to judge whether a first condition is satisfied, the first condition comprising: when the pictures of the current frame and the previous frame are captured, the relative positional relationship between the first camera and the second camera remains unchanged; a first parameter acquisition module (7013), configured to, if the first condition is satisfied, acquire a first projection transformation parameter used when performing projection transformation on a third picture of the previous frame captured by the first camera and a second projection transformation parameter used when performing projection transformation on a fourth picture of the previous frame captured by the second camera; and a picture processing module (7014), configured to fuse the first picture and the second picture, wherein projection transformation is performed on the first picture using the first projection transformation parameter and on the second picture using the second projection transformation parameter.
- The apparatus according to claim 8, characterized in that the first judgment module (7012) is specifically configured to: if the positions of both the first camera and the second camera remain unchanged when the pictures of the current frame and the previous frame are captured, determine that the relative positional relationship between the first camera and the second camera remains unchanged.
- The apparatus according to claim 9, characterized in that the first judgment module (7012) is specifically configured to: determine a first proportion that the number of feature points whose gradient direction changes in the first picture relative to the third picture accounts for among the number of all feature points; if the first proportion is less than a preset first proportion threshold, determine that the position of the first camera remained unchanged when the current frame and the previous frame were captured; determine a second proportion that the number of feature points whose gradient direction changes in the second picture relative to the fourth picture accounts for among the number of all feature points; and if the second proportion is less than a preset second proportion threshold, determine that the position of the second camera remained unchanged when the current frame and the previous frame were captured.
- The apparatus according to claim 9, characterized in that the first judgment module (7012) is specifically configured to: determine a third proportion that the number of feature points that moved in the first picture relative to the third picture accounts for among the number of all feature points; if the third proportion is less than a preset third proportion threshold, determine that the position of the first camera remained unchanged when the current frame and the previous frame were captured; determine a fourth proportion that the number of feature points that moved in the second picture relative to the fourth picture accounts for among the number of all feature points; and if the fourth proportion is less than a preset fourth proportion threshold, determine that the position of the second camera remained unchanged when the current frame and the previous frame were captured.
- The apparatus according to claim 8, characterized by further comprising: a second judgment module (7015), configured to judge whether a second condition is satisfied, the second condition comprising: the brightness change of the first picture relative to the third picture is smaller than a preset first brightness change threshold, and the brightness change of the second picture relative to the fourth picture is smaller than a preset second brightness change threshold; and a second parameter acquisition module (7016), configured to, if the second condition is satisfied, acquire a first exposure compensation parameter used when performing exposure compensation on the third picture and a second exposure compensation parameter used when performing exposure compensation on the fourth picture; the picture processing module (7014) being specifically configured to fuse the first picture and the second picture, wherein exposure compensation is performed on the first picture using the first exposure compensation parameter and on the second picture using the second exposure compensation parameter.
- The apparatus according to claim 12, characterized by further comprising a first parameter calculation module (7017), configured to calculate, if the first condition is not satisfied, a third projection transformation parameter that needs to be used to perform projection transformation on the first picture and a fourth projection transformation parameter that needs to be used to perform projection transformation on the second picture, the picture processing module (7014) being specifically configured to fuse the first picture and the second picture, wherein projection transformation is performed on the first picture using the third projection transformation parameter and on the second picture using the fourth projection transformation parameter; and further comprising a second parameter calculation module (7018), configured to calculate, if the second condition is not satisfied, a third exposure compensation parameter that needs to be used to perform exposure compensation on the first picture and a fourth exposure compensation parameter that needs to be used to perform exposure compensation on the second picture, the picture processing module (7014) being specifically configured to fuse the first picture and the second picture, wherein exposure compensation is performed on the first picture using the third exposure compensation parameter and on the second picture using the fourth exposure compensation parameter; wherein the operation of the first parameter calculation module (7017) calculating the third projection transformation parameter and the fourth projection transformation parameter and the operation of the second parameter calculation module (7018) calculating the third exposure compensation parameter and the fourth exposure compensation parameter are executed in parallel.
- The apparatus according to claim 8, characterized in that the first camera and the second camera are cameras in a closed-circuit television (CCTV) system and the method is executed by the same edge processing device to which the first camera and the second camera are connected; or the first camera and the second camera are cameras in a roadside parking system and the method is executed by the same edge processing device to which the first camera and the second camera are connected.
- An image stitching apparatus (70), characterized by comprising: at least one memory (7001), configured to store computer-readable code; and at least one processor (7002), configured to invoke the computer-readable code to execute the method according to any one of claims 1 to 7.
- A computer-readable medium, characterized in that computer-readable instructions are stored on the computer-readable medium, and the computer-readable instructions, when executed by a processor, cause the processor to execute the method according to any one of claims 1 to 7.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20967850.7A EP4273790A1 (en) | 2020-12-31 | 2020-12-31 | Image stitching method and apparatus, and computer-readable medium |
PCT/CN2020/142401 WO2022141512A1 (zh) | 2020-12-31 | 2020-12-31 | 一种图像拼接方法、装置和计算机可读介质 |
CN202080107492.8A CN116490894A (zh) | 2020-12-31 | 2020-12-31 | 一种图像拼接方法、装置和计算机可读介质 |
US18/259,342 US20240064265A1 (en) | 2020-12-31 | 2020-12-31 | Image Stitching Method and Apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/142401 WO2022141512A1 (zh) | 2020-12-31 | 2020-12-31 | 一种图像拼接方法、装置和计算机可读介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022141512A1 true WO2022141512A1 (zh) | 2022-07-07 |
Family
ID=82260109
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/142401 WO2022141512A1 (zh) | 2020-12-31 | 2020-12-31 | 一种图像拼接方法、装置和计算机可读介质 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20240064265A1 (zh) |
EP (1) | EP4273790A1 (zh) |
CN (1) | CN116490894A (zh) |
WO (1) | WO2022141512A1 (zh) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104851076A (zh) * | 2015-05-27 | 2015-08-19 | 武汉理工大学 | 用于商用车的全景环视泊车辅助系统及摄像头安装方法 |
CN110796596A (zh) * | 2019-08-30 | 2020-02-14 | 深圳市德赛微电子技术有限公司 | 图像拼接方法、成像装置及全景成像系统 |
CN111583110A (zh) * | 2020-04-24 | 2020-08-25 | 华南理工大学 | 一种航拍图像的拼接方法 |
US20200374498A1 (en) * | 2018-12-17 | 2020-11-26 | Lightform, Inc. | Method for augmenting surfaces in a space with visual content |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102592124B (zh) * | 2011-01-13 | 2013-11-27 | 汉王科技股份有限公司 | 文本图像的几何校正方法、装置和双目立体视觉系统 |
EP3032459A1 (en) * | 2014-12-10 | 2016-06-15 | Ricoh Company, Ltd. | Realogram scene analysis of images: shelf and label finding |
CN106709894B (zh) * | 2015-08-17 | 2020-10-27 | 北京亿羽舜海科技有限公司 | 一种图像实时拼接方法及系统 |
CN110874817B (zh) * | 2018-08-29 | 2022-02-01 | 上海商汤智能科技有限公司 | 图像拼接方法和装置、车载图像处理装置、设备、介质 |
- 2020-12-31 EP EP20967850.7A patent/EP4273790A1/en active Pending
- 2020-12-31 US US18/259,342 patent/US20240064265A1/en active Pending
- 2020-12-31 WO PCT/CN2020/142401 patent/WO2022141512A1/zh active Application Filing
- 2020-12-31 CN CN202080107492.8A patent/CN116490894A/zh active Pending
Also Published As
Publication number | Publication date |
---|---|
CN116490894A (zh) | 2023-07-25 |
US20240064265A1 (en) | 2024-02-22 |
EP4273790A1 (en) | 2023-11-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20967850 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202080107492.8 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18259342 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2020967850 Country of ref document: EP Effective date: 20230731 |