CN112004054A - Multi-azimuth monitoring method, equipment and computer readable storage medium - Google Patents

Multi-azimuth monitoring method, equipment and computer readable storage medium

Info

Publication number
CN112004054A
CN112004054A (application CN202010747784.XA)
Authority
CN
China
Prior art keywords
image
monitoring method
target
images
background image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010747784.XA
Other languages
Chinese (zh)
Inventor
赖振楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hosin Global Electronics Co Ltd
Original Assignee
Hosin Global Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hosin Global Electronics Co Ltd filed Critical Hosin Global Electronics Co Ltd
Priority to CN202010747784.XA
Publication of CN112004054A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Abstract

The invention provides a multi-azimuth monitoring method, a device, and a computer-readable storage medium. The multi-azimuth monitoring method comprises the following steps: acquiring a plurality of consecutive original images; cropping a plurality of local images from a plurality of preset positions of each frame of the original image, and generating a target image from each local image; and displaying the target images corresponding to each preset position, wherein the target images at the same preset position are displayed in the same order as the original images. Applied in electronic equipment such as monitoring devices, the monitoring method of the invention divides and corrects the image acquired by the camera and distributes the processed images to different display devices or display areas, so that a single wide-angle lens suffices to monitor a scene from multiple directions, which saves cost and facilitates later troubleshooting and maintenance.

Description

Multi-azimuth monitoring method, equipment and computer readable storage medium
Technical Field
The present invention relates to the field of video surveillance, and more particularly, to a multi-orientation surveillance method, apparatus, and computer-readable storage medium.
Background
In the prior art, realizing multi-directional monitoring requires installing a plurality of lenses in the same scene, which increases cost. Alternatively, a single lens can be driven to rotate by a pan-tilt head so as to monitor different angles; however, the pan-tilt head also increases cost, and with this solution the monitoring of each angle is discontinuous in time.
With the development of technology, wide-angle cameras are becoming more and more common. These cameras have a large field of view, typically greater than 100°, and can capture a wide scene at a short shooting distance.
Because wide-angle lenses can cover a large area, more and more monitoring systems employ them. However, in an image shot with a wide-angle lens, objects appear smaller in the frame than they would with a standard lens; in addition, defects such as perspective distortion and image distortion are likely to appear, and these defects become more pronounced as the focal length of the lens and the shooting distance become shorter.
Disclosure of Invention
The present application is directed to solving, at least in part, one of the technical problems in the related art.
The technical problem to be solved by the invention is that a single lens cannot by itself realize multi-directional monitoring. The technical solution to this problem is as follows.
In a first aspect, the present invention provides a multi-orientation monitoring method comprising the following steps: acquiring a plurality of consecutive original images; cropping a plurality of local images from a plurality of preset positions of each frame of the original image, and generating a target image from each local image; and displaying the target images corresponding to each preset position, wherein the target images at the same preset position are displayed in the same order as the original images.
Preferably, the original image is obtained by shooting with the same fisheye camera, and the preset positions respectively correspond to different shooting directions of the fisheye camera.
Preferably, the generating a target image according to each of the partial images includes the following steps performed for each of the partial images: and carrying out image distortion correction on the local image to obtain the target image.
Preferably, the generating a target image according to each of the partial images includes the following steps performed for each of the partial images: identifying a non-background image and a background image in the local image; carrying out image distortion correction processing on the non-background image and carrying out image distortion correction processing on the background image; and combining the non-background image after the correction processing with the background image after the correction processing to generate a target image.
Preferably, the performing image distortion correction processing on the non-background image includes: and processing each non-background image through a deep learning model respectively to realize image distortion correction.
Preferably, the generating a target image according to each of the partial images includes the following steps performed for each of the partial images: identifying a non-background image in the local image; carrying out image distortion correction processing on the non-background image; and combining the non-background image after the correction processing with the standard background image to generate a target image.
Preferably, the respectively displaying the target image at each preset position includes: the method comprises the steps of respectively displaying target images corresponding to a plurality of preset positions in a plurality of different display devices, and displaying the target image corresponding to one preset position on each display device.
Preferably, the respectively displaying the target image at each preset position includes: and respectively displaying the target images corresponding to the plurality of preset positions in a plurality of different display areas of the same display device, wherein each display area displays the target image corresponding to one preset position.
In a second aspect, the present invention provides a multi-azimuth monitoring apparatus, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the multi-azimuth monitoring method as described above when executing the computer program.
In a third aspect, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the multi-aspect monitoring method as described above.
Drawings
FIG. 1 is a schematic flow chart of a multi-aspect monitoring method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of monitoring by using the multi-azimuth monitoring method provided by the embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The monitoring device of the invention can independently realize wide-range monitoring without blind spots and has great advantages in short-distance shooting.
As shown in fig. 1, the multi-azimuth monitoring method provided by an embodiment of the present invention includes the following specific steps:
step S101, acquiring a plurality of continuous original images. The multiple continuous original images may be a surrounding environment monitoring video.
Specifically, referring to fig. 2, taking as an example a monitoring device employing the fisheye camera 1, whose field of view can generally reach 220° or 230°, the fisheye camera 1 can be installed in public or private places such as a road lamp post or the outer or inner wall of a building. The original image in this step is obtained by shooting with the fisheye camera 1; the scene area and content of the original image are not limited.
Step S102, a plurality of local images are respectively cropped from a plurality of preset positions 3 of each frame of the original image, and a target image 4 is generated from each local image.
In this embodiment, each preset position 3 corresponds to a certain angle or a local area of a certain angle of the fisheye camera 1, and the preset positions 3 correspond to different shooting directions of the fisheye camera 1 respectively.
Specifically, the above-mentioned preset positions 3 may be determined in advance according to the content of the original image. For example, when the original image is captured by a fisheye camera 1 monitoring a road, each local image may correspond to an intersection of the road.
Step S103, respectively displaying the target images 4 corresponding to each preset position 3, wherein the display sequence of the target images 4 at the same preset position 3 is the same as the sequence of the original images.
After the local images are cropped, they are processed by the processor; each local image generates a target image 4, which can be output to the terminal display device 2. On the terminal display device 2, each target image 4 is displayed on a different screen, and the target images 4 at the same preset position 3 are displayed in the same order as the original images (i.e., the order of the video frames). With the method of this embodiment, the original image acquired by a single camera can be divided into a plurality of images displayed on different devices or screens, so that the content of a single monitoring lens is shown in more detail, which matches ordinary viewing habits and makes it easier to notice details in the monitored environment.
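Steps S101-S103 can be sketched as follows. The crop-and-fan-out logic below is a minimal illustration, not the patent's implementation: frames are modeled as 2-D lists of pixel values, and the preset positions 3 as hypothetical (row, col, height, width) boxes. Note that iterating frames in their original order is what preserves the per-position display sequence required by step S103.

```python
def crop(frame, box):
    """Crop a (row, col, height, width) box from a frame stored as a 2-D list."""
    r, c, h, w = box
    return [row[c:c + w] for row in frame[r:r + h]]

def split_streams(frames, preset_boxes):
    """Return one ordered sub-stream of cropped local images per preset position."""
    streams = {i: [] for i in range(len(preset_boxes))}
    for frame in frames:                      # original frame order is preserved
        for i, box in enumerate(preset_boxes):
            streams[i].append(crop(frame, box))
    return streams

# Two 4x6 "frames" with distinct pixel values and two preset 2x2 regions.
frames = [[[10 * f + r + c for c in range(6)] for r in range(4)] for f in range(2)]
boxes = [(0, 0, 2, 2), (2, 4, 2, 2)]
streams = split_streams(frames, boxes)
```

Each sub-stream in `streams` would then be corrected and routed to its own display device or display area.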
In another embodiment of the present invention, after the local image is cropped in step S102, image distortion correction is further performed to obtain the corrected target image 4.
Specifically, since an image captured with the fisheye camera 1 or another wide-angle lens is distorted, the distortion of the local area must be corrected to produce a true and natural target image 4. In this embodiment, a purely mathematical distortion correction method (e.g., matrix-based distortion correction) may be employed. The scenery and other content (such as a person) in the corrected target image 4 are more natural and realistic than in the uncorrected local image. The distortion correction algorithm of this embodiment can also be used to process images shot with a wide-angle lens.
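A minimal sketch of such a purely mathematical correction is a one-parameter radial model applied to point coordinates. The division model and the coefficient `k` below are illustrative assumptions, not taken from the patent; a real pipeline would remap whole pixel grids using calibrated camera parameters.

```python
def undistort_point(x, y, k, cx=0.0, cy=0.0):
    """Map a barrel-distorted point to its corrected position (division model)."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 / (1.0 - k * r2)   # k > 0: points are pushed outward from the centre
    return cx + dx * scale, cy + dy * scale

# Points farther from the optical centre are moved proportionally more,
# which is what straightens the bowed lines of a fisheye image.
near = undistort_point(0.1, 0.0, k=0.2)
far = undistort_point(0.5, 0.0, k=0.2)
```

Applying this mapping (inverted) over a pixel grid, with interpolation, is the matrix-style correction the embodiment refers to.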
In another embodiment of the present invention, before the distortion correction is performed, the local image may be segmented into a background image and a non-background image.
Specifically, taking a person as the non-background content: the local image is scanned and passed to an artificial-intelligence image segmentation model, which performs image recognition, distinguishes the background image from the person image, and segments the two. Distortion correction is then applied to the background image and/or the person image; after correction, the background and the person are faithfully restored without obvious deformation. Combining the background image (corrected or uncorrected) with the person image (corrected or uncorrected) yields a target image 4 with a complete scene and persons and without obvious deformation.
For the distortion correction of the non-background images, each non-background image can be processed by a deep learning model; because the model has been trained on a large amount of data, the non-background image can be restored more truly and naturally. For example, the deep learning model may be an existing Generative Adversarial Network (GAN), but other types of deep learning models may be used in practice.
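The segment-correct-recombine flow described above can be sketched as follows. The patent relies on an AI segmentation model and a deep-learning corrector; here both are deliberately replaced by simple stand-ins (differencing against a reference background frame, and leaving the images uncorrected) so that the mask-based recombination logic itself is runnable.

```python
def segment(frame, background, threshold=5):
    """Return a mask: True where the pixel differs from the background (non-background)."""
    return [[abs(p - b) > threshold for p, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def recombine(fg, bg, mask):
    """Compose the target image: non-background pixels where the mask is set, else background."""
    return [[f if m else b for f, b, m in zip(fr, br, mr)]
            for fr, br, mr in zip(fg, bg, mask)]

background = [[0, 0, 0], [0, 0, 0]]
frame = [[0, 90, 0], [0, 90, 0]]          # a bright "person" in the middle column
mask = segment(frame, background)
target = recombine(frame, background, mask)
```

In the embodiment, `segment` would be the AI segmentation model, and the person region would pass through the GAN-based corrector before `recombine` merges it back onto the (corrected or standard) background.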
In another embodiment of the present invention, after the artificial-intelligence segmentation model segments the local image, the class of each segmented image region may be identified, and white balance processing then applied according to that class. Alternatively, the content information of each segmented scene area may be acquired and a processing mode matching that content looked up, so that different content (such as persons or buildings) is processed in different ways; this makes the person images and background images more natural and improves the visual effect.
In another embodiment of the present invention, referring to fig. 2, the target images can be displayed in step S103 in two ways: the target images corresponding to the preset positions 3 are displayed on a plurality of different display devices 2, with each display device 2 showing the target image 4 of one preset position 3; or the target images 4 corresponding to the preset positions 3 are displayed in a plurality of different display areas of the same display device 2, with each display area showing the target image 4 of one preset position 3. The user can choose between the two according to the actual situation.
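The second option (several display areas on one device) amounts to tiling the per-position target images into a single output frame. The sketch below assumes, for simplicity, equal-sized images laid out in one row; these layout choices are illustrative, not from the patent.

```python
def tile_horizontally(images):
    """Concatenate same-height images left to right into one display frame."""
    height = len(images[0])
    assert all(len(img) == height for img in images), "images must share a height"
    return [sum((img[r] for img in images), []) for r in range(height)]

# Each area of the composed screen shows the target image of one preset position.
a = [[1, 1], [1, 1]]
b = [[2, 2], [2, 2]]
screen = tile_horizontally([a, b])
```

The first option simply skips the tiling and sends each sub-stream to its own device.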
In another embodiment of the present invention, before the original image is divided in step S102, AI (artificial intelligence) detail enhancement is applied to the original image obtained from the camera; that is, a machine learning component is added to the "demosaicing" stage of image generation, which reconstructs a full-color image from the incomplete color samples delivered by the image sensor and restores a picture closer to the real scene.
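To make the demosaicing step concrete, here is a sketch without the machine-learning component: full RGB pixels are reconstructed from an RGGB Bayer mosaic by collapsing each 2x2 cell (averaging its two green samples). The RGGB layout and the per-cell reconstruction are simplifying assumptions; the AI detail enhancement described above would replace this fixed averaging with a learned interpolator.

```python
def demosaic_rggb(bayer):
    """Collapse each 2x2 RGGB cell of a raw mosaic into one (R, G, B) pixel."""
    out = []
    for r in range(0, len(bayer), 2):
        row = []
        for c in range(0, len(bayer[0]), 2):
            red = bayer[r][c]
            green = (bayer[r][c + 1] + bayer[r + 1][c]) / 2  # average both greens
            blue = bayer[r + 1][c + 1]
            row.append((red, green, blue))
        out.append(row)
    return out

raw = [[100, 50], [60, 20]]   # one RGGB cell of the sensor's color-filter array
rgb = demosaic_rggb(raw)
```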
The embodiment provides a multi-azimuth monitoring device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor executes the computer program to implement the steps of the multi-azimuth monitoring method.
The fisheye camera 1 or another wide-angle lens acquires the original picture; the memory stores image information and other information; the processor executes the computer program, performs the recognition, segmentation, correction, and combination of the images, and transmits the processed images to the display device 2 or the memory. The display device 2 may display one or more target images 4 as required, or the user may choose to display different pictures on different display devices 2. The monitoring device of this embodiment achieves multi-directional monitoring of a scene with only one camera, which saves cost and facilitates later troubleshooting and maintenance.
The multi-directional monitoring apparatus in this embodiment is the same as the multi-directional monitoring method in the embodiment corresponding to fig. 1-2, and the specific implementation process thereof is detailed in the corresponding method embodiment, and the technical features in the method embodiment are all correspondingly applicable in this apparatus embodiment, which is not described herein again.
In one embodiment, the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the multi-aspect monitoring method as described above.
It will be understood by those skilled in the art that all or part of the processes of the methods described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the system is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A multi-azimuth monitoring method is characterized by comprising the following steps:
acquiring multiple continuous original images;
respectively cropping a plurality of local images from a plurality of preset positions of each frame of original image, and respectively generating a target image according to each local image;
and respectively displaying the target images corresponding to each preset position, wherein the display sequence of the target images at the same preset position is the same as that of the original images.
2. The multi-azimuth monitoring method according to claim 1, wherein the original image is captured by a same fisheye camera, and the preset positions respectively correspond to different capturing orientations of the fisheye camera.
3. The multi-azimuth monitoring method according to claim 2, wherein the generating a target image from each of the partial images comprises the following steps performed for each of the partial images:
and carrying out image distortion correction on the local image to obtain the target image.
4. The multi-azimuth monitoring method according to claim 2, wherein the generating a target image from each of the partial images comprises the following steps performed for each of the partial images:
identifying a non-background image and a background image in the local image;
carrying out image distortion correction processing on the non-background image and carrying out image distortion correction processing on the background image;
and combining the non-background image after the correction processing with the background image after the correction processing to generate a target image.
5. The multi-azimuth monitoring method according to claim 4, wherein the performing image distortion correction processing on the non-background image comprises:
and processing each non-background image through a deep learning model respectively to realize image distortion correction.
6. The multi-azimuth monitoring method according to claim 2, wherein the generating a target image from each of the partial images comprises the following steps performed for each of the partial images:
identifying a non-background image in the local image;
carrying out image distortion correction processing on the non-background image;
and combining the non-background image after the correction processing with the standard background image to generate a target image.
7. The multi-azimuth monitoring method according to any one of claims 1-6, wherein the displaying the target image of each preset position respectively comprises:
the method comprises the steps of respectively displaying target images corresponding to a plurality of preset positions in a plurality of different display devices, and displaying the target image corresponding to one preset position on each display device.
8. The multi-azimuth monitoring method according to any one of claims 1-6, wherein the displaying the target image of each preset position respectively comprises:
and respectively displaying the target images corresponding to the plurality of preset positions in a plurality of different display areas of the same display device, wherein each display area displays the target image corresponding to one preset position.
9. A multi-azimuth monitoring apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor when executing the computer program implements the steps of the multi-azimuth monitoring method according to any of claims 1 to 8.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the multi-aspect monitoring method according to any one of claims 1 to 8.
CN202010747784.XA 2020-07-29 2020-07-29 Multi-azimuth monitoring method, equipment and computer readable storage medium Pending CN112004054A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010747784.XA CN112004054A (en) 2020-07-29 2020-07-29 Multi-azimuth monitoring method, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010747784.XA CN112004054A (en) 2020-07-29 2020-07-29 Multi-azimuth monitoring method, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN112004054A true CN112004054A (en) 2020-11-27

Family

ID=73462651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010747784.XA Pending CN112004054A (en) 2020-07-29 2020-07-29 Multi-azimuth monitoring method, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112004054A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191976A (en) * 2021-04-30 2021-07-30 Oppo广东移动通信有限公司 Image shooting method, device, terminal and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101860729A (en) * 2010-04-16 2010-10-13 天津理工大学 Target tracking method for omnidirectional vision
CN101865679A (en) * 2010-06-18 2010-10-20 杭州双树科技有限公司 Plane area measuring method based on digital image technology
CN105844584A (en) * 2016-03-19 2016-08-10 上海大学 Method for correcting image distortion of fisheye lens
CN107124543A (en) * 2017-02-20 2017-09-01 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN108447022A (en) * 2018-03-20 2018-08-24 北京天睿空间科技股份有限公司 Moving target joining method based on single fixing camera image sequence
CN108510457A (en) * 2018-03-28 2018-09-07 京东方科技集团股份有限公司 Image correction method, device, display equipment
CN108629748A (en) * 2018-04-16 2018-10-09 深圳臻迪信息技术有限公司 Image correction method, device, electronic equipment and computer readable storage medium
CN108717704A (en) * 2018-05-15 2018-10-30 珠海全志科技股份有限公司 Method for tracking target, computer installation based on fish eye images and computer readable storage medium
CN110430359A (en) * 2019-07-31 2019-11-08 北京迈格威科技有限公司 Shoot householder method, device, computer equipment and storage medium
CN110570373A (en) * 2019-09-04 2019-12-13 北京明略软件系统有限公司 Distortion correction method and apparatus, computer-readable storage medium, and electronic apparatus
CN111353336A (en) * 2018-12-21 2020-06-30 华为技术有限公司 Image processing method, device and equipment

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101860729A (en) * 2010-04-16 2010-10-13 天津理工大学 Target tracking method for omnidirectional vision
CN101865679A (en) * 2010-06-18 2010-10-20 杭州双树科技有限公司 Plane area measuring method based on digital image technology
CN105844584A (en) * 2016-03-19 2016-08-10 上海大学 Method for correcting image distortion of fisheye lens
CN107124543A (en) * 2017-02-20 2017-09-01 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN108447022A (en) * 2018-03-20 2018-08-24 北京天睿空间科技股份有限公司 Moving target joining method based on single fixing camera image sequence
CN108510457A (en) * 2018-03-28 2018-09-07 京东方科技集团股份有限公司 Image correction method, device, display equipment
CN108629748A (en) * 2018-04-16 2018-10-09 深圳臻迪信息技术有限公司 Image correction method, device, electronic equipment and computer readable storage medium
CN108717704A (en) * 2018-05-15 2018-10-30 珠海全志科技股份有限公司 Method for tracking target, computer installation based on fish eye images and computer readable storage medium
CN111353336A (en) * 2018-12-21 2020-06-30 华为技术有限公司 Image processing method, device and equipment
CN110430359A (en) * 2019-07-31 2019-11-08 北京迈格威科技有限公司 Shoot householder method, device, computer equipment and storage medium
CN110570373A (en) * 2019-09-04 2019-12-13 北京明略软件系统有限公司 Distortion correction method and apparatus, computer-readable storage medium, and electronic apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
关雪梅: "图像特征提取技术研究" [Research on Image Feature Extraction Technology], 《绥化学院学报》 [Journal of Suihua University] *
杨前华等: "基于鱼眼视频图像的人群密度估计算法的研究" [Research on a Crowd Density Estimation Algorithm Based on Fisheye Video Images], 《电子器件》 [Chinese Journal of Electron Devices] *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191976A (en) * 2021-04-30 2021-07-30 Oppo广东移动通信有限公司 Image shooting method, device, terminal and storage medium
CN113191976B (en) * 2021-04-30 2024-03-22 Oppo广东移动通信有限公司 Image shooting method, device, terminal and storage medium

Similar Documents

Publication Publication Date Title
CN106791710B (en) Target detection method and device and electronic equipment
US10609282B2 (en) Wide-area image acquiring method and apparatus
US10764496B2 (en) Fast scan-type panoramic image synthesis method and device
US10204398B2 (en) Image distortion transformation method and apparatus
CN111028137B (en) Image processing method, apparatus, electronic device, and computer-readable storage medium
CN110650291B (en) Target focus tracking method and device, electronic equipment and computer readable storage medium
CN110290323B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
US20220222830A1 (en) Subject detecting method and device, electronic device, and non-transitory computer-readable storage medium
CN108805807B (en) Splicing method and system for ring scene images
CN110298862A (en) Method for processing video frequency, device, computer readable storage medium and computer equipment
CN108335272B (en) Method and device for shooting picture
WO2019037038A1 (en) Image processing method and device, and server
CN111951180A (en) Image shake correction method, image shake correction apparatus, computer device, and storage medium
CN112995510B (en) Method and system for detecting environment light of security monitoring camera
CN108717704B (en) Target tracking method based on fisheye image, computer device and computer readable storage medium
CN107610045B (en) Brightness compensation method, device and equipment in fisheye picture splicing and storage medium
CN110278366B (en) Panoramic image blurring method, terminal and computer readable storage medium
WO2018166170A1 (en) Image processing method and device, and intelligent conferencing terminal
CN112004054A (en) Multi-azimuth monitoring method, equipment and computer readable storage medium
CN108600631B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN110650288B (en) Focusing control method and device, electronic equipment and computer readable storage medium
CN109120856B (en) Camera shooting method and device
CN112215749A (en) Image splicing method, system and equipment based on cylindrical projection and storage medium
CN113472998B (en) Image processing method, image processing device, electronic equipment and storage medium
CN115278103A (en) Security monitoring image compensation processing method and system based on environment perception

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination