CN111144478B - Automatic detection method for goof shots - Google Patents

Automatic detection method for goof shots

Info

Publication number
CN111144478B
Authority
CN
China
Prior art keywords
image
lens
marking
shooting
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911356569.0A
Other languages
Chinese (zh)
Other versions
CN111144478A (en)
Inventor
郑文锋
杨波
李建强
刘珊
曾庆川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201911356569.0A
Publication of CN111144478A
Application granted
Publication of CN111144478B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses an automatic detection method for goof shots. Fixed cameras are installed at multiple angles according to the specific conditions of the shooting site; in coordination with the director's shooting instruction, each scene is shot several times in succession with the fixed lenses, and the raw on-site images are collected and preprocessed. Finally, the images are compared and matched one by one with a matching algorithm, so that goof shots are detected automatically through image comparison, solving the problems of low efficiency and low accuracy in manual inspection.

Description

Automatic detection method for goof shots
Technical Field
The invention belongs to the technical field of image processing, and in particular relates to an automatic detection method for goof shots.
Background
An important task in film and television drama production is checking the footage for goof shots, which is currently done by hand. A goof shot (goof) is an unreasonable inconsistency between earlier and later shots: because the shots of a production are filmed separately, even a slight difference in the arrangement of the scene between takes of the same setting leaves a visible flaw ("hard damage") on screen.
Specifically, the captured footage must be inspected manually for the various kinds of goofs, such as a prop that does not belong to the period of the story, or an unreasonable change in the position of a fixed object relative to the preceding shot.
The prevailing approach is manual inspection. Although automatic goof-detection methods exist, they can only detect scenes or objects that do not match the preset background of the production; they cannot detect goofs comprehensively, and they suffer from low efficiency and low accuracy.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an automatic goof-shot detection method that detects goofs through image comparison, solving the problems of low efficiency and low accuracy in manual inspection.
In order to achieve the above object, the present invention provides an automatic detection method for goof shots, characterized by comprising the following steps:
(1) Install fixed cameras at multiple angles
According to the specific conditions of the shooting site, install fixed cameras at multiple angles, ensuring that the auxiliary fixed lens of each fixed camera captures as much of the fixed background as possible;
(2) Collect the raw on-site images
In coordination with the director's shooting instruction, shoot each scene several times in succession with the fixed lenses while recording the corresponding shooting time, obtaining time-sequenced raw images labelled by scene, denoted C_ij, where i = 1, 2, … is the scene index and j = 1, 2, … is the shooting order; finally, group all raw images into an image set;
(3) Preprocess the raw images
(3.1) Extract and name the preprocessed images
Extract the time information carried by each raw image in the image set and arrange the images of the same scene in shooting order; record the raw image shot first in each scene as the preceding image C_i1, and record the remaining raw images as the subsequent images C_ij, i = 1, 2, …, j = 2, 3, …;
(3.2) Detect the actor positions in the images and mark them with rectangles
Use an object detection algorithm to obtain the actor positions in the preceding image and in the subsequent image, denoted P1 and P2 respectively; mark each actor position as a rectangular region, then delete the corresponding rectangular regions from the preceding image and the subsequent image to obtain the images to be compared, i = 1, 2, …, j = 2, 3, …;
(4) Automatic goof-shot detection
(4.1) Convert the images to be compared into grayscale and crop them to two grayscale images of equal size, i = 1, 2, …, j = 2, 3, …;
(4.2) Perform matching comparison by pixel-by-pixel traversal
(4.2.1) First create a blank picture of the same size as the cropped images, denoted C_0;
(4.2.2) Use a two-level loop in which the outer loop steps through the pixel positions of the preceding grayscale image and the inner loop steps through the pixel positions of the subsequent grayscale image; compare and match the two images pixel by pixel with the matchTemplate API function and store the output in result, where 0 denotes a mismatch and 1 denotes a match, so that result is a matrix of 0s and 1s;
(4.3) Mark the unmatched pixels with rectangles
(4.3.1) Find the unmatched pixels in result with the findContours API function and highlight them; mark the highlighted pixels with rectangles, and form the rectangle-marked pixels into a transparent background image C'_ij, which represents the matching result of the preceding and subsequent grayscale images;
(4.3.2) Pass the transparent background image C'_ij, the subsequent image and the blank picture C_0 to the Copy function in turn, so that C'_ij and the subsequent image are copied onto C_0, yielding a rectangle-marked image that carries the goof information, i = 1, 2, …, j = 2, 3, ….
The object of the invention is achieved as follows:
In the automatic goof-shot detection method of the invention, fixed cameras are installed at multiple angles according to the specific conditions of the shooting site; in coordination with the director's shooting instruction, each scene is shot several times in succession with the fixed lenses, and the raw on-site images are collected and preprocessed; finally, the images are compared and matched one by one with a matching algorithm, so that goof shots are detected automatically through image comparison, solving the problems of low efficiency and low accuracy in manual inspection.
Drawings
FIG. 1 is a flow chart of the automatic goof-shot detection method of the present invention;
Detailed Description
The embodiments of the present invention are described below with reference to the accompanying drawings so that those skilled in the art can better understand the invention. It should be noted that in the following description, detailed descriptions of known functions and designs are omitted where they would obscure the subject matter of the invention.
Examples
FIG. 1 is a flow chart of the automatic goof-shot detection method of the present invention.
In this embodiment, as shown in FIG. 1, the automatic goof-shot detection method of the present invention includes the following steps:
S1. Install fixed cameras at multiple angles
According to the specific conditions of the shooting site, install fixed cameras at multiple angles, ensuring that the auxiliary fixed lens of each fixed camera captures as much of the fixed background as possible;
S2. Collect the raw on-site images
In coordination with the director's shooting instruction, shoot each scene several times in succession with the fixed lenses while recording the corresponding shooting time, obtaining time-sequenced raw images labelled by scene, denoted C_ij, where i = 1, 2, … is the scene index and j = 1, 2, … is the shooting order; for example, C_21 denotes the first shot of the second scene; finally, group all raw images into an image set;
S3. Preprocess the raw images
S3.1. Extract and name the preprocessed images
Extract the time information carried by each raw image in the image set and arrange the images of the same scene in shooting order; record the raw image shot first in each scene as the preceding image C_i1, and record the remaining raw images as the subsequent images C_ij, i = 1, 2, …, j = 2, 3, …. For example, the first shot (j = 1) of the first scene (i = 1) is recorded as the preceding image C_11, and the second shot (j = 2) of the first scene is recorded as the subsequent image C_12.
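For illustration only, the grouping and naming of step S3.1 can be sketched in Python as follows. This sketch is not part of the original disclosure; the scene_<i>_<timestamp>.png file-name convention is an assumed stand-in for whatever scene and time metadata the recording system actually provides.

```python
import os
import re
from collections import defaultdict

def group_shots(image_dir):
    """Group raw images by scene and order them by shooting time.

    File names of the form scene_<i>_<unixtime>.png are an assumption of
    this sketch; any source of (scene, timestamp) metadata would do.
    Index 0 of each returned list is the preceding image C_i1, the
    remaining entries are the subsequent images C_ij (j = 2, 3, ...).
    """
    pattern = re.compile(r"scene_(\d+)_(\d+)\.png$")
    scenes = defaultdict(list)
    for name in os.listdir(image_dir):
        m = pattern.match(name)
        if m:
            scene_id, ts = int(m.group(1)), int(m.group(2))
            scenes[scene_id].append((ts, os.path.join(image_dir, name)))
    # Sort each scene's shots by timestamp; the earliest is the preceding image.
    return {i: [path for _, path in sorted(shots)] for i, shots in scenes.items()}
```

The returned lists can then be paired as (preceding image, each subsequent image) for the comparison steps below.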
S3.2. Detect the actor positions in the images and mark them with rectangles
Use an object detection algorithm to obtain the actor positions in the preceding image and in the subsequent image, denoted P1 and P2 respectively; mark each actor position as a rectangular region, then delete the corresponding rectangular regions from the preceding image and the subsequent image to obtain the images to be compared, i = 1, 2, …, j = 2, 3, …;
In this embodiment, an existing popular object detection framework, such as the TensorFlow Object Detection API, is installed and tested; the method uses the framework's default SSD + MobileNet model.
First, make sure TensorFlow 1.5+ is installed. If an NVIDIA GPU is available, the GPU build of TensorFlow can optionally be used to exploit the hardware and speed up computation.
Install the necessary dependencies: Pillow, Jupyter, Matplotlib, lxml, the TensorFlow Object Detection API, and Protobuf.
Configure the runtime environment: after downloading and unpacking Protobuf, add its bin directory to the PATH environment variable; then open CMD as administrator, change into the research directory of the unpacked models-master tree of the TensorFlow Object Detection API, and execute: protoc object_detection/protos/*.proto --python_out=.
Then run the framework's environment-test .py script to verify that the runtime environment has been set up successfully.
At this point, the actor-detection task can be run from the command line or from an IDE tool; specifically, the actor position information P1 and P2 in the two images is acquired through the getPos(img, tag) API provided by OpenCV.
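As a concrete illustration of step S3.2, the sketch below uses OpenCV's built-in HOG person detector as a lightweight stand-in for the SSD + MobileNet model and the getPos helper mentioned above; the detector choice and the fill-with-black deletion of the actor rectangles are assumptions of this sketch, not requirements of the method.

```python
import cv2

def remove_actors(img_bgr):
    """Detect actor regions and delete them (fill with black) so that only
    the fixed background is left for comparison, as in step S3.2.

    The embodiment uses an SSD + MobileNet model from the TensorFlow Object
    Detection API; OpenCV's HOG person detector is used here only as a
    lightweight stand-in.
    """
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, _ = hog.detectMultiScale(img_bgr, winStride=(8, 8),
                                    padding=(8, 8), scale=1.05)
    cleaned = img_bgr.copy()
    for (x, y, w, h) in rects:
        cleaned[y:y + h, x:x + w] = 0          # delete the actor rectangle
    return cleaned, [tuple(map(int, r)) for r in rects]
```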
S4. Automatic goof-shot detection
S4.1. Convert the images to be compared into grayscale and crop them to two grayscale images of equal size, i = 1, 2, …, j = 2, 3, …;
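A minimal sketch of step S4.1, assuming OpenCV and BGR input images; the top-left alignment used for cropping is a choice of this sketch.

```python
import cv2

def to_equal_gray(img_pre, img_sub):
    """Convert the two images to be compared to grayscale and crop them
    (top-left aligned) to a common size, as required by step S4.1."""
    gray_pre = cv2.cvtColor(img_pre, cv2.COLOR_BGR2GRAY)
    gray_sub = cv2.cvtColor(img_sub, cv2.COLOR_BGR2GRAY)
    h = min(gray_pre.shape[0], gray_sub.shape[0])
    w = min(gray_pre.shape[1], gray_sub.shape[1])
    return gray_pre[:h, :w], gray_sub[:h, :w]
```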
S4.2. Perform matching comparison by pixel-by-pixel traversal
S4.2.1. First create a blank picture of the same size as the cropped images, denoted C_0;
S4.2.2. Use a two-level loop in which the outer loop steps through the pixel positions of the preceding grayscale image and the inner loop steps through the pixel positions of the subsequent grayscale image; compare and match the two images pixel by pixel with the matchTemplate API function and store the output in result, where 0 denotes a mismatch and 1 denotes a match, so that result is a matrix of 0s and 1s;
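Step S4.2.2 can be sketched as follows. The embodiment drives the comparison through OpenCV's matchTemplate; here a plain per-pixel tolerance test (with an assumed threshold tol) is used as a simplified stand-in that still yields the 0/1 result matrix.

```python
import numpy as np

def pixel_match(gray_pre, gray_sub, tol=10):
    """Compare two equally sized grayscale images pixel by pixel and return
    a 0/1 matrix (1 = match, 0 = mismatch), as described in step S4.2.2.
    `tol` is an assumed intensity tolerance, not a value from the patent."""
    h, w = gray_pre.shape
    result = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):          # outer loop over rows
        for x in range(w):      # inner loop over columns
            if abs(int(gray_pre[y, x]) - int(gray_sub[y, x])) <= tol:
                result[y, x] = 1
    return result
```

The explicit two-level loop mirrors the wording of the step; the same matrix can be produced in one vectorized expression, e.g. result = (cv2.absdiff(gray_pre, gray_sub) <= tol).astype(np.uint8).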
S4.3. Mark the unmatched pixels with rectangles
S4.3.1. Find the unmatched pixels in result with the findContours API function and highlight them; mark the highlighted pixels with rectangles, and form the rectangle-marked pixels into a transparent background image C'_ij, which represents the matching result of the preceding and subsequent grayscale images;
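A sketch of step S4.3.1, assuming OpenCV 4 (findContours returns two values); the red outline colour and the 2-pixel line width are choices of this sketch.

```python
import cv2
import numpy as np

def mark_mismatches(result):
    """Outline every cluster of mismatched pixels (result == 0) with a
    rectangle on a transparent (BGRA) background, as in step S4.3.1."""
    mismatch = (result == 0).astype(np.uint8) * 255        # highlight mismatches
    contours, _ = cv2.findContours(mismatch, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    h, w = result.shape
    overlay = np.zeros((h, w, 4), dtype=np.uint8)           # fully transparent C'_ij
    for cnt in contours:
        x, y, bw, bh = cv2.boundingRect(cnt)
        cv2.rectangle(overlay, (x, y), (x + bw, y + bh), (0, 0, 255, 255), 2)
    return overlay
```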
S4.3.2. Pass the transparent background image C'_ij, the subsequent image and the blank picture C_0 to the Copy function in turn, so that C'_ij and the subsequent image are copied onto C_0, yielding a rectangle-marked image that carries the goof information, i = 1, 2, …, j = 2, 3, ….
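A sketch of step S4.3.2; alpha blending is used in place of the Copy function named in the text, which is an assumption of this sketch.

```python
import cv2
import numpy as np

def compose_marked_image(subsequent_img, overlay_bgra):
    """Copy the subsequent image and then the transparent marker image
    C'_ij onto a blank canvas C_0, producing the final rectangle-marked
    result of step S4.3.2."""
    h, w = overlay_bgra.shape[:2]
    base = subsequent_img
    if base.ndim == 2:                                   # grayscale -> BGR
        base = cv2.cvtColor(base, cv2.COLOR_GRAY2BGR)
    canvas = np.zeros((h, w, 3), dtype=np.uint8)         # blank picture C_0
    canvas[:] = base[:h, :w]
    alpha = overlay_bgra[:, :, 3:4].astype(np.float32) / 255.0
    blended = overlay_bgra[:, :, :3] * alpha + canvas * (1.0 - alpha)
    return blended.astype(np.uint8)
```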
Although illustrative embodiments of the present invention have been described above to help those skilled in the art understand the invention, the invention is not limited to the scope of these embodiments. Various changes will be apparent to those skilled in the art and are permissible as long as they remain within the spirit and scope of the invention as defined by the appended claims, and all inventions making use of the inventive concept are protected.

Claims (1)

1. An automatic detection method for goof shots, characterized by comprising the following steps:
(1) Install fixed cameras at multiple angles
According to the specific conditions of the shooting site, install fixed cameras at multiple angles, ensuring that the auxiliary fixed lens of each fixed camera can capture the fixed background;
(2) Collect the raw on-site images
In coordination with the director's shooting instruction, shoot each scene several times in succession with the fixed lenses while recording the corresponding shooting time, obtaining time-sequenced raw images labelled by scene, denoted C_ij, where i = 1, 2, … is the scene index and j = 1, 2, … is the shooting order; finally, group all raw images into an image set;
(3) Preprocess the raw images
(3.1) Extract and name the preprocessed images
Extract the time information carried by each raw image in the image set and arrange the images of the same scene in shooting order; record the raw image shot first in each scene as the preceding image C_i1, and record the remaining raw images as the subsequent images C_ij, i = 1, 2, …, j = 2, 3, …;
(3.2) Detect the actor positions in the images and mark them with rectangles
Use an object detection algorithm to obtain the actor positions in the preceding image and in the subsequent image, denoted P1 and P2 respectively; mark each actor position as a rectangular region, then delete the corresponding rectangular regions from the preceding image and the subsequent image to obtain the images to be compared;
(4) Automatic goof-shot detection
(4.1) Convert the images to be compared into grayscale and crop them to two grayscale images of equal size;
(4.2) Perform matching comparison by pixel-by-pixel traversal
(4.2.1) First create a blank picture of the same size as the cropped images, denoted C_0;
(4.2.2) Use a two-level loop in which the outer loop steps through the pixel positions of the preceding grayscale image and the inner loop steps through the pixel positions of the subsequent grayscale image; compare and match the two images pixel by pixel with the matchTemplate API function and store the output in result, where 0 denotes a mismatch and 1 denotes a match, so that result is a matrix of 0s and 1s;
(4.3) Mark the unmatched pixels with rectangles
(4.3.1) Find the unmatched pixels in result with the findContours API function and highlight them; mark the highlighted pixels with rectangles, and form the rectangle-marked pixels into a transparent background image C'_ij, which represents the matching result of the preceding and subsequent grayscale images;
(4.3.2) Pass the transparent background image C'_ij, the subsequent image and the blank picture C_0 to the Copy function in turn, so that C'_ij and the subsequent image are copied onto C_0, yielding a rectangle-marked image that carries the goof information, i = 1, 2, …, j = 2, 3, ….
CN201911356569.0A 2019-12-25 2019-12-25 Automatic detection method for goof shots Active CN111144478B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911356569.0A CN111144478B (en) 2019-12-25 2019-12-25 Automatic detection method for goof shots

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911356569.0A CN111144478B (en) 2019-12-25 2019-12-25 Automatic detection method for goof shots

Publications (2)

Publication Number Publication Date
CN111144478A CN111144478A (en) 2020-05-12
CN111144478B (en) 2022-06-14

Family

ID=70520004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911356569.0A Active CN111144478B (en) 2019-12-25 2019-12-25 Automatic detection method for goof shots

Country Status (1)

Country Link
CN (1) CN111144478B (en)

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1991015921A1 (en) * 1990-04-11 1991-10-17 Multi Media Techniques Process and device for modifying a zone of successive images
WO2010111916A1 (en) * 2009-04-01 2010-10-07 索尼公司 Device and method for multiclass object detection
JP2010240085A (en) * 2009-04-03 2010-10-28 Mitsubishi Electric Corp Multileaf collimator observation device and radiotherapy apparatus
JP2011174265A (en) * 2010-02-24 2011-09-08 Miwa Lock Co Ltd Locking confirmation key to lock
WO2012058902A1 (en) * 2010-11-02 2012-05-10 中兴通讯股份有限公司 Method and apparatus for combining panoramic image
EP2602588A1 (en) * 2011-12-06 2013-06-12 Hexagon Technology Center GmbH Position and Orientation Determination in 6-DOF
CN102663743A (en) * 2012-03-23 2012-09-12 西安电子科技大学 Multi-camera cooperative character tracking method in complex scene
WO2015093147A1 (en) * 2013-12-19 2015-06-25 株式会社日立製作所 Multi-camera imaging system and method for combining multi-camera captured images
CN104463899A (en) * 2014-12-31 2015-03-25 北京格灵深瞳信息技术有限公司 Target object detecting and monitoring method and device
WO2017020559A1 (en) * 2015-08-05 2017-02-09 哈尔滨工业大学 Multi-type bga chip visual identification method based on row and column linear clustering
WO2018151356A1 (en) * 2017-02-15 2018-08-23 동명대학교산학협력단 Multiscale curvature-based visual vector model hashing method
CN108063932A (en) * 2017-11-10 2018-05-22 广州极飞科技有限公司 A kind of method and device of luminosity calibration
WO2019114617A1 (en) * 2017-12-12 2019-06-20 华为技术有限公司 Method, device, and system for fast capturing of still frame
CN108269271A (en) * 2018-01-15 2018-07-10 深圳市云之梦科技有限公司 A kind of clothes expose the false with human body image, match the method and system migrated
CN207832123U (en) * 2018-02-10 2018-09-07 青岛江成电子科技有限公司 A kind of flywheel shell workpiece thread detecting device
WO2019179200A1 (en) * 2018-03-22 2019-09-26 深圳岚锋创视网络科技有限公司 Three-dimensional reconstruction method for multiocular camera device, vr camera device, and panoramic camera device
CN208849866U (en) * 2018-10-15 2019-05-10 深圳市云开物联技术有限公司 A kind of more scene candid cameras
CN109886238A (en) * 2019-03-01 2019-06-14 湖北无垠智探科技发展有限公司 Unmanned plane Image Change Detection algorithm based on semantic segmentation
CN109894375A (en) * 2019-03-07 2019-06-18 东莞市雅创自动化科技有限公司 A kind of connection side steering between automatic more transfer dish of detection screening system
CN110021065A (en) * 2019-03-07 2019-07-16 杨晓春 A kind of indoor environment method for reconstructing based on monocular camera
CN110543867A (en) * 2019-09-09 2019-12-06 北京航空航天大学 crowd density estimation system and method under condition of multiple cameras

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hoang, A.T. et al.; "Pipeline scanning architecture with computation reduction for rectangle pattern matching in real-time traffic sign detection"; IEEE International Symposium on Circuits & Systems; 2014-07-28; pp. 1-3 *
Yao Siru; "Research on image inpainting forensics based on multi-connection features" (基于多连接特征的图像修复取证研究); China Master's Theses Full-text Database, Information Science & Technology; 2019-04-15 (No. 4); pp. I138-942 *

Also Published As

Publication number Publication date
CN111144478A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
WO2021208600A1 (en) Image processing method, smart device, and computer-readable storage medium
JP6730690B2 (en) Dynamic generation of scene images based on the removal of unwanted objects present in the scene
US8666191B2 (en) Systems and methods for image capturing
CN101416219B (en) Foreground/background segmentation in digital images
US10129485B2 (en) Methods and systems for generating high dynamic range images
WO2018058934A1 (en) Photographing method, photographing device and storage medium
US9300876B2 (en) Fill with camera ink
CN101515998A (en) Image processing apparatus, image processing method, and program
US20110273620A1 (en) Removal of shadows from images in a video signal
WO2021184302A1 (en) Image processing method and apparatus, imaging device, movable carrier, and storage medium
Joze et al. Imagepairs: Realistic super resolution dataset via beam splitter camera rig
AU2011205087A1 (en) Multi-hypothesis projection-based shift estimation
US9894285B1 (en) Real-time auto exposure adjustment of camera using contrast entropy
US11832018B2 (en) Image stitching in the presence of a full field of view reference image
CN102739953A (en) Image processing device, image processing method, and image processing program
CN103984942A (en) Object recognition method and mobile terminal
TW201911226A (en) Multi-camera capture image processing
CN105830091A (en) Systems and methods for generating composite images of long documents using mobile video data
CN100492088C (en) Automatic focusing method
US9094617B2 (en) Methods and systems for real-time image-capture feedback
CN109543530B (en) Blackboard writing position detection method, storage medium and system
CN111144478B (en) Automatic detection method for through lens
CN112037128B (en) Panoramic video stitching method
Zhou et al. Video text processing method based on image stitching
US20210075970A1 (en) Method and electronic device for capturing roi

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant