WO2019080061A1 - Camera device-based occlusion detection and repair device, and occlusion detection and repair method therefor - Google Patents

Camera device-based occlusion detection and repair device, and occlusion detection and repair method therefor

Info

Publication number
WO2019080061A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
occlusion
camera
color
repair
Prior art date
Application number
PCT/CN2017/107875
Other languages
French (fr)
Chinese (zh)
Inventor
谢俊
赵聪
杨松龄
陈爽新
Original Assignee
深圳市柔宇科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市柔宇科技有限公司 filed Critical 深圳市柔宇科技有限公司
Priority to CN201780092103.7A priority Critical patent/CN110770786A/en
Priority to PCT/CN2017/107875 priority patent/WO2019080061A1/en
Publication of WO2019080061A1 publication Critical patent/WO2019080061A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Definitions

  • the present invention relates to an image pickup apparatus, and more particularly to an occlusion detection and repair apparatus based on an image pickup apparatus and an occlusion detection and repair method thereof.
  • The camera quality of electronic devices is one of the main considerations for consumers when they choose to purchase an electronic device. In other words, excellent image quality becomes a major selling point of an electronic device. However, when an existing electronic device takes a picture, if a target object such as a finger or a stain blocks the camera of the electronic device, the captured image will contain a dark area and the rate of usable shots is low.
  • The embodiment of the invention discloses an occlusion detection and repair device and an occlusion detection and repair method thereof, which can detect occlusion and repair the occluded area of the captured image, effectively improving the rate of usable shots with a good repair result.
  • An embodiment of the invention discloses an occlusion detection and repair device based on an imaging device.
  • The occlusion detection and repair device includes: a camera unit that captures a first image; a detection module that detects whether a target object within a preset distance of the camera unit is present within the framing range of the camera unit; a memory that stores the framing range, the preset distance, and a preset matching degree, the memory further storing a plurality of second images captured by the camera unit; and a processor, to which the camera unit, the detection module, and the memory are each electrically connected. The processor is configured to: calculate an occlusion region of the first image when the detection module detects a target object within the preset distance within the framing range of the camera unit; extract first feature points of the first image; acquire the plurality of second images and extract, from each second image, second feature points corresponding to the first feature points; calculate the matching degree between the second feature points of each second image and the first feature points of the first image, and select one second image whose matching degree satisfies the preset matching degree as a repair source image; and calculate the position in the repair source image corresponding to the occlusion region, and repair the occlusion region of the first image using the image of the repair source image corresponding to the occlusion region of the first image.
  • An embodiment of the present invention also discloses an occlusion processing method based on an imaging device.
  • The occlusion processing method includes the steps of: capturing a first image; detecting whether a target object within a preset distance of a camera unit is present within the framing range of the camera unit; calculating an occlusion region of the first image when a target object within the preset distance is detected within the framing range of the camera unit; extracting first feature points of the first image; acquiring a plurality of second images and extracting, from each second image, second feature points corresponding to the first feature points; calculating the matching degree between the second feature points of each second image and the first feature points of the first image, and selecting one second image whose matching degree satisfies the preset matching degree as a repair source image; and calculating the position in the repair source image corresponding to the occlusion region, and repairing the occlusion region of the first image using the image of the repair source image corresponding to the occlusion region of the first image.
  • A computer readable storage medium stores a plurality of program instructions. When the program instructions are invoked and executed by a processor, the following steps are performed: capturing a first image; detecting whether a target object within a preset distance of the camera unit is present within the framing range of the camera unit; calculating an occlusion region of the first image when a target object within the preset distance is detected within the framing range of the camera unit; extracting first feature points of the first image; acquiring a plurality of second images and extracting, from each second image, second feature points corresponding to the first feature points; calculating the matching degree between the second feature points of each second image and the first feature points of the first image, and selecting one second image whose matching degree satisfies the preset matching degree as a repair source image; and calculating the position in the repair source image corresponding to the occlusion region, and repairing the occlusion region of the first image using the image of the repair source image corresponding to the occlusion region of the first image.
  • With the occlusion detection and repair device and the occlusion detection and repair method of the present invention, when the detection unit detects that the camera unit is occluded, the occluded region is repaired using images previously captured by the camera unit. Occlusion can therefore be detected in time so that it does not persist, and even when occlusion does occur it can be repaired promptly, improving the rate of usable shots with a good repair result.
  • FIG. 1 is a structural block diagram of an occlusion detection and repair apparatus according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of an imaging unit and a detection module of an occlusion detection and repair device according to an embodiment of the invention.
  • FIG. 3 is a schematic diagram of an imaging unit and a detection module of an occlusion detection and repair device according to another embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a minimum shooting distance, a common portion, and a non-common portion when an image is taken when the camera unit is a binocular camera according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of image comparison of the detection module of the occlusion detection and repair device when the detection module is an image detection unit according to an embodiment of the invention.
  • FIG. 6 is a flowchart of an occlusion detection and repair method according to an embodiment of the present invention.
  • Figure 7 is a sub-flow diagram of step S602 of Figure 6 in an embodiment.
  • Figure 8 is a sub-flow diagram of step S602 of Figure 6 in another embodiment.
  • FIG. 1 is a structural block diagram of an occlusion detection and repair device 100 based on an imaging device according to an embodiment of the present invention.
  • the occlusion detection repair device 100 is applied to an electronic device.
  • The electronic device includes, but is not limited to, a camera, a mobile phone, a tablet computer, a notebook computer, or a desktop computer, and may also be a wearable device such as a smart helmet or smart glasses.
  • the occlusion detection and repair device 100 includes a processor 10, a memory 20, an imaging unit 30, and a detection module 40.
  • the memory 20, the camera unit 30, and the detection module 40 are electrically connected to the processor 10, respectively.
  • the camera unit 30 is configured to take a photo or video for a shooting scene to obtain image or video information of the shooting scene. Specifically, the camera unit 30 is configured to capture a first image.
  • the camera unit 30 may include at least one camera 31.
  • In some embodiments, the camera unit 30 includes one camera 31.
  • In other embodiments, the camera unit 30 includes two cameras 31, i.e., a binocular camera. It can be understood that in still other embodiments, the camera unit 30 may include three or more cameras 31, set according to actual needs.
  • the memory 20 is configured to store a framing range, a preset distance, and a preset matching degree.
  • the framing range is a framing range of the imaging unit 30 at the time of shooting.
  • the preset distance is a preset distance between the camera unit 30 and the target when the target object is photographed.
  • the matching degree is the similarity between the two images. The higher the similarity, the higher the matching degree. Conversely, the lower the similarity, the lower the matching degree.
  • The preset matching degree is reached when the similarity between the two images reaches a predetermined level.
  • the memory also stores a plurality of second images.
  • the plurality of second images are images captured by the imaging unit 30.
  • The detection module 40 is configured to detect whether a target object within the framing range of the camera unit 30 is blocking, or is about to block, the camera 31 of the camera unit 30.
  • the processor 10 is configured to calculate an occlusion region of the first image when the detection module 40 detects the target within the framing range of the camera unit 30.
  • the detection module 40 includes an inductive detection unit 41.
  • the sensing detection unit 41 is electrically connected to the processor 10 .
  • The sensing detection unit 41 is configured to generate a sensing signal containing occlusion position information when it detects that a target object is about to approach the camera unit 30 or has already blocked part or all of the cameras 31 of the camera unit 30.
  • the processor 10 is configured to calculate an occlusion region of the first image according to occlusion position information in the sensing signal.
  • the sensing detection unit 41 includes at least one proximity sensor 411.
  • the proximity sensor 411 can also be a distance sensor.
  • the at least one proximity sensor 411 is disposed within a preset distance range around the camera 31.
  • the proximity sensor 411 is disposed between the two cameras 31.
  • The processor 10 calculates the occlusion area of the first image according to the position information of the proximity sensor 411 that senses the target object.
  • Specifically, when the camera unit 30 includes one camera 31, the at least one proximity sensor 411 is disposed within a preset distance range around the camera 31. When the camera unit 30 includes two cameras 31, the at least one proximity sensor 411 may also be disposed between the two adjacent cameras 31.
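  • For illustration only (not part of the patent text): the disclosure does not specify how a triggered sensor's position is mapped to image coordinates. A minimal sketch, assuming each proximity or touch sensor is associated with a pre-calibrated rectangle of the frame, might look like this (all names and values are hypothetical):

```python
# Hypothetical mapping from a triggered proximity/touch sensor to a coarse
# occlusion region of the first image. The per-sensor calibration table is an
# assumption; the patent only states that the processor derives the occlusion
# region from the sensor's position information.
from typing import Dict, Tuple

# (x, y, width, height) of the image area each sensor roughly covers,
# determined once by calibration for a given camera/sensor layout.
SENSOR_REGIONS: Dict[int, Tuple[int, int, int, int]] = {
    0: (0, 0, 640, 360),    # sensor near the top-left of the lens
    1: (640, 0, 640, 360),  # sensor near the top-right of the lens
}

def occlusion_region_from_sensor(sensor_id: int) -> Tuple[int, int, int, int]:
    """Return the approximate occluded rectangle for a triggered sensor."""
    return SENSOR_REGIONS[sensor_id]
```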
  • the sensing detection unit 41 includes a touch module 413 mounted on the at least one camera 31 of the camera unit 30 .
  • the touch module 413 can be a flexible touch unit.
  • The touch module 413 generates a sensing signal containing touch position coordinates when it senses that a target object is in contact with it.
  • the processor 10 calculates an occlusion region of the first image according to the touch position coordinates in the sensing signal.
  • the touch module 413 is also disposed within a preset distance range around the camera 31 of the camera unit 30.
  • The touch module 413 generates a sensing signal containing touch position coordinates when it senses a target object within the preset distance range around the camera 31.
  • the processor 10 determines that the camera unit 30 is about to be blocked according to the touch position coordinates, and issues an occlusion reminder.
  • the detection module 40 further includes an imaging detection unit 43.
  • the imaging detecting unit 43 is the imaging unit 30 itself.
  • the imaging unit 30 captures a third image.
  • the third image is an image captured by the imaging unit 30 prior to the first image and stored in the memory 20.
  • the memory 20 also stores a color threshold, a first difference threshold, and a connected number.
  • the color threshold is a maximum value of a color of an image captured when the camera 31 is blocked.
  • the first difference threshold is a difference between color values of corresponding positions of the two images.
  • The connected count is the number of mutually connected blocks whose difference is greater than or equal to the first difference threshold.
  • The processor 10 is further configured to cut the first image into a plurality of first small blocks M according to a predetermined size, and to cut the third image into a plurality of second small blocks N according to the predetermined size, wherein each first small block M corresponds to one second small block N; that is, the first small block M and the corresponding second small block N are images taken by the camera 31 at the same shooting position.
  • the processor 10 is further configured to calculate a color average of each of the first small block M and each of the second small blocks N.
  • the processor 10 is further configured to determine whether a color average value of each of the first small blocks M is smaller than the color threshold, that is, whether the color of the first small block M is dark.
  • The processor 10 is further configured to determine whether the difference between the color average of each first small block M and the color average of the corresponding second small block N is smaller than the first difference threshold; that is, whether the color of the first small block M is close to that of the corresponding second small block N, in which case the color change is small and the area may be occluded.
  • The processor 10 is further configured to mark the first small block M when the color average of the first small block M is smaller than the color threshold and the difference corresponding to the first small block M is smaller than the first difference threshold.
  • The processor 10 is further configured to determine, when the number of marked and mutually connected first small blocks M is greater than or equal to the connected count, that the area where these first small blocks M are located is the occlusion area, i.e., a dark area is formed.
  • It can be understood that when two marked first small blocks M are adjacent, the two first small blocks M are connected. It can also be understood that the first image and the third image described above are images taken by the same camera 31 of the camera unit 30.
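  • For illustration only (not part of the patent text): a minimal sketch of the block-based test described above, using NumPy and SciPy. The block size, the thresholds, and the use of a simple per-block mean as the "color average" are illustrative assumptions, not values from the disclosure.

```python
# Block-based occlusion detection: a block is marked when it is dark
# (mean < color_threshold) and barely changed with respect to the earlier
# third image (|mean_m - mean_n| < first_diff_threshold); groups of at least
# `connected_count` mutually connected marked blocks form the occlusion area.
import numpy as np
from scipy import ndimage

def detect_occlusion_blocks(first_img, third_img, block=32,
                            color_threshold=60.0,
                            first_diff_threshold=10.0,
                            connected_count=4):
    rows, cols = first_img.shape[0] // block, first_img.shape[1] // block
    marked = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            m = first_img[r*block:(r+1)*block, c*block:(c+1)*block].mean()
            n = third_img[r*block:(r+1)*block, c*block:(c+1)*block].mean()
            marked[r, c] = (m < color_threshold and
                            abs(m - n) < first_diff_threshold)
    # Keep only sufficiently large groups of mutually connected marked blocks.
    labels, num = ndimage.label(marked)
    occluded = np.zeros_like(marked)
    for lab in range(1, num + 1):
        if (labels == lab).sum() >= connected_count:
            occluded |= labels == lab
    return occluded  # boolean block mask of the suspected dark area
```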
  • the camera unit 30 is a binocular camera.
  • The third image captured by the camera unit 30 and the first image are images taken simultaneously by the binocular camera of the same shooting scene, and the first image and the third image are each divided into a common portion and a non-common portion according to a minimum shooting distance.
  • The occlusion region of the non-common portion is calculated as in the above embodiment, i.e., using an image previously captured by the camera 31 that captured the first image. For the occlusion region of the common portion, however, the following approach can be adopted, described in detail below.
  • the memory 20 also stores a second difference threshold.
  • The processor 10 is further configured to cut the common portion of the first image into a plurality of first small blocks M according to a predetermined size, and to cut the common portion of the third image into a plurality of second small blocks N according to the predetermined size, each first small block M corresponding to one second small block N; that is, each first small block M and the corresponding second small block N are images taken by the binocular camera of the same target.
  • the processor 10 is further configured to calculate a color average of each of the first small block M and each of the second small blocks N.
  • the processor 10 is further configured to determine whether a color average value of each of the first small blocks M is smaller than the color threshold, that is, whether the color of the first small block M is dark.
  • The processor 10 is further configured to determine whether the difference between the color average of each first small block M and the color average of the corresponding second small block N is greater than the second difference threshold; that is, whether the images taken by the binocular camera of the same target differ significantly, which may indicate that one of the cameras 31 is blocked.
  • The processor 10 is further configured to mark the first small block M when the color average of the first small block M is smaller than the color threshold and the difference between the first small block M and the corresponding second small block N is greater than the second difference threshold.
  • The processor 10 is further configured to determine, when the number of marked and mutually connected first small blocks M is greater than or equal to the connected count, that the area where these first small blocks M are located is the occlusion area.
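  • For illustration only (not part of the patent text): for the common portion of a binocular pair, the block test sketched above changes only in its comparison, as shown in the rough sketch below; the thresholds remain illustrative assumptions, and the marked blocks would be filtered by the same connected-count rule as before.

```python
# Common-portion variant for a binocular camera: a block is suspicious when it
# is dark in the first image AND differs strongly from the block seen by the
# other camera (difference GREATER than the second difference threshold).
import numpy as np

def mark_common_portion_blocks(first_common, third_common, block=32,
                               color_threshold=60.0,
                               second_diff_threshold=25.0):
    rows = first_common.shape[0] // block
    cols = first_common.shape[1] // block
    marked = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            m = first_common[r*block:(r+1)*block, c*block:(c+1)*block].mean()
            n = third_common[r*block:(r+1)*block, c*block:(c+1)*block].mean()
            marked[r, c] = (m < color_threshold and
                            abs(m - n) > second_diff_threshold)
    return marked  # feed into the same connected-count filtering as above
```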
  • the processor 10 is further configured to repair an occlusion region in the first image.
  • the processor 10 is configured to extract a first feature point of the first image.
  • the processor 10 is configured to extract a first feature point of the first image other than the occlusion region.
  • the processor 10 is configured to acquire the plurality of second images and extract a second feature point of each of the second images.
  • The first feature points and the second feature points may be SIFT (Scale Invariant Feature Transform) features, FAST (Features from Accelerated Segment Test) features, or ORB (Oriented FAST and Rotated BRIEF) features; at least one may be selected according to actual needs.
  • The processor 10 is configured to calculate the matching degree between the second feature points of each second image and the corresponding first feature points of the first image, and to select one second image whose matching degree satisfies the preset matching degree as the repair source image.
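  • For illustration only (not part of the patent text): the disclosure names SIFT, FAST, or ORB features but does not fix a matching-degree formula. A sketch using OpenCV's ORB, with the fraction of ratio-test matches as a stand-in for the matching degree (an assumption), might look like this:

```python
# Select a repair source image by feature matching. The "matching degree" used
# here (fraction of descriptor matches passing a ratio test) is an illustrative
# choice, not the patent's exact metric.
import cv2
import numpy as np

def _gray(img):
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img

def matching_degree(img_a, img_b, mask_a=None):
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(_gray(img_a), mask_a)  # skip occluded pixels
    kp_b, des_b = orb.detectAndCompute(_gray(img_b), None)
    if des_a is None or des_b is None:
        return 0.0
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des_a, des_b, k=2)
    good = [m for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    return len(good) / max(len(kp_a), 1)

def select_repair_source(first_img, second_imgs, occlusion_mask, preset=0.3):
    """Return the previously captured image with the highest matching degree
    that satisfies the preset matching degree, or None if none qualifies."""
    if not second_imgs:
        return None
    valid = cv2.bitwise_not(occlusion_mask)  # match features outside the occlusion
    scored = [(matching_degree(first_img, s, valid), s) for s in second_imgs]
    best_score, best_img = max(scored, key=lambda t: t[0])
    return best_img if best_score >= preset else None
```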
  • The processor 10 is configured to calculate the position in the repair source image corresponding to the occlusion region, and to repair the occlusion region of the first image using the image of the repair source image corresponding to the occlusion region of the first image.
  • the processor 10 is configured to perform matrix transformation on an image corresponding to the occlusion region in the repair source image.
  • the matrix transformation describes a conversion relationship between an image corresponding to the occlusion region and a corresponding pixel of the occlusion region in the repair source image, which is equivalent to a perspective transformation.
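  • For illustration only (not part of the patent text): a minimal sketch of this repair step with OpenCV, assuming the occlusion region is given as a binary mask. The homography is estimated from ORB matches and used as the perspective transform; the feature and RANSAC parameters are illustrative.

```python
# Repair the occluded area: estimate a homography mapping the repair source
# image onto the first image, warp the source, and copy only the pixels that
# fall inside the occlusion mask.
import cv2
import numpy as np

def _gray(img):
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img

def repair_occlusion(first_img, repair_src, occlusion_mask):
    """occlusion_mask: uint8 image, 255 where the first image is occluded."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(_gray(first_img),
                                     cv2.bitwise_not(occlusion_mask))
    kp2, des2 = orb.detectAndCompute(_gray(repair_src), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    matches = sorted(matches, key=lambda m: m.distance)[:200]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Perspective transform from repair-source coordinates to the first image.
    H, _ = cv2.findHomography(pts2, pts1, cv2.RANSAC, 5.0)

    h, w = first_img.shape[:2]
    warped = cv2.warpPerspective(repair_src, H, (w, h))
    repaired = first_img.copy()
    repaired[occlusion_mask > 0] = warped[occlusion_mask > 0]
    return repaired
```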
  • The processor 10 is further configured to perform color adjustment on the repair source image before calculating the position of the occlusion region in the repair source image, so that the color of the repair source image is consistent with that of the first image, where the color adjustment includes color gamut adjustment, brightness adjustment, and the like. Because the white balance parameters of the first image and the repair source image may differ, there can be slight differences between them; performing color adjustment on the repair source image before repairing the occlusion region therefore allows the occlusion region to be repaired more accurately.
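  • For illustration only (not part of the patent text): the disclosure does not prescribe a particular color-adjustment algorithm. One simple option, sketched below as an assumption, is to match the per-channel mean and standard deviation of the repair source image to those of the first image before the repair.

```python
# Simple color adjustment: per-channel mean/standard-deviation matching of the
# repair source image to the first image (one of many possible choices).
import numpy as np

def match_colors(repair_src, first_img):
    src = repair_src.astype(np.float32)
    ref = first_img.astype(np.float32)
    out = np.empty_like(src)
    for ch in range(src.shape[2]):
        s_mean, s_std = src[..., ch].mean(), src[..., ch].std() + 1e-6
        r_mean, r_std = ref[..., ch].mean(), ref[..., ch].std()
        out[..., ch] = (src[..., ch] - s_mean) / s_std * r_std + r_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```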
  • The processor 10 is further configured to select, as the repair source image, the second image whose matching degree satisfies the preset matching degree and is the highest.
  • When the camera unit 30 is a binocular camera, the processor 10 is further configured to preferentially select the second image from images previously captured by the camera 31 that captured the first image; when none of the images previously captured by that camera 31 satisfies the preset matching degree, the second image is selected from images captured by the other camera 31 of the binocular camera.
  • the processor 10 can be a microcontroller, a microprocessor, a single chip, a digital signal processor, or the like.
  • the memory 20 can be a computer readable storage medium such as a memory card, a solid state memory, a micro hard disk, an optical disk, or the like. In some embodiments, the memory 20 stores a number of program instructions that can be executed by the processor 10 to perform the aforementioned functions.
  • FIG. 6 is a flowchart of an occlusion detection and repair method according to an embodiment of the present invention.
  • the occlusion detection repairing method is applied to the occlusion detection repairing apparatus 100 described above, and the order of execution is not limited to the order shown in FIG. 6.
  • the method includes the steps of:
  • In step S601, it is detected whether a target object is blocking, or is about to block, the camera of the camera unit 30. If yes, the process proceeds to step S602; otherwise, the process ends.
  • In some embodiments, the sensing detection unit 41 generates a sensing signal containing occlusion position information when it detects that a target object is about to approach the camera unit 30 or has already blocked part or all of the cameras 31 of the camera unit 30.
  • The sensing detection unit 41 may be a proximity sensor (or distance sensor) 411 disposed within a preset distance range around the at least one camera 31 and/or between the cameras 31.
  • the sensing detection unit 41 can also be a touch module 413 mounted on at least one camera 31.
  • In other embodiments, the camera unit 30 itself is used to detect whether a target object is approaching the camera unit 30 or has partially or completely blocked the camera 31 of the camera unit 30.
  • Specifically, when it is detected that a target object within the preset distance of the camera unit 30 is present within the framing range of the camera unit 30, it is determined that the target object is blocking or about to block the camera of the camera unit 30.
  • In step S602, the occlusion area of the first image is calculated.
  • the processor 10 is configured to calculate an occlusion region of the first image according to occlusion position information in the sensing signal. In other embodiments, the processor 10 is configured to calculate an occlusion region according to the first image captured by the imaging unit 30 and the third image captured previously.
  • In step S603, the processor 10 repairs the occlusion area in the first image.
  • Specifically, the processor 10 extracts first feature points of the first image, acquires the plurality of second images, and extracts, from each second image, second feature points corresponding to the first feature points.
  • The processor 10 calculates the matching degree between the second feature points of each second image and the corresponding first feature points of the first image, and selects one second image whose matching degree satisfies the preset matching degree as the repair source image.
  • The processor 10 calculates the position in the repair source image corresponding to the occlusion area, and repairs the occlusion area of the first image using the image of the repair source image corresponding to the occlusion area of the first image.
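  • For illustration only (not part of the patent text): composing the hypothetical helpers sketched earlier (detect_occlusion_blocks, select_repair_source, match_colors, repair_occlusion), steps S602 and S603 could be chained roughly as follows once step S601 has flagged a possible occlusion; the block size is again an assumption.

```python
# End-to-end sketch of steps S602-S603, reusing the helper functions from the
# earlier sketches (assumed to be defined or importable in the same module).
import cv2
import numpy as np

def process_frame(first_img, third_img, second_imgs, block=32):
    # S602: locate the occlusion by comparing block color averages.
    block_mask = detect_occlusion_blocks(first_img, third_img, block=block)
    if not block_mask.any():
        return first_img  # nothing to repair
    occlusion_mask = cv2.resize(block_mask.astype(np.uint8) * 255,
                                (first_img.shape[1], first_img.shape[0]),
                                interpolation=cv2.INTER_NEAREST)

    # S603: pick a repair source, align its colors, then patch via homography.
    src = select_repair_source(first_img, second_imgs, occlusion_mask)
    if src is None:
        return first_img  # no stored image meets the preset matching degree
    src = match_colors(src, first_img)
    return repair_occlusion(first_img, src, occlusion_mask)
```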
  • Referring to FIG. 7, in one embodiment, step S602 includes:
  • In step S6021, the processor 10 cuts the first image into a plurality of first small blocks M according to a predetermined size.
  • In step S6022, the processor 10 acquires a third image and cuts the third image into a plurality of second small blocks N according to the predetermined size, wherein each first small block M corresponds to one second small block N.
  • the third image is an image captured by the imaging unit 30 prior to the first image and stored in the memory 20.
  • the memory 20 also stores a color threshold, a first difference threshold, and a connected number.
  • In step S6023, the processor 10 calculates the color average of each first small block M and each second small block N.
  • In step S6024, the processor 10 determines whether the color average of each first small block M is smaller than the color threshold; if yes, the process proceeds to step S6025, otherwise it ends.
  • In step S6025, the processor 10 determines whether the difference between the color average of each first small block M and the color average of the corresponding second small block N is smaller than the first difference threshold; if yes, the process proceeds to step S6026, otherwise it ends.
  • In step S6026, the processor 10 marks the first small block M.
  • In step S6027, when the number of marked and mutually connected first small blocks M is greater than or equal to the connected count, the processor 10 determines that the area where these first small blocks M are located is the occlusion area.
  • When the camera unit 30 is a binocular camera, the first image and the third image are images taken simultaneously by the binocular camera of the same shooting scene, and the first image and the third image are each divided into a common portion and a non-common portion according to a minimum shooting distance.
  • the calculation of the occlusion region of the non-common portion is the same as the above embodiment, that is, the calculation of the occlusion region is performed using the image previously captured by the camera 31 that captured the first image.
  • For the calculation of the occlusion area of the common portion, however, the manner shown in FIG. 8 can be adopted, which is described in detail below.
  • Referring to FIG. 8, in another embodiment, step S602 includes:
  • In step S6021', the processor 10 cuts the common portion of the first image into a plurality of first small blocks M according to a predetermined size.
  • In step S6022', the processor 10 cuts the common portion of the third image into a plurality of second small blocks N according to the predetermined size, each first small block M corresponding to one second small block N; that is, each first small block M and the corresponding second small block N are images taken by the binocular camera of the same target.
  • In step S6023', the processor 10 calculates the color average of each first small block M and each second small block N.
  • In step S6024', the processor 10 determines whether the color average of each first small block M is smaller than the color threshold; if yes, the process proceeds to step S6025', otherwise it ends.
  • In step S6025', the processor 10 determines whether the difference between the color average of each first small block M and the color average of the corresponding second small block N is greater than the second difference threshold; if yes, the process proceeds to step S6026', otherwise it ends.
  • In step S6026', the processor 10 marks the first small block M.
  • In step S6027', when the number of marked and mutually connected first small blocks M is greater than or equal to the connected count, the processor 10 determines that the area where these first small blocks M are located is the occlusion area.
  • The present invention further provides a computer readable storage medium in which a plurality of program instructions are stored. The program instructions are invoked and executed by the processor 10 to perform the steps of any of the methods of FIGS. 6-8, including calculating the occlusion region of the first image and repairing the occlusion region of the first image.
  • The computer storage medium is the memory 20, and may be any storage device capable of storing information, such as a memory card, a solid state memory, a micro hard disk, or an optical disk.
  • The occlusion detection and repair device based on an imaging device of the present invention and its occlusion detection and repair method can perform occlusion detection when capturing an image and repair the occlusion region of the captured image, improving the rate of usable shots with a good repair effect.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

Disclosed in the present application is a camera device-based occlusion detection and repair method, said method comprising the following steps: photographing a first image; detecting whether there is a target object, which is at a preset distance away from a camera unit, within a framing range of the camera unit; if so, calculating an occlusion area of the first image; extracting a first feature point of the first image; acquiring a plurality of second images photographed by the camera unit, extracting from each of the second images, second feature points corresponding to the first feature point; calculating the matching degree between the second feature point of each of the second images and the corresponding first feature point of the first image, selecting a second image, the matching degree of which satisfies a preset matching degree, as a repair source image; and calculating a corresponding position of the occlusion area in the repair source image, using an image of the repair source image corresponding to the occlusion area of the first image to repair the occlusion area of the first image. The present application can effectively perform occlusion detection and image repair, has a good repair effect and a high filming rate.

Description

基于摄像设备的遮挡检测修复装置及其遮挡检测修复方法Occlusion detection and repair device based on camera device and occlusion detection and repair method thereof 技术领域Technical field
本发明涉及一种摄像设备,尤其涉及一种基于摄像设备的遮挡检测修复装置及其遮挡检测修复方法。The present invention relates to an image pickup apparatus, and more particularly to an occlusion detection and repair apparatus based on an image pickup apparatus and an occlusion detection and repair method thereof.
背景技术Background technique
电子设备的摄像品质是消费者在选择购买电子设备时的主要考虑因素之一。换句话说,如果电子设备具有卓越的摄像品质,将成为所述电子设备的一大卖点。然,现有的电子设备在摄像时,如果有目标物,例如手指、污渍等挡住电子设备的摄像头时,所拍摄的影片将会出现暗区,成片率低。The camera quality of electronic devices is one of the main considerations for consumers when they choose to purchase electronic devices. In other words, if an electronic device has excellent image quality, it will become a major selling point of the electronic device. However, when the existing electronic device is photographed, if there is a target object, such as a finger, a stain, or the like, the camera of the electronic device is blocked, the film taken will have a dark area, and the filming rate is low.
发明内容Summary of the invention
本发明实施例公开一种遮挡检测修复装置及其遮挡检测修复方法,能够检测遮挡及对影片的遮挡区域修复,可有效提高成片率,修复效果好。The embodiment of the invention discloses an occlusion detection and repairing device and an occlusion detection and repairing method thereof, which can detect occlusion and repair the occlusion area of the film, can effectively improve the splicing rate, and the repairing effect is good.
本发明实施例公开的基于摄像设备的遮挡检测修复装置。所述遮挡检测修复装置包括:摄像单元,所述摄像单元拍摄第一图像;侦测模组,所述侦测模组侦测在所述摄像单元的取景范围内是否存在与所述摄像单元在预设距离内的目标物;存储器,所述存储器存储所述取景范围、所述预设距离及预设匹配度,所述存储器还存储多个第二图像,所述多个第二图像为所述摄像单元拍摄的图像;及处理器,所述摄像单元、所述侦测模组及所述存储器分别与所述处理器电性连接,所述处理器用于:在所述侦测模组侦测到所述摄像单元的所述取景范围内存在与所述摄像单元在所述预设距离内的目标物时,计算所述第一图像的遮挡区域;提取所述第一图像的第一特征点;获取所述多个第二图像,并从每个所述第二图像中提取与所述第一特征点相对应的第二特征点;计算每个所述第二图像的第二特征点与所述第一图像的第一特征点之间的匹配度,并选择其中一个所述匹配度满足所述预设匹配度的所述第二图像作为修复源图像;及计算所述遮挡区域对应在所述修复源图像中的位置,并采用所述修复源 图像中对应所述第一图像的遮挡区域的图像修复所述第一图像的遮挡区域。The occlusion detection and repair device based on the imaging device disclosed in the embodiment of the invention. The occlusion detection and repairing device includes: an image capturing unit, the image capturing unit captures a first image; and a detecting module, wherein the detecting module detects whether the camera unit is present in the framing range of the camera unit a target within a preset distance; a memory storing the framing range, the preset distance, and a preset matching degree, the memory further storing a plurality of second images, wherein the plurality of second images are An image captured by the camera unit; and a processor, the camera unit, the detection module, and the memory are respectively electrically connected to the processor, and the processor is configured to: detect in the detection module Calculating an occlusion region of the first image when the target object within the preset distance is within the framing range of the camera unit; and extracting a first feature of the first image Obtaining the plurality of second images, and extracting second feature points corresponding to the first feature points from each of the second images; calculating second feature points of each of the second images With the first image Matching degree between a feature point, and selecting one of the second images whose matching degree satisfies the preset matching degree as a repair source image; and calculating the occlusion region corresponding to the repair source image Location and use the repair source An image of the occlusion region corresponding to the first image in the image repairs an occlusion region of the first image.
本发明实施例公开的基于摄像设备的遮挡处理方法。所述遮挡处理方法包括步骤:拍摄第一图像;侦测在一摄像单元的取景范围内是否存在与所述摄像单元在预设距离内的目标物;在侦测到所述摄像单元的所述取景范围内存在与所述摄像单元在所述预设距离内的目标物时,计算所述第一图像的遮挡区域;提取所述第一图像的第一特征点;获取多个第二图像,并从每个所述第二图像中提取与所述第一特征点相对应的第二特征点;计算每个所述第二图像的第二特征点与所述第一图像的第一特征点之间的匹配度,并选择其中一个所述匹配度满足所述预设匹配度的所述第二图像作为修复源图像;及计算所述遮挡区域对应在所述修复源图像中的位置,并采用所述修复源图像中对应所述第一图像的遮挡区域的图像修复所述第一图像的遮挡区域。The occlusion processing method based on the imaging device disclosed in the embodiment of the present invention. The occlusion processing method includes the steps of: capturing a first image; detecting whether a target object within a preset range of the image capturing unit is within a preset distance; and detecting the image capturing unit Calculating an occlusion region of the first image when a target object within the preset distance is within the framing range; extracting a first feature point of the first image; acquiring a plurality of second images, And extracting, from each of the second images, a second feature point corresponding to the first feature point; calculating a second feature point of each of the second images and a first feature point of the first image a degree of matching between the two, and selecting one of the second images whose matching degree satisfies the preset matching degree as a repair source image; and calculating a position of the occlusion region corresponding to the repair source image, and An occlusion region of the first image is repaired using an image of the occlusion region of the repair source image corresponding to the first image.
一种计算机可读存储介质,所述计算机可读存储介质中存储有若干程序指令,所述若干程序指令供处理器调用执行后,执行步骤:拍摄第一图像;侦测在一摄像单元的取景范围内是否存在与所述摄像单元在预设距离内的目标物;在侦测到所述摄像单元的所述取景范围内存在与所述摄像单元在所述预设距离内的目标物时,计算所述第一图像的遮挡区域;提取所述第一图像的第一特征点;获取多个第二图像,并从每个所述第二图像中提取与所述第一特征点相对应的第二特征点;计算每个所述第二图像的第二特征点与所述第一图像的第一特征点之间的匹配度,并选择其中一个所述匹配度满足所述预设匹配度的所述第二图像作为修复源图像;及计算所述遮挡区域对应在所述修复源图像中的位置,并采用所述修复源图像中对应所述第一图像的遮挡区域的图像修复所述第一图像的遮挡区域。A computer readable storage medium, wherein the computer readable storage medium stores a plurality of program instructions, where the program instructions are executed by the processor, and the step of: capturing the first image; detecting the framing of the camera unit Whether there is a target within a predetermined distance from the camera unit in the range; when it is detected that the target object within the preset distance of the camera unit is within the view range of the camera unit, Calculating an occlusion region of the first image; extracting a first feature point of the first image; acquiring a plurality of second images, and extracting, corresponding to the first feature point, from each of the second images a second feature point; calculating a matching degree between the second feature point of each of the second images and the first feature point of the first image, and selecting one of the matching degrees to satisfy the preset matching degree The second image is used as a repair source image; and calculating a position of the occlusion region corresponding to the repair source image, and adopting an image of the occlusion region corresponding to the first image in the repair source image Multiplexing the occlusion region of the first image.
本发明的遮挡检测修复装置及其遮挡检测修复方法,在通过侦测单元侦测到摄像单元有遮挡时,采用所述摄像单元之前拍摄的图像进行遮挡区域修复,能够及时检测到遮挡,避免持续遮挡,并且即使出现遮挡,也能够及时修复,提高成片率,修复效果好。 The occlusion detection and repair device and the occlusion detection and repair method of the present invention, when the detection unit detects that the camera unit is occluded, uses the image captured by the camera unit to perform occlusion repair, and can detect occlusion in time to avoid continuation. Occlusion, and even if there is occlusion, it can be repaired in time, improve the filming rate, and the repair effect is good.
附图说明DRAWINGS
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings to be used in the embodiments will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention. Those skilled in the art can also obtain other drawings based on these drawings without paying any creative work.
图1为本发明一实施例中的遮挡检测修复装置的结构框图。FIG. 1 is a structural block diagram of an occlusion detection and repair apparatus according to an embodiment of the present invention.
图2为本发明一实施例中的遮挡检测修复装置的摄像单元及侦测模组的示意图。2 is a schematic diagram of an imaging unit and a detection module of an occlusion detection and repair device according to an embodiment of the invention.
图3为本发明另一实施例中的遮挡检测修复装置的摄像单元及侦测模组的示意图。FIG. 3 is a schematic diagram of an imaging unit and a detection module of an occlusion detection and repair device according to another embodiment of the present invention.
图4为本发明一实施例中的摄像单元为双目摄像头时所拍摄图像时最小拍摄距离、公共部分及非公共部分的示意图。4 is a schematic diagram of a minimum shooting distance, a common portion, and a non-common portion when an image is taken when the camera unit is a binocular camera according to an embodiment of the present invention.
图5为本发明一实施例中的遮挡检测修复装置的侦测模组为摄像侦测单元时其图像比对的示意图。FIG. 5 is a schematic diagram of image comparison of the detection module of the occlusion detection and repair device when the detection module is an image detection unit according to an embodiment of the invention.
图6为本发明一实施例中的遮挡检测修复方法的流程图。FIG. 6 is a flowchart of an occlusion detection and repair method according to an embodiment of the present invention.
图7为一实施例中图6中步骤S602的子流程图。Figure 7 is a sub-flow diagram of step S602 of Figure 6 in an embodiment.
图8为另一实施例中图6中步骤S602的子流程图。Figure 8 is a sub-flow diagram of step S602 of Figure 6 in another embodiment.
具体实施方式Detailed ways
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。The technical solutions in the embodiments of the present invention are clearly and completely described in the following with reference to the accompanying drawings in the embodiments of the present invention. It is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative efforts are within the scope of the present invention.
请参阅图1,为本发明一实施例中的基于摄像设备的遮挡检测修复装置100的结构框图。所述遮挡检测修复装置100应用于电子设备上。所述电子设备包括但不限于摄像机、手机、平板电脑、笔记本电脑、桌面型电脑等,也可 以为智能头盔,智能眼镜等穿戴式设备等。所述遮挡检测修复装置100包括处理器10、存储器20、摄像单元30和侦测模组40。所述存储器20、摄像单元30和侦测模组40分别与所述处理器10电性连接。Please refer to FIG. 1 , which is a structural block diagram of an occlusion detection and repair device 100 based on an imaging device according to an embodiment of the present invention. The occlusion detection repair device 100 is applied to an electronic device. The electronic device includes but is not limited to a camera, a mobile phone, a tablet computer, a notebook computer, a desktop computer, etc. Think of smart helmets, smart glasses and other wearable devices. The occlusion detection and repair device 100 includes a processor 10, a memory 20, an imaging unit 30, and a detection module 40. The memory 20, the camera unit 30, and the detection module 40 are electrically connected to the processor 10, respectively.
所述摄像单元30用于针对一拍摄场景进行拍照或者录影以获取该拍摄场景的图像或视频信息。具体的,所述摄像单元30用于拍摄第一图像。请一并参考图2,所述摄像单元30可以包括至少一个摄像头31。在一些实施例中,所述摄像单元30包括一个摄像头31。在另一些实施例中,所述摄像单元30包括两个摄像头31,即双目摄像头。可以理解,在其它实施例中,所述摄像单元30可以包括三个及三个以上的摄像头31,具体可根据实际需要设置。The camera unit 30 is configured to take a photo or video for a shooting scene to obtain image or video information of the shooting scene. Specifically, the camera unit 30 is configured to capture a first image. Referring to FIG. 2 together, the camera unit 30 may include at least one camera 31. In some embodiments, the camera unit 30 includes a camera 31. In other embodiments, the camera unit 30 includes two cameras 31, a binocular camera. It can be understood that in other embodiments, the camera unit 30 can include three or more cameras 31, which can be specifically set according to actual needs.
所述存储器20用于存储取景范围、预设距离及预设匹配度。其中,所述取景范围为所述摄像单元30在拍摄时的取景范围。所述预设距离为所述摄像单元30拍摄目标物时与目标物之间的预设距离。所述匹配度为两幅图像之间的相似度,相似度越高,所述匹配度越高,反之,相似度越低,则所述匹配度越低。所述预设匹配度即两幅图像之间的相似度达到预定的相似度。所述存储器还存储多个第二图像。其中,所述多个第二图像为所述摄像单元30拍摄的图像。The memory 20 is configured to store a framing range, a preset distance, and a preset matching degree. The framing range is a framing range of the imaging unit 30 at the time of shooting. The preset distance is a preset distance between the camera unit 30 and the target when the target object is photographed. The matching degree is the similarity between the two images. The higher the similarity, the higher the matching degree. Conversely, the lower the similarity, the lower the matching degree. The preset matching degree, that is, the similarity between the two images reaches a predetermined similarity. The memory also stores a plurality of second images. The plurality of second images are images captured by the imaging unit 30.
所述侦测模组40用于侦测在所述摄像单元30的所述取景范围内是否有目标物遮挡或者即将遮挡所述摄像单元30的摄像头31。当在所述摄像单元30的所述取景范围内,所述侦测模组40侦测到目标物时,所述处理器10用于计算所述第一图像的遮挡区域。The detecting module 40 is configured to detect whether there is a target object in the framing range of the image capturing unit 30 or that the camera 31 of the camera unit 30 is to be blocked. The processor 10 is configured to calculate an occlusion region of the first image when the detection module 40 detects the target within the framing range of the camera unit 30.
在一些实施例中,所述侦测模组40包括感应侦测单元41。所述感应侦测单元41与所述处理器10电性连接。所述感应侦测单元41用于在侦测到有目标物即将靠近所述摄像单元30或者已经遮住所述摄像单元30的部分或者全部摄像头31时,产生包含遮挡位置信息的感应信号。所述处理器10用于根据所述感应信号中的遮挡位置信息计算所述第一图像的遮挡区域。In some embodiments, the detection module 40 includes an inductive detection unit 41. The sensing detection unit 41 is electrically connected to the processor 10 . The sensing detection unit 41 is configured to generate a sensing signal including occlusion position information when detecting that a target object is about to approach the imaging unit 30 or has partially blocked all or part of the camera 31 of the imaging unit 30. The processor 10 is configured to calculate an occlusion region of the first image according to occlusion position information in the sensing signal.
请一并参考图2,在一些实施例中,所述感应侦测单元41包括至少一个接近传感器411。该接近传感器411也可为距离传感器。所述至少一个接近传感器411设置在所述摄像头31的周围预设距离范围内。在另一实施例中,所述接近传感器411设置在两个摄像头31之间。所述处理器10根据感应到所述 目标物的所述接近传感器411的位置信息计算所述第一图像的遮挡区域。具体地,当所述摄像单元30包括一个摄像头31时,所述至少一个接近传感器411设置在所述摄像头31的周围预设距离范围内。当所述摄像单元30包括两个摄像头31时,所述至少一个接近传感器411还可以设置在相邻两个摄像31头之间的位置上。Referring to FIG. 2 together, in some embodiments, the sensing detection unit 41 includes at least one proximity sensor 411. The proximity sensor 411 can also be a distance sensor. The at least one proximity sensor 411 is disposed within a preset distance range around the camera 31. In another embodiment, the proximity sensor 411 is disposed between the two cameras 31. The processor 10 senses the The position information of the proximity sensor 411 of the target calculates an occlusion area of the first image. Specifically, when the imaging unit 30 includes one camera 31, the at least one proximity sensor 411 is disposed within a preset distance range around the camera 31. When the camera unit 30 includes two cameras 31, the at least one proximity sensor 411 may also be disposed at a position between two adjacent cameras 31.
在一些实施例中,请一并参考图3,所述感应侦测单元41包括安装在所述摄像单元30的所述至少一个摄像头31上的触控模块413。可以理解,所述触控模块413可以为柔性触控单元。所述触控模块413在感应到有目标物与其接触时产生包含触控位置坐标的感应信号。所述处理器10根据所述感应信号中的触控位置坐标计算所述第一图像的遮挡区域。In some embodiments, referring to FIG. 3 , the sensing detection unit 41 includes a touch module 413 mounted on the at least one camera 31 of the camera unit 30 . It can be understood that the touch module 413 can be a flexible touch unit. The touch module 413 generates an inductive signal including touch position coordinates when sensing that a target object is in contact therewith. The processor 10 calculates an occlusion region of the first image according to the touch position coordinates in the sensing signal.
在一些实施例中,所述触控模块413还设置在所述摄像单元30的所述摄像头31周围的预设距离范围内。所述触控模块413在感应到目标物在摄像头31周围的预设距离范围内时便产生包含触控位置坐标的感应信号。所述处理器10根据所述触控位置坐标判断所述摄像单元30即将被遮挡,并发出遮挡提醒。In some embodiments, the touch module 413 is also disposed within a preset distance range around the camera 31 of the camera unit 30. The touch module 413 generates an inductive signal including touch position coordinates when sensing a target within a preset distance range around the camera 31. The processor 10 determines that the camera unit 30 is about to be blocked according to the touch position coordinates, and issues an occlusion reminder.
在一些实施例中,所述侦测模组40还包括摄像侦测单元43。所述摄像侦测单元43即为所述摄像单元30本身。所述摄像单元30拍摄第三图像。所述第三图像为所述摄像单元30先于所述第一图像所拍摄并存储在所述存储器20中的图像。可以理解,所述存储器20还存储颜色阈值、第一差值阈值和连通个数。其中,所述颜色阈值为当摄像头31被遮挡时所拍摄图像的颜色最大值。所述第一差值阈值为两幅图像对应位置的颜色值之间的差值。所述连通个数为所述差值大于或等于所述第一差值阈值且相互连通的区块个数。In some embodiments, the detection module 40 further includes an imaging detection unit 43. The imaging detecting unit 43 is the imaging unit 30 itself. The imaging unit 30 captures a third image. The third image is an image captured by the imaging unit 30 prior to the first image and stored in the memory 20. It can be understood that the memory 20 also stores a color threshold, a first difference threshold, and a connected number. Wherein, the color threshold is a maximum value of a color of an image captured when the camera 31 is blocked. The first difference threshold is a difference between color values of corresponding positions of the two images. The number of connections is the number of blocks in which the difference is greater than or equal to the first difference threshold and communicates with each other.
请一并参考图4和图5,所述处理器10还用于将所述第一图像按照预定尺寸切割为若干第一小块M,及将所述第三图像按照所述预定尺寸切割为若干第二小块N,其中,每个所述第一小块M与其中一个所述第二小块N对应,即所述第一小块M与对应的所述第二小块N为摄像头31在同一拍摄位置所拍摄的图像。所述处理器10还用于计算每个所述第一小块M和每个所述第二小块N的颜色平均值。所述处理器10还用于判断每个所述第一小块M的颜色平均值是否小于所述颜色阈值,即所述第一小块M的颜色是否偏暗。所述处理 器10还用于判断每个所述第一小块M的颜色平均值与对应的所述第二小块N的颜色平均值之间的差值是否小于所述第一差值阈值,即所述第一小块M与对应的所述第二小块N的颜色接近,颜色变化比较小,则该区域可能被遮挡。所述处理器10还用于在所述第一小块M的颜色平均值小于所述颜色阈值,且所述第一小块M对应的所述差值小于所述第一差值阈值时,将所述第一小块M做标记。所述处理器10还用于在做过标记且相互连通的所述第一小块M的个数大于或等于所述连通个数时,确定该些所述第一小块M所在的区域为遮挡区域,即形成暗区。可以理解,当做过标记的两个第一小块M相邻,则所述两个第一小块M相连通。可以理解,上述第一图像和第三图像为所述摄像单元30的同一摄像头31拍摄的图像。Referring to FIG. 4 and FIG. 5 together, the processor 10 is further configured to cut the first image into a plurality of first small blocks M according to a predetermined size, and cut the third image into the predetermined size as a plurality of second small blocks N, wherein each of the first small blocks M corresponds to one of the second small blocks N, that is, the first small block M and the corresponding second small block N are cameras 31 Images taken at the same shooting position. The processor 10 is further configured to calculate a color average of each of the first small block M and each of the second small blocks N. The processor 10 is further configured to determine whether a color average value of each of the first small blocks M is smaller than the color threshold, that is, whether the color of the first small block M is dark. The treatment The device 10 is further configured to determine whether a difference between a color average value of each of the first small blocks M and a color average value of the corresponding second small block N is smaller than the first difference threshold, that is, The first small block M is close to the color of the corresponding second small block N, and the color change is relatively small, and the area may be occluded. The processor 10 is further configured to: when the color average of the first small block M is smaller than the color threshold, and the difference corresponding to the first small block M is smaller than the first difference threshold, The first small piece M is marked. The processor 10 is further configured to determine, when the number of the first small blocks M that are marked and communicated with each other is greater than or equal to the number of connected pieces, determine an area where the first small blocks M are located. The occlusion area forms a dark area. It can be understood that when the two first small blocks M that have been marked are adjacent, the two first small blocks M are in communication. It can be understood that the first image and the third image described above are images taken by the same camera 31 of the imaging unit 30.
在一些实施例中,所述摄像单元30为双目摄像头。其中,所述摄像单元30拍摄的第三图像与所述第一图像为所述双目摄像头针对同一拍摄场景同时拍摄的图像,所述第一图像和所述第三图像按照一最小拍摄距离被划分为公共部分和非公共部分。其中,所述非公共部分的遮挡区域的计算同上述实施例,即,采用由拍摄所述第一图像的所述摄像头31在先拍摄的图像进行遮挡区域的计算。但对于所述公共部分的遮挡区域计算,可采用下述方式,详述如下。In some embodiments, the camera unit 30 is a binocular camera. The third image captured by the camera unit 30 and the first image are images taken by the binocular camera simultaneously for the same shooting scene, and the first image and the third image are followed by a minimum shooting distance. Divided into public and non-public parts. The calculation of the occlusion region of the non-common portion is the same as the above embodiment, that is, the calculation of the occlusion region by the image captured by the camera 31 that captures the first image is performed. However, for the calculation of the occlusion area of the common portion, the following manner can be adopted, which is described in detail below.
所述存储器20还存储第二差值阈值。所述处理器10还用于将所述第一图像的公共部分按照预定尺寸切割为若干第一小块M,及将所述第三图像的公共部分按照所述预定尺寸切割为若干第二小块N,每个所述第一小块M与其中一个所述第二小块N对应,即每个所述第一小块M与对应的所述第二小块N为所述双目摄像头针对同一目标拍摄的图像。所述处理器10还用于计算每个所述第一小块M和每个所述第二小块N的颜色平均值。所述处理器10还用于判断每个所述第一小块M的颜色平均值是否小于所述颜色阈值,即所述第一小块M的颜色是否偏暗。所述处理器10还用于判断每个所述第一小块M的颜色平均值与对应的所述第二小块N的颜色平均值之间的差值是否大于所述第二差值阈值,即所述双目摄像头针对同一目标拍摄的图像差别较大,因此,可能存在其中一个摄像头31被遮挡的情况。所述处理器10还用于在所述第一小块M的颜色平均值小于所述颜色阈值,且所述第一小块M与对应的所述第二小块N之间的差值大于所述第二差值阈值时,将所述第一小块M做标记。 所述处理器10还用于在做过标记且相互连通的所述第一小块M的个数大于或等于所述连通个数时,确定该些所述第一小块M所在的区域为遮挡区域。The memory 20 also stores a second difference threshold. The processor 10 is further configured to cut a common portion of the first image into a plurality of first small pieces M according to a predetermined size, and cut a common portion of the third image into the second small size according to the predetermined size. Block N, each of the first small blocks M corresponding to one of the second small blocks N, that is, each of the first small blocks M and the corresponding second small blocks N are the binocular cameras An image taken for the same target. The processor 10 is further configured to calculate a color average of each of the first small block M and each of the second small blocks N. The processor 10 is further configured to determine whether a color average value of each of the first small blocks M is smaller than the color threshold, that is, whether the color of the first small block M is dark. The processor 10 is further configured to determine whether a difference between a color average value of each of the first small blocks M and a color average value of the corresponding second small block N is greater than the second difference threshold. That is, the images taken by the binocular camera for the same target have a large difference, and therefore, there may be a case where one of the cameras 31 is blocked. The processor 10 is further configured to: the color average value of the first small block M is smaller than the color threshold, and the difference between the first small block M and the corresponding second small block N is greater than When the second difference threshold is used, the first small block M is marked. The processor 10 is further configured to determine, when the number of the first small blocks M that are marked and communicated with each other is greater than or equal to the number of connected pieces, determine an area where the first small blocks M are located. Occlusion area.
所述处理器10还用于修复所述第一图像中的遮挡区域。The processor 10 is further configured to repair an occlusion region in the first image.
具体地,所述处理器10用于提取所述第一图像的第一特征点。优选地,所述处理器10用于提取所述第一图像除所述遮挡区域以外的第一特征点。所述处理器10用于获取所述多个第二图像,并提取每个所述第二图像的第二特征点。可以理解,所述第一特征点和所述第二特征点可以是SIFT(Scale Invariant Feature Transform)特征、FAST(Features from Accelerated Segment Test)特征或者ORB(Oriented FAST and Rotated BRIEF)特征等,具体可根据实际需要选择至少一个。Specifically, the processor 10 is configured to extract a first feature point of the first image. Preferably, the processor 10 is configured to extract a first feature point of the first image other than the occlusion region. The processor 10 is configured to acquire the plurality of second images and extract a second feature point of each of the second images. It can be understood that the first feature point and the second feature point may be a SIFT (Scale Invariant Feature Transform) feature, a FAST (Features from Accelerated Segment Test) feature, or an ORB (Oriented FAST and Rotated BRIEF) feature. Select at least one according to actual needs.
所述处理器10用于计算每个所述第二图像的第二特征点与所述第一图像的对应第一特征点之间的匹配度,并选择其中一个所述匹配度满足所述预设匹配度的所述第二图像作为修复源图像。The processor 10 is configured to calculate a matching degree between a second feature point of each of the second images and a corresponding first feature point of the first image, and select one of the matching degrees to satisfy the pre- The second image of the matching degree is set as the repair source image.
所述处理器10用于计算所述遮挡区域对应在所述修复源图像中的位置,并采用所述修复源图像中对应所述第一图像的遮挡区域的图像修复所述第一图像的遮挡区域。具体地,所述处理器10用于将所述修复源图像中对应所述遮挡区域的图像进行矩阵变换。所述矩阵变换描述所述修复源图像中对应所述遮挡区域的图像和所述遮挡区域的对应像素的转换关系,其相当于一个透视变换。The processor 10 is configured to calculate a position of the occlusion area corresponding to the image in the repair source, and repair an occlusion of the first image by using an image of the occlusion area corresponding to the first image in the repair source image. region. Specifically, the processor 10 is configured to perform matrix transformation on an image corresponding to the occlusion region in the repair source image. The matrix transformation describes a conversion relationship between an image corresponding to the occlusion region and a corresponding pixel of the occlusion region in the repair source image, which is equivalent to a perspective transformation.
在一些实施例中,所述处理器10还用于在计算所述遮挡区域对应在所述修复源图像中的位置之前,将所述修复源图像进行色彩调整,使得所述修复源图像与所述第一图像的色彩一致,其中,所述色彩调整包括色域调整和亮度调整等。由于所述第一图像和所述修复源图像之间的白平衡参数可能不一致,导致所述第一图像和所述修复源图像之间存在细微差别,因此,在修复遮挡区域之前,对所述修复源图像进行色彩调整,可以更好的修复所述遮挡区域。In some embodiments, the processor 10 is further configured to perform color adjustment on the repair source image before calculating the position of the occlusion region in the repair source image, so that the repair source image and the The color of the first image is consistent, wherein the color adjustment includes color gamut adjustment, brightness adjustment, and the like. Since the white balance parameter between the first image and the repair source image may be inconsistent, there is a slight difference between the first image and the repair source image, and therefore, before the occlusion region is repaired, Fix the source image for color adjustment to better fix the occlusion area.
In some embodiments, the processor 10 is further configured to select, as the repair source image, the second image whose matching degree satisfies the preset matching degree and is the highest.
In some embodiments, when the camera unit 30 is a binocular camera, the processor 10 is further configured to preferentially select the second image from images previously captured by the camera 31 of the binocular camera that captured the first image; when none of the images previously captured by that camera 31 satisfies the preset matching degree, the second image is selected from images captured by the other camera 31 of the binocular camera.
The processor 10 may be a microcontroller, a microprocessor, a single-chip microcomputer, a digital signal processor, or the like.
The memory 20 may be a computer-readable storage medium such as a memory card, a solid-state memory, a micro hard disk, or an optical disc. In some embodiments, the memory 20 stores a number of program instructions that can be invoked by the processor 10 to perform the functions described above.
Please refer to FIG. 6, which is a flowchart of an occlusion detection and repair method according to an embodiment of the present invention. The occlusion detection and repair method is applied to the occlusion detection and repair device 100 described above, and the order of execution is not limited to the order shown in FIG. 6. The method includes the following steps:
Step S601: detect whether a target object is blocking, or is about to block, a camera of the camera unit 30; if so, proceed to step S602; otherwise, end. In some embodiments, the sensing detection unit 41 generates a sensing signal containing occlusion position information when it detects that a target object is about to approach the camera unit 30 or has already covered some or all of the cameras 31 of the camera unit 30. The sensing detection unit 41 may be a proximity sensor (or distance sensor) 411 arranged within a preset distance range around the at least one camera 31 and/or between the cameras 31. The sensing detection unit 41 may also be a touch module 413 mounted on at least one camera 31. In other embodiments, the camera unit 30 itself is used to detect whether a target object is about to approach the camera unit 30 or has already covered some or all of the cameras 31 of the camera unit 30.
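For illustration only, the sensor-driven trigger in step S601 could be expressed as below. The SensorEvent structure and the preset distance value are hypothetical placeholders; only the decision logic mirrors the description above.

```python
# Illustrative sketch of the step S601 trigger (all names are hypothetical).
from dataclasses import dataclass

@dataclass
class SensorEvent:
    sensor_id: int          # which sensor around/on a camera fired
    distance_mm: float      # reported distance to the target object

def occlusion_suspected(events, preset_distance_mm=30.0):
    """Return ids of sensors whose reading indicates an (imminent) occlusion."""
    return [e.sensor_id for e in events if e.distance_mm <= preset_distance_mm]

# Example: two sensors report a nearby object, so step S602 would be entered.
events = [SensorEvent(0, 12.0), SensorEvent(1, 55.0), SensorEvent(2, 8.5)]
if occlusion_suspected(events):
    pass  # compute the occlusion region of the first image (step S602)
```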
Specifically, when it is detected that a target object within the preset distance of the camera unit 30 is present within the framing range of the camera unit 30, it is determined that the target object is blocking or is about to block a camera of the camera unit 30.
Step S602: calculate the occlusion region of the first image. In some embodiments, the processor 10 is configured to calculate the occlusion region of the first image according to the occlusion position information in the sensing signal. In other embodiments, the processor 10 is configured to calculate the occlusion region according to the first image captured by the camera unit 30 and a previously captured third image.
Step S603: the processor 10 repairs the occlusion region in the first image. Specifically, the processor 10 extracts the first feature points of the first image, acquires the plurality of second images, and extracts, from each second image, second feature points corresponding to the first feature points. The processor 10 calculates the matching degree between the second feature points of each second image and the corresponding first feature points of the first image, and selects one of the second images whose matching degree satisfies the preset matching degree as the repair source image. The processor 10 calculates the position in the repair source image that corresponds to the occlusion region, and repairs the occlusion region of the first image using the image of the corresponding region in the repair source image.
Please refer to FIG. 7, which is a sub-flowchart of step S602 in some embodiments. As shown in FIG. 7, step S602 includes:
Step S6021: the processor 10 cuts the first image into a number of first small blocks M of a predetermined size.
Step S6022: the processor 10 acquires a third image and cuts the third image into a number of second small blocks N of the same predetermined size, each first small block M corresponding to one of the second small blocks N. Specifically, the third image is an image captured by the camera unit 30 before the first image and stored in the memory 20. The memory 20 also stores a color threshold, a first difference threshold, and a connected number.
Step S6023: the processor 10 calculates the color average of each first small block M and each second small block N.
Step S6024: the processor 10 determines whether the color average of each first small block M is smaller than the color threshold; if so, proceed to step S6025; otherwise, end.
Step S6025: the processor 10 determines whether the difference between the color average of each first small block M and the color average of the corresponding second small block N is smaller than the first difference threshold; if so, proceed to step S6026; otherwise, end.
Step S6026: the processor 10 marks the first small block M.
Step S6027: when the number of marked and mutually connected first small blocks M is greater than or equal to the connected number, the processor 10 determines that the region in which these first small blocks M are located is an occlusion region.
It can be understood that when the camera unit 30 is a binocular camera, the first image and the third image are images captured simultaneously by the binocular camera for the same shooting scene, and the first image and the third image are each divided into a common portion and a non-common portion according to a minimum shooting distance. The occlusion region of the non-common portion is calculated as in the above embodiment, that is, using an image previously captured by the camera 31 that captured the first image. For the occlusion region of the common portion, however, the approach shown in FIG. 8 can be used, as detailed below.
Please refer to FIG. 8, which is a sub-flowchart of step S602 in other embodiments. As shown in FIG. 8, step S602 includes:
Step S6021': the processor 10 cuts the common portion of the first image into a number of first small blocks M of a predetermined size.
Step S6022': the processor 10 cuts the common portion of the third image into a number of second small blocks N of the same predetermined size, each first small block M corresponding to one of the second small blocks N; that is, each first small block M and its corresponding second small block N are images of the same target captured by the binocular camera.
Step S6023': the processor 10 calculates the color average of each first small block M and each second small block N.
Step S6024': the processor 10 determines whether the color average of each first small block M is smaller than the color threshold; if so, proceed to step S6025'; otherwise, end.
Step S6025': the processor 10 determines whether the difference between the color average of each first small block M and the color average of the corresponding second small block N is greater than the second difference threshold; if so, proceed to step S6026'; otherwise, end.
Step S6026': the processor 10 marks the first small block M.
Step S6027': when the number of marked and mutually connected first small blocks M is greater than or equal to the connected number, the processor determines that the region in which these first small blocks M are located is an occlusion region.
When a number of program instructions are stored in the memory 20, the program instructions are invoked and executed by the processor 10 to perform the steps of any of the methods in FIGS. 6-8.
In some embodiments, the present invention further provides a computer-readable storage medium storing a number of program instructions which, when invoked and executed by the processor 10, perform the steps of any of the methods of FIGS. 6-8, thereby calculating the occlusion region in the first image and repairing the occlusion region in the first image. In some embodiments, the computer storage medium is the memory 20, which may be any storage device capable of storing information, such as a memory card, a solid-state memory, a micro hard disk, or an optical disc.
Thus, the camera-device-based occlusion detection and repair device of the present invention and its occlusion detection and repair method can perform occlusion detection while an image is captured and repair the occlusion region of the captured image, improving the rate of usable shots with a good repair effect.
The above is a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several improvements and refinements without departing from the principles of the present invention, and these improvements and refinements are also regarded as falling within the protection scope of the present invention.

Claims (20)

  1. An occlusion detection and repair device based on a camera device, wherein the occlusion detection and repair device comprises:
    a camera unit, the camera unit capturing a first image;
    a detection module, the detection module detecting whether a target object within a preset distance of the camera unit is present within the framing range of the camera unit;
    a memory, the memory storing the framing range, the preset distance, and a preset matching degree, the memory further storing a plurality of second images, the plurality of second images being images captured by the camera unit; and
    a processor, the camera unit, the detection module, and the memory being electrically connected to the processor respectively, the processor being configured to:
    calculate an occlusion region of the first image when the detection module detects that a target object within the preset distance of the camera unit is present within the framing range of the camera unit;
    extract first feature points of the first image;
    acquire the plurality of second images, and extract, from each of the second images, second feature points corresponding to the first feature points;
    calculate a matching degree between the second feature points of each of the second images and the first feature points of the first image, and select one of the second images whose matching degree satisfies the preset matching degree as a repair source image; and
    calculate a position in the repair source image corresponding to the occlusion region, and repair the occlusion region of the first image using the image of the region in the repair source image corresponding to the occlusion region of the first image.
  2. The occlusion detection and repair device according to claim 1, wherein the detection module comprises a sensing detection unit electrically connected to the processor; the sensing detection unit generates a sensing signal containing occlusion position information when it detects that a target object is about to approach the camera unit or has already covered some or all of the cameras of the camera unit, and the processor calculates the occlusion region of the first image according to the occlusion position information in the sensing signal.
  3. The occlusion detection and repair device according to claim 2, wherein the camera unit comprises at least one camera, the sensing detection unit comprises at least one proximity sensor or distance sensor, the at least one proximity sensor or distance sensor is arranged within a preset distance range around the at least one camera, and the processor calculates the occlusion region of the first image according to position information of the proximity sensor or distance sensor that sensed the target object.
  4. The occlusion detection and repair device according to claim 2, wherein the camera unit comprises at least one camera, the sensing detection unit comprises a touch module mounted on the at least one camera, the touch module generates a sensing signal containing touch position coordinates when it senses that a target object is in contact with it, and the processor calculates the occlusion region of the first image according to the touch position coordinates in the sensing signal.
  5. The occlusion detection and repair device according to claim 4, wherein the touch module is further arranged within a preset distance range around the at least one camera; the touch module generates a sensing signal containing touch position coordinates when it senses a target object within the preset distance range around the camera, and the processor determines, according to the touch position coordinates, that the camera unit is about to be blocked and issues an occlusion reminder.
  6. The occlusion detection and repair device according to claim 1, wherein the camera unit captures a third image, the third image being an image captured by the camera unit before the first image, and the memory further stores a color threshold, a first difference threshold, and a connected number, the processor being further configured to:
    cut the first image into a number of first small blocks of a predetermined size;
    cut the third image into a number of second small blocks of the predetermined size, each of the first small blocks corresponding to one of the second small blocks;
    calculate a color average of each of the first small blocks and each of the second small blocks;
    determine whether the color average of each of the first small blocks is smaller than the color threshold, and whether the difference between the color average of each of the first small blocks and the color average of the corresponding second small block is smaller than the first difference threshold;
    mark a first small block when its color average is smaller than the color threshold and the difference corresponding to the first small block is smaller than the first difference threshold; and
    determine, when the number of marked and mutually connected first small blocks is greater than or equal to the connected number, that the region in which these first small blocks are located is an occlusion region.
  7. The occlusion detection and repair device according to claim 1, wherein the camera unit is a binocular camera, the camera unit captures a third image, the first image and the third image are images captured at the same moment by the two cameras of the binocular camera for the same shooting scene, the first image and the third image are divided into a common portion and a non-common portion according to a minimum shooting distance, and the memory further stores a color threshold, a second difference threshold, and a connected number, the processor being further configured to:
    cut the common portion of the first image into a number of first small blocks of a predetermined size;
    cut the common portion of the third image into a number of second small blocks of the predetermined size, each of the first small blocks corresponding to one of the second small blocks;
    calculate a color average of each of the first small blocks and each of the second small blocks;
    determine whether the color average of each of the first small blocks is smaller than the color threshold, and whether the difference between the color average of each of the first small blocks and the color average of the corresponding second small block is greater than the second difference threshold;
    mark a first small block when its color average is smaller than the color threshold and the difference corresponding to the first small block is greater than the second difference threshold; and
    determine, when the number of marked and mutually connected first small blocks is greater than or equal to the connected number, that the region in which these first small blocks are located is an occlusion region.
  8. The occlusion detection and repair device according to claim 1, wherein the processor is further configured to: before calculating the position in the repair source image corresponding to the occlusion region, perform color adjustment on the repair source image so that the repair source image is consistent in color with the first image, the color adjustment comprising color gamut adjustment and brightness adjustment.
  9. The occlusion detection and repair device according to claim 1, wherein the processor is further configured to select, as the repair source image, the second image whose matching degree satisfies the preset matching degree and is the highest.
  10. The occlusion detection and repair device according to claim 1, wherein, when the camera unit is a binocular camera, the processor is further configured to preferentially select the second image from images previously captured by the camera of the binocular camera that captured the first image, and to select the second image from images captured by the other camera of the binocular camera when none of the images previously captured by the camera that captured the first image satisfies the preset matching degree.
  11. An occlusion processing method based on a camera device, wherein the occlusion processing method comprises the steps of:
    capturing a first image;
    detecting whether a target object within a preset distance of a camera unit is present within the framing range of the camera unit;
    calculating an occlusion region of the first image when it is detected that a target object within the preset distance of the camera unit is present within the framing range of the camera unit;
    extracting first feature points of the first image;
    acquiring a plurality of second images, and extracting, from each of the second images, second feature points corresponding to the first feature points;
    calculating a matching degree between the second feature points of each of the second images and the first feature points of the first image, and selecting one of the second images whose matching degree satisfies the preset matching degree as a repair source image; and
    calculating a position in the repair source image corresponding to the occlusion region, and repairing the occlusion region of the first image using the image of the region in the repair source image corresponding to the occlusion region of the first image.
  12. The occlusion processing method according to claim 11, wherein the occlusion processing method further comprises the steps of:
    generating a sensing signal containing occlusion position information when it is detected that a target object is about to approach the camera unit or has already covered some or all of the cameras of the camera unit; and
    calculating the occlusion region of the first image according to the occlusion position information in the sensing signal.
  13. The occlusion processing method according to claim 11, wherein the camera unit comprises at least one camera, and the occlusion processing method further comprises the steps of:
    arranging at least one proximity sensor or distance sensor within a preset distance range around the at least one camera;
    generating a sensing signal containing occlusion position information when the at least one proximity sensor or distance sensor detects that a target object is about to approach the at least one camera or has already covered part or all of the at least one camera; and
    calculating the occlusion region of the first image according to the occlusion position information in the sensing signal.
  14. The occlusion processing method according to claim 12, wherein the camera unit comprises at least one camera, and the occlusion processing method further comprises the steps of:
    generating, by a touch module mounted on the at least one camera of the camera unit, a sensing signal containing touch position coordinates when the touch module senses that a target object is in contact with it; and
    calculating the occlusion region of the first image according to the touch position coordinates in the sensing signal.
  15. The occlusion processing method according to claim 14, wherein the occlusion processing method further comprises the steps of:
    generating, by a touch module arranged within a preset distance range around the at least one camera of the camera unit, a sensing signal containing touch position coordinates when a target object is sensed; and
    determining, according to the touch position coordinates, that the camera unit is about to be blocked, and issuing an occlusion reminder.
  16. The occlusion processing method according to claim 11, wherein the occlusion processing method further comprises the steps of:
    capturing a third image, the third image being an image captured by the camera unit before the first image;
    cutting the first image into a number of first small blocks of a predetermined size;
    cutting the third image into a number of second small blocks of the predetermined size, each of the first small blocks corresponding to one of the second small blocks;
    calculating a color average of each of the first small blocks and each of the second small blocks;
    determining whether the color average of each of the first small blocks is smaller than the color threshold, and whether the difference between the color average of each of the first small blocks and the color average of the corresponding second small block is smaller than the first difference threshold;
    marking a first small block when its color average is smaller than the color threshold and the difference corresponding to the first small block is smaller than the first difference threshold; and
    determining, when the number of marked and mutually connected first small blocks is greater than or equal to the connected number, that the region in which these first small blocks are located is an occlusion region.
  17. The occlusion processing method according to claim 11, wherein the camera unit is a binocular camera, and the occlusion processing method further comprises the steps of:
    capturing, by the camera unit, a third image, the first image and the third image being images captured at the same moment by the two cameras of the binocular camera for the same shooting scene, and the first image and the third image being divided into a common portion and a non-common portion according to a minimum shooting distance;
    cutting the common portion of the first image into a number of first small blocks of a predetermined size;
    cutting the common portion of the third image into a number of second small blocks of the predetermined size, each of the first small blocks corresponding to one of the second small blocks;
    calculating a color average of each of the first small blocks and each of the second small blocks;
    determining whether the color average of each of the first small blocks is smaller than a color threshold, and whether the difference between the color average of each of the first small blocks and the color average of the corresponding second small block is greater than a second difference threshold;
    marking a first small block when its color average is smaller than the color threshold and the difference corresponding to the first small block is greater than the second difference threshold; and
    determining, when the number of marked and mutually connected first small blocks is greater than or equal to the connected number, that the region in which these first small blocks are located is an occlusion region.
  18. The occlusion processing method according to claim 11, wherein, before calculating the position in the repair source image corresponding to the occlusion region, the occlusion processing method further comprises the step of:
    performing color adjustment on the repair source image so that the repair source image is consistent in color with the first image, the color adjustment comprising color gamut adjustment and brightness adjustment.
  19. The occlusion processing method according to claim 11, wherein the camera unit is a binocular camera, and the occlusion processing method further comprises the steps of:
    preferentially selecting the second image from images previously captured by the camera of the binocular camera that captured the first image; and
    selecting the second image from images captured by the other camera of the binocular camera when none of the images previously captured by the camera of the binocular camera that captured the first image satisfies the preset matching degree.
  20. A computer-readable storage medium, the computer-readable storage medium storing a number of program instructions which, when invoked and executed by a processor, perform the steps of:
    capturing a first image;
    detecting whether a target object within a preset distance of a camera unit is present within the framing range of the camera unit;
    calculating an occlusion region of the first image when it is detected that a target object within the preset distance of the camera unit is present within the framing range of the camera unit;
    extracting first feature points of the first image;
    acquiring a plurality of second images, and extracting, from each of the second images, second feature points corresponding to the first feature points;
    calculating a matching degree between the second feature points of each of the second images and the first feature points of the first image, and selecting one of the second images whose matching degree satisfies the preset matching degree as a repair source image; and
    calculating a position in the repair source image corresponding to the occlusion region, and repairing the occlusion region of the first image using the image of the region in the repair source image corresponding to the occlusion region of the first image.
PCT/CN2017/107875 2017-10-26 2017-10-26 Camera device-based occlusion detection and repair device, and occlusion detection and repair method therefor WO2019080061A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780092103.7A CN110770786A (en) 2017-10-26 2017-10-26 Shielding detection and repair device based on camera equipment and shielding detection and repair method thereof
PCT/CN2017/107875 WO2019080061A1 (en) 2017-10-26 2017-10-26 Camera device-based occlusion detection and repair device, and occlusion detection and repair method therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/107875 WO2019080061A1 (en) 2017-10-26 2017-10-26 Camera device-based occlusion detection and repair device, and occlusion detection and repair method therefor

Publications (1)

Publication Number Publication Date
WO2019080061A1 true WO2019080061A1 (en) 2019-05-02

Family

ID=66246747

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/107875 WO2019080061A1 (en) 2017-10-26 2017-10-26 Camera device-based occlusion detection and repair device, and occlusion detection and repair method therefor

Country Status (2)

Country Link
CN (1) CN110770786A (en)
WO (1) WO2019080061A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112989878A (en) * 2019-12-13 2021-06-18 Oppo广东移动通信有限公司 Pupil detection method and related product
CN113902677A (en) * 2021-09-08 2022-01-07 九天创新(广东)智能科技有限公司 Camera shielding detection method and device and intelligent robot

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298808B (en) * 2021-06-22 2022-03-18 哈尔滨工程大学 Method for repairing building shielding information in tilt-oriented remote sensing image

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003044996A (en) * 2001-07-31 2003-02-14 Matsushita Electric Ind Co Ltd Obstacle detecting device
CN101266685A (en) * 2007-03-14 2008-09-17 中国科学院自动化研究所 A method for removing unrelated images based on multiple photos
CN101482968A (en) * 2008-01-07 2009-07-15 日电(中国)有限公司 Image processing method and equipment
JP2010237798A (en) * 2009-03-30 2010-10-21 Equos Research Co Ltd Image processor and image processing program
CN103679749A (en) * 2013-11-22 2014-03-26 北京奇虎科技有限公司 Moving target tracking based image processing method and device
CN104657993A (en) * 2015-02-12 2015-05-27 北京格灵深瞳信息技术有限公司 Lens shielding detection method and device
CN105827952A (en) * 2016-02-01 2016-08-03 维沃移动通信有限公司 Photographing method for removing specified object and mobile terminal

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3654286B1 (en) * 2013-12-13 2024-01-17 Panasonic Intellectual Property Management Co., Ltd. Image capturing apparatus, monitoring system, image processing apparatus, image capturing method, and non-transitory computer readable recording medium
CN103731658B (en) * 2013-12-25 2015-09-30 深圳市墨克瑞光电子研究院 Binocular camera repositioning method and binocular camera resetting means
CN106331460A (en) * 2015-06-19 2017-01-11 宇龙计算机通信科技(深圳)有限公司 Image processing method and device, and terminal

Also Published As

Publication number Publication date
CN110770786A (en) 2020-02-07

Similar Documents

Publication Publication Date Title
US10389948B2 (en) Depth-based zoom function using multiple cameras
US9325899B1 (en) Image capturing device and digital zooming method thereof
WO2018201809A1 (en) Double cameras-based image processing device and method
US10915998B2 (en) Image processing method and device
US7450756B2 (en) Method and apparatus for incorporating iris color in red-eye correction
TWI424361B (en) Object tracking method
US7868915B2 (en) Photographing apparatus, method and computer program product
JP2018510324A (en) Method and apparatus for multi-technology depth map acquisition and fusion
WO2021136386A1 (en) Data processing method, terminal, and server
US9100563B2 (en) Apparatus, method and computer-readable medium imaging through at least one aperture of each pixel of display panel
TWI637288B (en) Image processing method and system for eye-gaze correction
US20150138309A1 (en) Photographing device and stitching method of captured image
WO2019080061A1 (en) Camera device-based occlusion detection and repair device, and occlusion detection and repair method therefor
WO2022134957A1 (en) Camera occlusion detection method and system, electronic device, and storage medium
TWI451184B (en) Focus adjusting method and image capture device thereof
TWI749370B (en) Face recognition method and computer system using the same
JP5857712B2 (en) Stereo image generation apparatus, stereo image generation method, and computer program for stereo image generation
WO2018027527A1 (en) Optical system imaging quality detection method and apparatus
WO2021129806A1 (en) Image processing method, apparatus, electronic device, and readable storage medium
TW201835749A (en) Mobile device, method for mobile device, and non-transitory computer readable storage medium
WO2012147368A1 (en) Image capturing apparatus
CN107633498A (en) Image dark-state Enhancement Method, device and electronic equipment
JP7321772B2 (en) Image processing device, image processing method, and program
TW201833510A (en) Item size calculation system capable of capturing the image through using two cameras for obtaining the actual size
KR102430726B1 (en) Apparatus and method for processing information of multi camera

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17929912

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17929912

Country of ref document: EP

Kind code of ref document: A1