WO2017080237A1 - Camera imaging method and camera device - Google Patents

Camera imaging method and camera device

Info

Publication number
WO2017080237A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
scene
area
data
image
Prior art date
Application number
PCT/CN2016/089030
Other languages
English (en)
Chinese (zh)
Inventor
张鹏
Original Assignee
乐视控股(北京)有限公司
乐视移动智能信息技术(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 乐视控股(北京)有限公司 and 乐视移动智能信息技术(北京)有限公司
Publication of WO2017080237A1 publication Critical patent/WO2017080237A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Definitions

  • the embodiments of the present application relate to the field of electronic technologies, and in particular, to a camera imaging method and a camera device.
  • the camera in a smartphone can use an Auto Focus (AF) algorithm to move the lens to different positions, calculate the sharpness at each position, and take the position with the best sharpness as the final imaging position.
  • the lens is then placed in the imaging position for imaging.
  • the AF algorithm obtains the sharpness of each region by dividing the image into multiple regions, but in practice a region may contain multiple scenes at different focus distances, which distorts the final sharpness calculation: only one focus position can be selected for imaging, so scenes at other focus distances appear blurred, and the camera's imaging effect suffers.
  • the embodiments of the present application provide a camera imaging method and a camera device, which are intended to solve the problem that images produced by the current AF algorithm can be partially blurred, giving a poor imaging effect.
  • an embodiment of the present application provides a camera imaging method, including:
  • collecting data of a target to be photographed by an image sensor; identifying each scene included in the target from the collected data; acquiring contour information corresponding to each scene; dividing the target into regions according to the contour information of each scene; adjusting a distance between the image sensor and the target and acquiring the imaging position with the best sharpness corresponding to each region; extracting an area image at the imaging position corresponding to each region; and merging the area images corresponding to each region to obtain a target image of the target.
  • an embodiment of the present application provides a camera device, including:
  • An identification module configured to identify, from the collected data, each scene included in the target
  • a first acquiring module configured to acquire contour information corresponding to each scene
  • a dividing module configured to divide the target according to contour information of each scene
  • a second acquiring module configured to adjust a distance between the image sensor and the target, and obtain the imaging position with the best sharpness corresponding to each region
  • an extracting module configured to extract an area image at the imaging position corresponding to each area
  • a merging module configured to merge the area images corresponding to each area to obtain a target image of the target.
  • embodiments of the present application provide a camera apparatus, including a memory, one or more processors, and one or more programs, wherein the one or more programs, when executed by the one or more processors, perform the following operations: collecting data of a target to be photographed by the image sensor; identifying each scene included in the target from the collected data; acquiring contour information corresponding to each scene; dividing the target into regions according to the contour information of each scene; adjusting a distance between the image sensor and the target and acquiring the imaging position with the best sharpness corresponding to each region; extracting an area image at the imaging position corresponding to each region; and merging the area images corresponding to each region to obtain a target image of the target.
  • embodiments of the present application provide a computer readable storage medium having computer executable instructions stored thereon, the computer executable instructions, in response to execution, causing a camera device to perform operations including: collecting data of a target to be photographed by an image sensor; identifying each scene included in the target from the collected data; acquiring contour information corresponding to each scene; dividing the target into regions according to the contour information of each scene; adjusting a distance between the image sensor and the target and acquiring the imaging position with the best sharpness corresponding to each region; extracting an area image at the imaging position corresponding to each region; and merging the area images corresponding to each region to obtain a target image of the target.
  • the camera imaging method and the camera device of the embodiments of the present application collect data of a target to be photographed, identify each scene included in the target from the collected data, acquire contour information corresponding to each scene, divide the target into regions according to the contour information of each scene, adjust the distance between the lens and the target, acquire the imaging position with the best sharpness corresponding to each region, extract the area image at the imaging position corresponding to each region, and merge the area images corresponding to each region to obtain a target image of the target.
  • the target is divided into regions along scene contours, and the sharpest area image of each region is extracted to synthesize the final target image, so that the final image is clearer and sharper and each scene inside the target is presented distinctly.
  • FIG. 1 is a schematic flow chart of a camera imaging method according to Embodiment 1 of the present application.
  • FIG. 2 is a schematic flow chart of a camera imaging method according to Embodiment 2 of the present application.
  • FIG. 3 is a schematic structural diagram of a camera device according to Embodiment 3 of the present application.
  • FIG. 4 is a schematic structural diagram of a camera device according to Embodiment 4 of the present application.
  • FIG. 5 is a schematic structural diagram of still another embodiment of a camera device provided by the present application.
  • FIG. 6 is a schematic structural diagram of an embodiment of a computer program product for camera imaging provided by the present application.
  • FIG. 1 is a schematic flowchart of a camera imaging method according to Embodiment 1 of the present application; the camera imaging method includes:
  • Step 101 Perform data collection on an object to be photographed by an image sensor.
  • the user can tap the camera icon on the touch screen to issue an instruction to the smartphone; after the smartphone detects the tap, it starts the camera and enters photographing mode.
  • the image sensor in the camera can perform data collection on the target to be photographed.
  • the user can point the camera of the camera to the target to be photographed, and the camera will collect data from the target.
  • Step 102 Identify each scene included in the target from the collected data.
  • Step 103 Obtain contour information corresponding to each scene.
  • After receiving the data collected by the image sensor, the image signal processor (Image Signal Processing, ISP for short) analyzes the collected data and can identify each scene contained in the target. After recognizing each scene, the camera extracts the contour of each scene through the ISP to obtain the contour information corresponding to each scene.
  • Step 104 Perform area division on the target according to contour information of each scene.
  • the camera no longer divides the target into a fixed grid of vertical and horizontal blocks; instead, the position where each scene is located becomes one region. After acquiring the contour information of each scene, the camera can determine each scene's location and the area it covers, completing the region division of the target to be photographed.
  • Step 105 Adjust the distance between the image sensor and the target, and obtain the imaging position with the best sharpness corresponding to each region.
  • the user can adjust the position of the image sensor relative to the target to change the distance between the two. As this distance changes, the focus between the camera and the target changes, and the sharpness of each area changes accordingly.
  • Step 106 Extract the area image at the imaging position corresponding to each area.
  • Step 107 Combine the area images corresponding to each area to obtain a target image of the target.
  • the camera can acquire an area image of each area at the imaging position with the best definition, and extract the area image corresponding to each area. Further, the camera combines the extracted region images corresponding to each region to obtain a target image of the target to be photographed.
  • the camera imaging method provided by this embodiment collects data of a target to be photographed by an image sensor, identifies each scene included in the target from the collected data, acquires contour information corresponding to each scene, divides the target into regions according to that contour information, adjusts the distance between the image sensor and the target, obtains the imaging position with the best sharpness for each region, extracts the area image at that imaging position, and merges the area images of all regions to obtain the target image.
  • the target is divided into regions along scene contours, and the sharpest area image of each region is extracted to synthesize the final target image, so that the final image is clearer and sharper and each scene inside the target is presented distinctly.
  • FIG. 2 is a schematic flowchart of a camera imaging method according to Embodiment 2 of the present application; the camera imaging method includes:
  • Step 201 Perform data collection on an object to be photographed by an image sensor.
  • the image sensor in the camera can perform data collection on the target to be photographed. Specifically, the user can point the camera of the camera to the target to be photographed, and the camera will collect data from the target.
  • Step 202 Perform binarization processing on the collected data according to the set threshold to generate matrix data.
  • Through the built-in ISP, the camera first sets the threshold used for binarization according to its tuning parameters, and then binarizes the collected data with that threshold to generate matrix data. Specifically, the camera compares the collected data with the threshold, sets values greater than or equal to the threshold to 1 and values smaller than the threshold to 0, and thereby generates the matrix data.
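Step 202 can be sketched as follows. This is an illustrative reconstruction, assuming the collected data is a single-channel luminance grid and that the threshold comes from the camera's tuning parameters; the function name and data layout are assumptions, not the patent's actual implementation.

```python
# Minimal sketch of the binarization step described in step 202:
# values >= threshold become 1, values below it become 0.
def binarize(luma, threshold):
    """Return matrix data for a 2-D luminance grid (list of rows)."""
    return [[1 if v >= threshold else 0 for v in row] for row in luma]

binarize([[10, 200], [130, 90]], 128)  # -> [[0, 1], [1, 0]]
```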
  • Step 203 Perform continuity identification on the matrix data and save area information of the continuous area.
  • For each coordinate whose value is 1, the neighboring pixel positions above, below, left, right, and at the four diagonals (upper left, lower left, upper right, lower right) are searched in the matrix data; when the adjacent pixels found all have the value 1, the positions where these pixels are located are set as one continuous area, and the area information is saved.
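One way to realize the continuity identification of step 203 is a standard 8-connected flood fill over the matrix data. The sketch below is an assumption about the intended algorithm (the patent names only the neighborhood search), not code from the patent:

```python
from collections import deque

# Group 8-connected 1-pixels of the matrix data into continuous areas
# and save each area as its list of (row, col) pixel positions.
def continuous_areas(matrix):
    rows, cols = len(matrix), len(matrix[0])
    seen = set()
    areas = []
    for r in range(rows):
        for c in range(cols):
            if matrix[r][c] == 1 and (r, c) not in seen:
                area, queue = [], deque([(r, c)])
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    area.append((y, x))
                    for dy in (-1, 0, 1):      # above/below, left/right,
                        for dx in (-1, 0, 1):  # and the four diagonals
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and matrix[ny][nx] == 1
                                    and (ny, nx) not in seen):
                                seen.add((ny, nx))
                                queue.append((ny, nx))
                areas.append(area)
    return areas

m = [[1, 1, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 0, 1]]
len(continuous_areas(m))  # -> 2 continuous areas
```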
  • Step 204 Filter out the area information corresponding to each scene included in the target.
  • the area information of each scene included in the target to be photographed can be filtered out from the continuous areas, and each scene in the target can then be identified by its area information.
  • Step 205 Scan the area information of each scene to obtain the boundary value of each scene, and form the contour information of each scene.
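Step 205's boundary scan can be approximated by taking the extreme coordinates of each saved area as that scene's contour information. The dictionary keys below are illustrative; the patent speaks only of "boundary values":

```python
# Walk a scene's saved pixel positions and keep the extreme coordinates
# as the scene's contour (bounding) information.
def contour_info(area_pixels):
    ys = [y for y, _ in area_pixels]
    xs = [x for _, x in area_pixels]
    return {"top": min(ys), "bottom": max(ys),
            "left": min(xs), "right": max(xs)}

contour_info([(0, 0), (0, 1), (1, 1)])
# -> {'top': 0, 'bottom': 1, 'left': 0, 'right': 1}
```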
  • Step 206 Perform area division on the target according to contour information of each scene.
  • After obtaining the contour information of each scene, the target can be divided according to that contour information, with each scene forming one region.
  • Step 207 Adjust a distance between the image sensor and the target, and obtain an imaging position with the best definition corresponding to each region.
  • the user can adjust the distance between the camera and the target; after the distance changes, the sharpness of each area changes, so the imaging position with the best sharpness for each area can be obtained.
  • the sharpness is calculated from the high-frequency data of each region together with the region's weight.
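The patent does not specify how the high-frequency data is computed; a common stand-in is the energy of a Laplacian-like second difference over the region, scaled by a per-region weight. The sketch below is therefore only an assumption about the metric, not the patent's formula:

```python
# Score a region's sharpness as weighted Laplacian energy: strong local
# contrast (high-frequency content) yields a large score, flat areas zero.
def region_sharpness(region, weight=1.0):
    rows, cols = len(region), len(region[0])
    energy = 0.0
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            lap = (4 * region[y][x] - region[y - 1][x] - region[y + 1][x]
                   - region[y][x - 1] - region[y][x + 1])
            energy += lap * lap
    return weight * energy

sharp = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]   # strong local contrast
flat  = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]   # no contrast
region_sharpness(sharp) > region_sharpness(flat)  # -> True
```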
  • Step 208 Extract an area image of the imaging position corresponding to each area.
  • Step 209 Combine the area images corresponding to each area to obtain a target image of the target.
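Steps 207-209 can be sketched together: capture the scene at several sensor distances, score each region's crop in every captured frame, and paste the sharpest crop into the output image. All names here (`frames`, `regions`, `sharpness`) are illustrative assumptions; `sharpness` could be the metric discussed under step 207:

```python
# For each region (top, bottom, left, right), pick the frame whose crop
# of that region scores highest, then paste that crop into the output.
def merge_sharpest(frames, regions, sharpness):
    any_frame = next(iter(frames.values()))
    out = [row[:] for row in any_frame]          # start from any frame
    for top, bottom, left, right in regions:
        def crop(img):
            return [row[left:right + 1] for row in img[top:bottom + 1]]
        best = max(frames.values(), key=lambda img: sharpness(crop(img)))
        for y in range(top, bottom + 1):          # paste the best crop
            out[y][left:right + 1] = best[y][left:right + 1]
    return out

frames = {0: [[1, 1], [1, 1]], 1: [[9, 9], [1, 1]]}
merge_sharpest(frames, [(0, 0, 0, 1)], lambda c: sum(sum(r) for r in c))
# -> [[9, 9], [1, 1]]  (top row taken from the higher-scoring frame 1)
```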
  • This embodiment further provides a program for executing the above method.
  • the camera imaging method provided by this embodiment collects data of a target to be photographed by an image sensor, identifies each scene included in the target from the collected data, acquires contour information corresponding to each scene, divides the target into regions according to that contour information, adjusts the distance between the image sensor and the target, obtains the imaging position with the best sharpness for each region, extracts the area image at that imaging position, and merges the area images of all regions to obtain the target image.
  • the target is divided into regions along scene contours, and the sharpest area image of each region is extracted to synthesize the final target image, so that the final image is clearer and sharper, each scene inside the target is presented distinctly, and the probability of successful focusing increases, effectively reducing the probability of focus failure.
  • FIG. 3 is a schematic structural diagram of a camera device according to Embodiment 3 of the present application.
  • the device includes: an acquisition module 11, an identification module 12, a first acquisition module 13, a division module 14, a second acquisition module 15, an extraction module 16, and a merging module 17.
  • the acquisition module 11 is configured to perform data collection on a target to be photographed.
  • the identification module 12 is configured to identify each scene included in the target from the collected data.
  • the first obtaining module 13 is configured to acquire contour information corresponding to each scene.
  • the dividing module 14 is configured to divide the target into regions according to the contour information of each scene.
  • the second obtaining module 15 is configured to adjust the distance between the image sensor and the target, and obtain the imaging position with the best sharpness corresponding to each region.
  • the extracting module 16 is configured to extract the area image at the imaging position corresponding to each area.
  • the merging module 17 is configured to combine the area images corresponding to each area to obtain a target image of the target.
  • the function modules of the camera device provided in this embodiment can be used to execute the process of the camera imaging method shown in FIG. 1; the specific working principle is not repeated here. For details, refer to the description of the method embodiment.
  • the camera device collects data of a target to be photographed by an image sensor, identifies each scene included in the target from the collected data, acquires contour information corresponding to each scene, divides the target into regions according to that contour information, adjusts the distance between the image sensor and the target, obtains the imaging position with the best sharpness for each region, extracts the area image at that imaging position, and merges the area images of all regions to obtain the target image.
  • the target is divided into regions along scene contours, and the sharpest area image of each region is extracted to synthesize the final target image, so that the final image is clearer and sharper and each scene inside the target is presented distinctly.
  • FIG. 4 is a schematic structural diagram of a camera device according to Embodiment 4 of the present application.
  • the device includes the acquisition module 11, the identification module 12, the first acquisition module 13, the division module 14, the second acquisition module 15, the extraction module 16, and the merging module 17 of the third embodiment.
  • the optional implementation structure of the identification module 12 includes: a generating unit 121, an identifying unit 122, and a screening unit 123.
  • the generating unit 121 is configured to perform binarization processing on the collected data according to the set threshold to generate matrix data.
  • the identifying unit 122 is configured to perform continuity identification on the matrix data and save area information of the continuous area.
  • the filtering unit 123 is configured to filter out the area information corresponding to each scene included in the target.
  • the first acquiring module 13 is specifically configured to scan the area information of each scene to obtain the boundary value of each scene and form the contour information of each scene.
  • the generating unit 121 is specifically configured to compare the collected data with the threshold, set data greater than or equal to the threshold to 1 and data smaller than the threshold to 0, and thereby generate the matrix data.
  • the identifying unit is specifically configured to find, in the matrix data, all second pixels adjacent to a first pixel to generate the continuous region, wherein the first pixel is a pixel whose value in the matrix data is 1, and the value of each second pixel in the matrix data is also 1.
  • the function modules of the camera device provided in this embodiment can be used to execute the processes of the camera imaging methods shown in FIG. 1 and FIG. 2; the specific working principle is not repeated here. For details, refer to the description of the method embodiments.
  • the camera device collects data of a target to be photographed by an image sensor, identifies each scene included in the target from the collected data, acquires contour information corresponding to each scene, divides the target into regions according to that contour information, adjusts the distance between the image sensor and the target, obtains the imaging position with the best sharpness for each region, extracts the area image at that imaging position, and merges the area images of all regions to obtain the target image.
  • the target is divided into regions along scene contours, and the sharpest area image of each region is extracted to synthesize the final target image, so that the final image is clearer and sharper, each scene inside the target is presented distinctly, and the probability of successful focusing increases, effectively reducing the probability of focus failure.
  • FIG. 5 is a schematic structural diagram of still another embodiment of a camera device provided by the present application.
  • the camera device of the embodiment of the present application includes a memory 61, one or more processors 62, and one or more programs 63.
  • the one or more programs 63, when executed by the one or more processors 62, perform the camera imaging method of any of the above-described embodiments.
  • the camera device of the embodiment of the present application collects data of a target to be photographed, identifies each scene included in the target from the collected data, acquires contour information corresponding to each scene, divides the target into regions according to that contour information, adjusts the distance between the lens and the target, obtains the imaging position with the best sharpness for each region, extracts the area image at that imaging position, and merges the area images of all regions to obtain the target image.
  • the target is divided into regions along scene contours, and the sharpest area image of each region is extracted to synthesize the final target image, so that the final image is clearer and sharper and each scene inside the target is presented distinctly.
  • FIG. 6 is a schematic structural diagram of an embodiment of a computer program product for camera imaging provided by the present application.
  • the computer program product 71 for camera imaging of the embodiment of the present application may include a signal bearing medium 72.
  • Signal bearing medium 72 may include one or more instructions 73 that, when executed by, for example, a processor, may provide the functionality described above with respect to Figures 1-4.
  • the instructions 73 can include: one or more instructions for collecting data of the target to be photographed by the image sensor; one or more instructions for identifying each scene included in the target from the collected data; one or more instructions for acquiring contour information corresponding to each scene; one or more instructions for dividing the target into regions according to the contour information of each scene; one or more instructions for adjusting the distance between the image sensor and the target and obtaining the imaging position with the best sharpness corresponding to each region; one or more instructions for extracting the area image at the imaging position corresponding to each region; and one or more instructions for merging the area images corresponding to each region to obtain a target image of the target.
  • the camera device can perform one or more of the steps shown in FIG. 1 in response to the instructions 73.
  • signal bearing medium 72 can include computer readable media 74 such as, but not limited to, a hard disk drive, a compact disk (CD), a digital versatile disk (DVD), a digital tape, a memory, and the like.
  • the signal bearing medium 72 can include a recordable medium 75 such as, but not limited to, a memory, a read/write (R/W) CD, an R/W DVD, and the like.
  • the signal bearing medium 72 can include a communication medium 76 such as, but not limited to, a digital and/or analog communication medium (eg, fiber optic cable, waveguide, wired communication link, wireless communication link, etc.).
  • the computer program product 71 can be transmitted as one or more modules over the RF signal bearing medium 72, wherein the signal bearing medium 72 is conveyed by a wireless communication medium (e.g., a wireless communication medium compliant with the IEEE 802.11 standard).
  • the computer program product of this embodiment acquires the position coordinates of the corresponding touch point when a touch operation on the screen of the terminal device is detected, corrects the position coordinates of the touch point according to a preset correction rule, and outputs the corrected position coordinates in the touch report.
  • the computer program product of the embodiment of the invention ensures that the user accurately triggers the corresponding operation by correcting the screen report point.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

According to embodiments, the present application relates to a camera imaging method and a camera device. An image sensor collects data of a target to be photographed, identifies each scene included in the target from the collected data, acquires contour information corresponding to each scene, divides the target into regions according to the contour information of each scene, adjusts the distance between the image sensor and the target to acquire the imaging position of best sharpness corresponding to each region, extracts a region image at the imaging position corresponding to each region, and merges the region images corresponding to each region to obtain a target image of the target. The embodiments divide the target into regions by means of the scene contours and extract the clearest region image of each region to synthesize a final image of the target, so that the final image is clearer and sharper, and the scenes inside the target can be presented clearly.
PCT/CN2016/089030 2015-11-15 2016-07-07 Procédé d'imagerie de caméra et dispositif de caméra WO2017080237A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510786354.8A CN105898135A (zh) 2015-11-15 2015-11-15 相机成像方法及相机装置
CN201510786354.8 2015-11-15

Publications (1)

Publication Number Publication Date
WO2017080237A1 true WO2017080237A1 (fr) 2017-05-18

Family

ID=57002074

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/089030 WO2017080237A1 (fr) 2015-11-15 2016-07-07 Procédé d'imagerie de caméra et dispositif de caméra

Country Status (2)

Country Link
CN (1) CN105898135A (fr)
WO (1) WO2017080237A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112585941A (zh) * 2019-12-30 2021-03-30 深圳市大疆创新科技有限公司 对焦方法、装置、拍摄设备、可移动平台和存储介质
CN117253195A (zh) * 2023-11-13 2023-12-19 广东申立信息工程股份有限公司 一种ipc安全监测方法、监测系统、计算机设备和可读存储介质

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107872614A (zh) * 2016-09-27 2018-04-03 中兴通讯股份有限公司 一种拍摄方法及拍摄装置
CN108076278B (zh) * 2016-11-10 2021-03-19 斑马智行网络(香港)有限公司 一种自动对焦方法、装置及电子设备
CN107659985B (zh) * 2017-08-09 2021-03-09 Oppo广东移动通信有限公司 降低移动终端功耗的方法、装置、存储介质和移动终端
CN110596130A (zh) * 2018-05-25 2019-12-20 上海翌视信息技术有限公司 一种具有辅助照明的工业检测装置
CN110830709A (zh) * 2018-08-14 2020-02-21 Oppo广东移动通信有限公司 图像处理方法和装置、终端设备、计算机可读存储介质
CN109618092B (zh) * 2018-12-03 2020-11-06 广州图匠数据科技有限公司 一种拼接拍照方法、系统及存储介质
CN110046596B (zh) * 2019-04-23 2021-06-15 王雪燕 一种图像模块化处理及多图像模块自定义组合的方法、移动终端与可读存储介质
CN110636220A (zh) * 2019-09-20 2019-12-31 Tcl移动通信科技(宁波)有限公司 图像对焦方法、装置、移动终端及存储介质
CN112702538A (zh) * 2021-01-13 2021-04-23 上海臻面智能信息科技有限公司 一种深度相机及其成像方法
CN115696019A (zh) * 2021-07-30 2023-02-03 哲库科技(上海)有限公司 图像处理方法、装置、计算机设备及存储介质
CN113674638A (zh) * 2021-08-26 2021-11-19 西安热工研究院有限公司 一种lcd拼接屏与rgb相机工作距离的调节系统及方法
CN117115636B (zh) * 2023-09-12 2024-07-16 奥谱天成(厦门)光电有限公司 一种藻类与浮游生物分析方法、分析仪、介质及设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1402191A (zh) * 2002-09-19 2003-03-12 上海交通大学 基于块分割的多聚焦图像融合方法
US20070188652A1 (en) * 2006-02-13 2007-08-16 Casio Computer Co., Ltd. Image capturing apparatus, image composing method and storage medium
CN101426093A (zh) * 2007-10-29 2009-05-06 株式会社理光 图像处理设备、图像处理方法及计算机程序产品
US20120069235A1 (en) * 2010-09-20 2012-03-22 Canon Kabushiki Kaisha Image capture with focus adjustment
CN103186894A (zh) * 2013-03-22 2013-07-03 南京信息工程大学 一种自适应分块的多聚焦图像融合方法
CN104184935A (zh) * 2013-05-27 2014-12-03 鸿富锦精密工业(深圳)有限公司 影像拍摄设备及方法
CN104270560A (zh) * 2014-07-31 2015-01-07 三星电子(中国)研发中心 一种多点对焦方法和装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914687A (zh) * 2014-03-14 2014-07-09 常州大学 一种基于多通道和多阈值的矩形目标识别方法
CN104869316B (zh) * 2015-05-29 2018-07-03 北京京东尚科信息技术有限公司 一种多目标的摄像方法及装置

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1402191A (zh) * 2002-09-19 2003-03-12 上海交通大学 基于块分割的多聚焦图像融合方法
US20070188652A1 (en) * 2006-02-13 2007-08-16 Casio Computer Co., Ltd. Image capturing apparatus, image composing method and storage medium
CN101426093A (zh) * 2007-10-29 2009-05-06 株式会社理光 图像处理设备、图像处理方法及计算机程序产品
US20120069235A1 (en) * 2010-09-20 2012-03-22 Canon Kabushiki Kaisha Image capture with focus adjustment
CN103186894A (zh) * 2013-03-22 2013-07-03 南京信息工程大学 一种自适应分块的多聚焦图像融合方法
CN104184935A (zh) * 2013-05-27 2014-12-03 鸿富锦精密工业(深圳)有限公司 影像拍摄设备及方法
CN104270560A (zh) * 2014-07-31 2015-01-07 三星电子(中国)研发中心 一种多点对焦方法和装置

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112585941A (zh) * 2019-12-30 2021-03-30 深圳市大疆创新科技有限公司 对焦方法、装置、拍摄设备、可移动平台和存储介质
CN117253195A (zh) * 2023-11-13 2023-12-19 广东申立信息工程股份有限公司 一种ipc安全监测方法、监测系统、计算机设备和可读存储介质
CN117253195B (zh) * 2023-11-13 2024-02-27 广东申立信息工程股份有限公司 一种ipc安全监测方法、监测系统、计算机设备和可读存储介质

Also Published As

Publication number Publication date
CN105898135A (zh) 2016-08-24

Similar Documents

Publication Publication Date Title
WO2017080237A1 (fr) Procédé d'imagerie de caméra et dispositif de caméra
EP2768214A2 (fr) Procédé de suivi d'objet utilisant une caméra et système de caméra de suivi d'objet
US9792698B2 (en) Image refocusing
KR101809543B1 (ko) 비접촉식 지문 인식하는 방법 및 이를 수행하기 위한 전자 기기
KR20140013407A (ko) 객체 추적 장치 및 방법
JP5825172B2 (ja) 画像判定装置、画像判定方法及び画像判定用コンピュータプログラム
JP2010045613A (ja) 画像識別方法および撮像装置
CN104463817A (zh) 一种图像处理方法及装置
CN108076278A (zh) 一种自动对焦方法、装置及电子设备
US10455163B2 (en) Image processing apparatus that generates a combined image, control method, and storage medium
WO2017106084A1 (fr) Détection de focalisation
CN110365897B (zh) 图像修正方法和装置、电子设备、计算机可读存储介质
US9995905B2 (en) Method for creating a camera capture effect from user space in a camera capture system
US9706121B2 (en) Image processing apparatus and image processing method
JP6320053B2 (ja) 画像処理装置、画像処理方法、及びコンピュータプログラム
US20130293741A1 (en) Image processing apparatus, image capturing apparatus, and storage medium storing image processing program
US10373329B2 (en) Information processing apparatus, information processing method and storage medium for determining an image to be subjected to a character recognition processing
CN111669492A (zh) 一种终端对拍摄的数字图像进行处理的方法及终端
CN109598195B (zh) 一种基于监控视频的清晰人脸图像处理方法与装置
CN115334241B (zh) 对焦控制方法、装置、存储介质及摄像设备
JP2009159525A (ja) 撮像装置及び画像合成プログラム
JP2010200270A (ja) 画像処理装置、カメラおよびプログラム
KR102628714B1 (ko) 모바일 단말용 사진 촬영 지원 카메라 시스템 및 방법
JP5853369B2 (ja) 画像処理装置、画像処理方法及びプログラム
JP5928024B2 (ja) 画像処理装置、画像処理方法及びプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16863415

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16863415

Country of ref document: EP

Kind code of ref document: A1