WO2021189853A1 - Flash spot position recognition method and apparatus, electronic device, and storage medium - Google Patents

Flash spot position recognition method and apparatus, electronic device, and storage medium

Info

Publication number
WO2021189853A1
WO2021189853A1 PCT/CN2020/125448 CN2020125448W
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
photo
spot
value
recognized
Prior art date
Application number
PCT/CN2020/125448
Other languages
English (en)
French (fr)
Inventor
周建伟
李影
张国辉
宋晨
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2021189853A1 publication Critical patent/WO2021189853A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Definitions

  • This application relates to the field of data processing, and in particular to a method, device, electronic equipment, and storage medium for identifying the position of a flashlight spot.
  • The method for identifying the position of a flash spot includes:
  • parsing a flash spot position recognition request sent by a user via a client, and obtaining the photo to be recognized carried in the recognition request;
  • performing illumination enhancement processing on the photo to be recognized to obtain a first photo, and performing grayscale and binarization processing on the first photo to obtain a second photo;
  • determining a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and performing smoothing processing on the first light spot set to obtain a second light spot set;
  • calculating the area value of each light spot in the second light spot set, determining the target light spot based on the area value, calculating the center point coordinates of the target light spot, and using the center point coordinates as the flash light spot position coordinates.
  • the present application also provides a device for identifying the position of a flashlight spot, and the device includes:
  • the parsing module is used to analyze the flash spot position recognition request sent by the user based on the client, and obtain the photo to be recognized carried in the recognition request;
  • a processing module configured to perform light enhancement processing on the photo to be recognized to obtain a first photo, and perform grayscale and binarization processing on the first photo to obtain a second photo;
  • a determining module configured to determine a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and perform smoothing processing on the first light spot set to obtain a second light spot set;
  • the calculation module is configured to calculate the area value of each spot in the second spot set, determine the target spot based on the area value, calculate the center point coordinates of the target spot, and use the center point coordinates as the flash light spot position coordinates.
  • This application also provides an electronic device, which includes:
  • At least one processor and,
  • a memory communicatively connected with the at least one processor; wherein,
  • the memory stores a flash spot position recognition program that can be executed by the at least one processor, and the flash spot position recognition program is executed by the at least one processor, so that the at least one processor can execute the following steps:
  • parsing a flash spot position recognition request sent by a user via a client, and obtaining the photo to be recognized carried in the recognition request; performing illumination enhancement processing on the photo to be recognized to obtain a first photo, and performing grayscale and binarization processing on the first photo to obtain a second photo; determining a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and performing smoothing processing on the first light spot set to obtain a second light spot set; calculating the area value of each light spot in the second light spot set, determining the target light spot based on the area value, calculating the center point coordinates of the target light spot, and using the center point coordinates as the flash light spot position coordinates.
  • The present application also provides a computer-readable storage medium on which a flash spot position recognition program is stored, and the flash spot position recognition program can be executed by one or more processors to implement the following steps:
  • parsing a flash spot position recognition request sent by a user via a client, and obtaining the photo to be recognized carried in the recognition request; performing illumination enhancement processing on the photo to be recognized to obtain a first photo, and performing grayscale and binarization processing on the first photo to obtain a second photo; determining a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and performing smoothing processing on the first light spot set to obtain a second light spot set; calculating the area value of each light spot in the second light spot set, determining the target light spot based on the area value, calculating the center point coordinates of the target light spot, and using the center point coordinates as the flash light spot position coordinates.
  • FIG. 1 is a schematic flowchart of a method for identifying the position of a flashlight spot provided by an embodiment of the application;
  • FIG. 2 is a schematic diagram of modules of a flash spot position recognition device provided by an embodiment of the application.
  • FIG. 3 is a schematic structural diagram of an electronic device for implementing a method for identifying a position of a flashlight spot provided by an embodiment of the application;
  • This application provides a method for identifying the position of a flashlight spot.
  • Referring to FIG. 1, it is a schematic flowchart of a method for identifying the position of a flash spot provided by an embodiment of the application.
  • the method can be executed by an electronic device, and the electronic device can be implemented by software and/or hardware.
  • the method for identifying the position of a flashlight spot includes:
  • the photo to be identified is a photo taken by turning on a flash.
  • the performing illumination enhancement processing on the photo to be recognized to obtain the first photo includes:
  • A1. Calculate the dark channel pixel value of each pixel in the photo to be recognized. A color image includes three channels, R, G and B, and each color is a combination of red, green and blue; for example, red is (255,0,0) and pink is (255,192,203). The dark channel refers to the fact that, for any local pixel in an image other than the sky area, at least one channel value is very low.
  • the calculation formula of the dark channel pixel value is:
  • where H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, c_ij is the RGB three-channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, and M_ij is the dark channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized.
  • A2. Perform filtering processing on the dark channel pixel values to obtain the dark channel standard pixel value of each pixel in the photo to be recognized;
  • In this embodiment, the dark channel standard pixel value of each pixel is obtained by calculating the average value of the dark channel pixel values of its 8 neighboring pixels (that is, the dark channel standard pixel value of the center pixel of a 3×3 grid is the average value of the dark channel pixel values of the other 8 pixels in the grid); the purpose of this step is to reduce the influence of noise on the picture.
  • A3. Calculate the average value of the dark channel pixel values of all pixels in the photo to be recognized;
  • A4. Calculate the atmospheric transmittance and global atmospheric light value of each pixel in the photo to be recognized based on the average value and the dark channel standard pixel value;
  • The calculation formula of the atmospheric transmittance is: L_ij = min((min(q, 0.9)) * P_ij, M_ij)
  • where q is the average value of the dark channel pixel values of all pixels in the photo to be recognized, P_ij is the dark channel standard pixel value of the pixel in the i-th row and j-th column, M_ij is the dark channel pixel value of the pixel in the i-th row and j-th column, and L_ij is the atmospheric refractive index of the pixel in the i-th row and j-th column of the photo to be recognized.
  • In the calculation formula of the global atmospheric light value, H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, c_ij is the RGB three-channel pixel value of the pixel in the i-th row and j-th column, P_ij is the dark channel standard pixel value of the pixel in the i-th row and j-th column, and A is the global atmospheric light value.
  • A5. Perform illumination enhancement processing on each pixel in the photo to be recognized based on the atmospheric transmittance and the global atmospheric light value to obtain a first photo.
  • In the calculation formula corresponding to the illumination enhancement processing, H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, L_ij is the atmospheric refractive index of the pixel in the i-th row and j-th column of the photo to be recognized, A is the global atmospheric light value, and F_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized after illumination enhancement.
  • Compared with the prior art, in which only strong light spots are enhanced, the present application enhances both strong and dark light spots through the atmospheric transmittance and the global atmospheric light value, which avoids the possibility that weak light spots cannot be identified and makes the subsequent detection of the flash spot position more accurate.
  • The calculation formula for the grayscale processing is: Y_ij = 0.299 * R_ij + 0.587 * G_ij + 0.114 * B_ij
  • where R_ij is the R channel pixel value of the pixel in the i-th row and j-th column of the first photo, G_ij is the G channel pixel value of the pixel in the i-th row and j-th column of the first photo, B_ij is the B channel pixel value of the pixel in the i-th row and j-th column of the first photo, and Y_ij is the gray value of the pixel in the i-th row and j-th column of the first photo.
  • The gray value of a pixel has 256 possible levels, while an RGB color pixel has more than 16 million possible values; grayscale processing therefore reduces the dimensionality of the image and greatly reduces the amount of calculation.
  • In the binarization processing, Y_ij is the gray value of the pixel in the i-th row and j-th column of the first photo, and W_ij is the pixel value of the pixel in the i-th row and j-th column of the second photo (that is, the pixel value of the pixel in the i-th row and j-th column of the first photo after binarization processing).
  • The photo is converted into the two colors black and white through the binarization processing, which makes the image simpler, reduces the amount of data, and highlights the outline of the light spots.
  • the determining multiple light spots according to the pixel value of each pixel in the second photo includes:
  • The pixel value of each pixel in the second photo is judged row by row to determine whether it satisfies a first condition or a second condition; pixels satisfying the first condition are taken as spot boundary starting points, and pixels satisfying the second condition are taken as spot boundary end points.
  • The first condition is: when W_i(j-1) = 0 and W_ij = 1, W_ij is a spot boundary starting point (that is, when the pixel value of the pixel in the i-th row and (j-1)-th column of the second photo is 0 and the pixel value of the pixel in the i-th row and j-th column is 1, the pixel in the i-th row and j-th column is a spot boundary starting point). The second condition is: when W_mn = 1 and W_m(n+1) = 0, W_mn is a spot boundary end point.
  • For each row of pixels, each boundary starting point and the first boundary end point on its right side are regarded as a boundary pair.
  • When the last boundary starting point of a row has no corresponding boundary end point, the last pixel of the row is taken as the boundary end point corresponding to that last boundary starting point; when the first boundary end point of a row has no corresponding boundary starting point, the first pixel of the row is taken as the boundary starting point corresponding to that first boundary end point.
  • the boundary pairs of all rows are summarized to obtain multiple light spots.
  • the performing smoothing processing on the first light spot set to obtain the second light spot set includes:
  • the preset convolution kernel G is:
  • A pixel is selected from the first light spot set, and its pixel value is ANDed with the convolution kernel G to obtain a pixel value matrix; the smallest value in the pixel value matrix is used as the target pixel value of the selected pixel.
  • The spot boundaries are then re-determined according to the above first condition and second condition to obtain the second light spot set. Through the smoothing processing, burrs in the spot regions are eliminated, the spot boundaries are re-determined, and noise spots are removed.
  • the determining the target light spot based on the area value includes:
  • The light spots in the second light spot set whose area value is smaller than a preset threshold are deleted to obtain a third light spot set, and the light spot with the largest area value in the third light spot set is used as the target light spot.
  • the method also includes:
  • When the absolute value of the difference between the flash spot position coordinates and preset coordinates is smaller than a difference threshold, the photo to be recognized is taken as the target photo.
  • The preset coordinates in this embodiment are the center coordinates of the authentication area; when the absolute value of the difference between the flash spot position coordinates in the photo to be recognized and the center coordinates of the authentication area is smaller than the difference threshold, the photo to be recognized can be used to determine the authenticity of the identity card in the photo.
  • The flash spot position recognition method proposed in this application first performs illumination enhancement processing on the photo to be recognized to obtain the first photo, and performs grayscale and binarization processing on the first photo to obtain the second photo. The illumination enhancement strengthens both strong and weak light spots, avoiding the possibility that weak spots are not recognized and making the subsequent flash spot recognition results more accurate, while the grayscale and binarization processing reduces the dimensionality of the image, making it simpler and better able to highlight the spot outline.
  • As shown in FIG. 2, it is a schematic diagram of the modules of a flash spot position recognition device provided by an embodiment of the application.
  • the flash spot position identification device 100 described in this application can be installed in an electronic device.
  • the flash light spot position recognition device 100 may include an analysis module 110, a processing module 120, a determination module 130, and a calculation module 140.
  • the module described in this application can also be called a unit, which refers to a series of computer program segments that can be executed by the processor of an electronic device and can complete fixed functions, and are stored in the memory of the electronic device.
  • each module/unit is as follows:
  • the parsing module 110 is configured to analyze the flash spot position recognition request sent by the user based on the client, and obtain the photo to be recognized carried in the recognition request;
  • the processing module 120 is configured to perform illumination enhancement processing on the photo to be recognized to obtain a first photo, and perform grayscale and binarization processing on the first photo to obtain a second photo.
  • the photo to be identified is a photo taken by turning on a flash.
  • the performing illumination enhancement processing on the photo to be recognized to obtain the first photo includes:
  • A1. Calculate the dark channel pixel value of each pixel in the photo to be recognized. A color image includes three channels, R, G and B, and each color is a combination of red, green and blue; for example, red is (255,0,0) and pink is (255,192,203). The dark channel refers to the fact that, for any local pixel in an image other than the sky area, at least one channel value is very low.
  • the calculation formula of the dark channel pixel value is:
  • where H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, c_ij is the RGB three-channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, and M_ij is the dark channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized.
  • A2. Perform filtering processing on the dark channel pixel values to obtain the dark channel standard pixel value of each pixel in the photo to be recognized;
  • In this embodiment, the dark channel standard pixel value of each pixel is obtained by calculating the average value of the dark channel pixel values of its 8 neighboring pixels (that is, the dark channel standard pixel value of the center pixel of a 3×3 grid is the average value of the dark channel pixel values of the other 8 pixels in the grid); the purpose of this step is to reduce the influence of noise on the picture.
  • A3. Calculate the average value of the dark channel pixel values of all pixels in the photo to be recognized;
  • A4. Calculate the atmospheric transmittance and global atmospheric light value of each pixel in the photo to be recognized based on the average value and the dark channel standard pixel value;
  • The calculation formula of the atmospheric transmittance is: L_ij = min((min(q, 0.9)) * P_ij, M_ij)
  • where q is the average value of the dark channel pixel values of all pixels in the photo to be recognized, P_ij is the dark channel standard pixel value of the pixel in the i-th row and j-th column, M_ij is the dark channel pixel value of the pixel in the i-th row and j-th column, and L_ij is the atmospheric refractive index of the pixel in the i-th row and j-th column of the photo to be recognized.
  • In the calculation formula of the global atmospheric light value, H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, c_ij is the RGB three-channel pixel value of the pixel in the i-th row and j-th column, P_ij is the dark channel standard pixel value of the pixel in the i-th row and j-th column, and A is the global atmospheric light value.
  • A5. Perform illumination enhancement processing on each pixel in the photo to be recognized based on the atmospheric transmittance and the global atmospheric light value to obtain a first photo.
  • In the calculation formula corresponding to the illumination enhancement processing, H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, L_ij is the atmospheric refractive index of the pixel in the i-th row and j-th column of the photo to be recognized, A is the global atmospheric light value, and F_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized after illumination enhancement.
  • Compared with the prior art, in which only strong light spots are enhanced, the present application enhances both strong and dark light spots through the atmospheric transmittance and the global atmospheric light value, which avoids the possibility that weak light spots cannot be identified and makes the subsequent detection of the flash spot position more accurate.
  • The calculation formula for the grayscale processing is: Y_ij = 0.299 * R_ij + 0.587 * G_ij + 0.114 * B_ij
  • where R_ij is the R channel pixel value of the pixel in the i-th row and j-th column of the first photo, G_ij is the G channel pixel value of the pixel in the i-th row and j-th column of the first photo, B_ij is the B channel pixel value of the pixel in the i-th row and j-th column of the first photo, and Y_ij is the gray value of the pixel in the i-th row and j-th column of the first photo.
  • The gray value of a pixel has 256 possible levels, while an RGB color pixel has more than 16 million possible values; grayscale processing therefore reduces the dimensionality of the image and greatly reduces the amount of calculation.
  • In the binarization processing, Y_ij is the gray value of the pixel in the i-th row and j-th column of the first photo, and W_ij is the pixel value of the pixel in the i-th row and j-th column of the second photo (that is, the pixel value of the pixel in the i-th row and j-th column of the first photo after binarization processing).
  • The photo is converted into the two colors black and white through the binarization processing, which makes the image simpler, reduces the amount of data, and highlights the outline of the light spots.
  • the determining module 130 is configured to determine a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and perform smoothing processing on the first light spot set to obtain a second light spot set.
  • the determining multiple light spots according to the pixel value of each pixel in the second photo includes:
  • The pixel value of each pixel in the second photo is judged row by row to determine whether it satisfies a first condition or a second condition; pixels satisfying the first condition are taken as spot boundary starting points, and pixels satisfying the second condition are taken as spot boundary end points.
  • The first condition is: when W_i(j-1) = 0 and W_ij = 1, W_ij is a spot boundary starting point (that is, when the pixel value of the pixel in the i-th row and (j-1)-th column of the second photo is 0 and the pixel value of the pixel in the i-th row and j-th column is 1, the pixel in the i-th row and j-th column is a spot boundary starting point). The second condition is: when W_mn = 1 and W_m(n+1) = 0, W_mn is a spot boundary end point.
  • For each row of pixels, each boundary starting point and the first boundary end point on its right side are regarded as a boundary pair.
  • When the last boundary starting point of a row has no corresponding boundary end point, the last pixel of the row is taken as the boundary end point corresponding to that last boundary starting point; when the first boundary end point of a row has no corresponding boundary starting point, the first pixel of the row is taken as the boundary starting point corresponding to that first boundary end point.
  • the boundary pairs of all rows are summarized to obtain multiple light spots.
  • the performing smoothing processing on the first light spot set to obtain the second light spot set includes:
  • the preset convolution kernel G is:
  • A pixel is selected from the first light spot set, and its pixel value is ANDed with the convolution kernel G to obtain a pixel value matrix; the smallest value in the pixel value matrix is used as the target pixel value of the selected pixel.
  • The spot boundaries are then re-determined according to the above first condition and second condition to obtain the second light spot set. Through the smoothing processing, burrs in the spot regions are eliminated, the spot boundaries are re-determined, and noise spots are removed.
  • the calculation module 140 is configured to calculate the area value of each light spot in the second light spot set, determine a target light spot based on the area value, calculate the center point coordinates of the target light spot, and use the center point coordinates as the flash light spot position coordinates.
  • the determining the target light spot based on the area value includes:
  • The light spots in the second light spot set whose area value is smaller than a preset threshold are deleted to obtain a third light spot set, and the light spot with the largest area value in the third light spot set is used as the target light spot.
  • the method also includes:
  • When the absolute value of the difference between the flash spot position coordinates and preset coordinates is smaller than a difference threshold, the photo to be recognized is taken as the target photo.
  • The preset coordinates in this embodiment are the center coordinates of the authentication area; when the absolute value of the difference between the flash spot position coordinates in the photo to be recognized and the center coordinates of the authentication area is smaller than the difference threshold, the photo to be recognized can be used to determine the authenticity of the identity card in the photo.
  • As shown in FIG. 3, it is a schematic structural diagram of an electronic device that implements a method for identifying the position of a flash spot provided by an embodiment of the application.
  • the electronic device 1 is a device that can automatically perform numerical calculation and/or information processing in accordance with pre-set or stored instructions.
  • The electronic device 1 may be a computer, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing, where cloud computing is a type of distributed computing: a super virtual computer composed of a group of loosely coupled computers.
  • the electronic device 1 includes, but is not limited to, a memory 11, a processor 12, and a network interface 13 that are communicatively connected to each other through a system bus.
  • The memory 11 stores a flash spot position recognition program 10, and the flash spot position recognition program 10 can be executed by the processor 12.
  • FIG. 3 only shows the electronic device 1 with the components 11-13 and the flash spot position recognition program 10. Those skilled in the art can understand that the structure shown in FIG. 3 does not constitute a limitation on the electronic device 1, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
  • the memory 11 includes a memory and at least one type of readable storage medium.
  • the memory provides a cache for the operation of the electronic device 1;
  • The readable storage medium may be a non-volatile storage medium such as a flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, or optical disk.
  • The readable storage medium may be an internal storage unit of the electronic device 1, such as the hard disk of the electronic device 1.
  • The non-volatile storage medium may also be an external storage device of the electronic device 1, such as a plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card, or flash memory card (Flash Card) equipped on the electronic device 1.
  • the readable storage medium of the memory 11 is generally used to store the operating system and various application software installed in the electronic device 1, for example, to store the code of the flash spot position recognition program 10 in an embodiment of the present application.
  • the memory 11 can also be used to temporarily store various types of data that have been output or will be output.
  • the processor 12 may be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or other data processing chips.
  • the processor 12 is generally used to control the overall operation of the electronic device 1, such as performing data interaction or communication-related control and processing with other devices.
  • the processor 12 is used to run the program code or processing data stored in the memory 11, for example, to run the flash spot position recognition program 10 and so on.
  • the network interface 13 may include a wireless network interface or a wired network interface, and the network interface 13 is used to establish a communication connection between the electronic device 1 and a client (not shown in the figure).
  • the electronic device 1 may further include a user interface.
  • the user interface may include a display (Display) and an input unit such as a keyboard (Keyboard).
  • the optional user interface may also include a standard wired interface and a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, etc.
  • the display can also be appropriately called a display screen or a display unit, which is used to display the information processed in the electronic device 1 and to display a visualized user interface.
  • the flash spot position recognition program 10 stored in the memory 11 in the electronic device 1 is a combination of multiple instructions. When running in the processor 12, it can realize:
  • parsing a flash spot position recognition request sent by a user via a client, and obtaining the photo to be recognized carried in the recognition request; performing illumination enhancement processing on the photo to be recognized to obtain a first photo, and performing grayscale and binarization processing on the first photo to obtain a second photo; determining a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and performing smoothing processing on the first light spot set to obtain a second light spot set; calculating the area value of each light spot in the second light spot set, determining the target light spot based on the area value, calculating the center point coordinates of the target light spot, and using the center point coordinates as the flash light spot position coordinates.
  • the photos to be identified can also be stored in a node of a blockchain.
  • the integrated module/unit of the electronic device 1 is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • The computer-readable medium may be non-volatile or volatile.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, U disk, mobile hard disk, magnetic disk, optical disk, computer memory, read-only memory (ROM, Read-Only Memory) .
  • The computer-readable storage medium stores a flash spot position recognition program 10, and the flash spot position recognition program 10 can be executed by one or more processors; the specific implementation of the computer-readable storage medium of the present application is substantially the same as the embodiments of the above flash spot position recognition method, and will not be repeated here.
  • modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional modules.
  • the blockchain referred to in this application is a new application mode of computer technology such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
  • A blockchain is essentially a decentralized database: a chain of data blocks generated in association with one another using cryptographic methods, where each data block contains information on a batch of network transactions, which is used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A flash spot position recognition method and apparatus, an electronic device, and a storage medium, relating to data processing. The method includes: parsing a flash spot position recognition request sent by a user via a client, and obtaining a photo to be recognized carried in the recognition request (S1); performing illumination enhancement processing on the photo to be recognized to obtain a first photo, and performing grayscale and binarization processing on the first photo to obtain a second photo (S2); determining a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and performing smoothing processing on the first light spot set to obtain a second light spot set (S3); calculating the area value of each light spot in the second light spot set, determining a target light spot based on the area value, calculating the center point coordinates of the target light spot, and taking the center point coordinates as the flash spot position coordinates (S4). The method can improve the accuracy of flash spot position recognition.

Description

Flash spot position recognition method and apparatus, electronic device, and storage medium
This application claims priority to the Chinese patent application No. CN202011013971.1, filed with the Chinese Patent Office on September 23, 2020 and entitled "Flash spot position recognition method and apparatus, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of data processing, and in particular to a flash spot position recognition method and apparatus, an electronic device, and a storage medium.
Background
With the development of technology, online services, unconstrained by time and space, are widely used in people's lives. For example, a user opens an account online through a mobile phone APP and needs to upload an ID card photo to verify the user's identity. The authenticity of the ID card in the photo is verified by comparing the pixel-value changes in the anti-counterfeiting (authentication) area of the ID card between a photo taken with the flash on and a photo taken without the flash. However, the flash position differs from one camera device to another, so in order to obtain a photo in which the flash spot illuminates the authentication area, the outline of the flash spot and the position of the spot center must first be recognized.
The inventors realized that spot edges are currently usually detected by the Hough transform. However, the flash illumination intensity differs between camera devices, and the Hough transform is not sensitive to illumination intensity and cannot accurately distinguish strong spots, dark spots, and white noise, resulting in low accuracy of spot position recognition. Therefore, a flash spot position recognition method is urgently needed to improve recognition accuracy.
Summary
The flash spot position recognition method provided by this application includes:
parsing a flash spot position recognition request sent by a user via a client, and obtaining a photo to be recognized carried in the recognition request;
performing illumination enhancement processing on the photo to be recognized to obtain a first photo, and performing grayscale and binarization processing on the first photo to obtain a second photo;
determining a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and performing smoothing processing on the first light spot set to obtain a second light spot set;
calculating the area value of each light spot in the second light spot set, determining a target light spot based on the area value, calculating the center point coordinates of the target light spot, and taking the center point coordinates as the flash spot position coordinates.
This application also provides a flash spot position recognition apparatus, the apparatus including:
a parsing module, configured to parse a flash spot position recognition request sent by a user via a client and obtain a photo to be recognized carried in the recognition request;
a processing module, configured to perform illumination enhancement processing on the photo to be recognized to obtain a first photo, and perform grayscale and binarization processing on the first photo to obtain a second photo;
a determining module, configured to determine a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and perform smoothing processing on the first light spot set to obtain a second light spot set;
a calculation module, configured to calculate the area value of each light spot in the second light spot set, determine a target light spot based on the area value, calculate the center point coordinates of the target light spot, and take the center point coordinates as the flash spot position coordinates.
This application also provides an electronic device, the electronic device including:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores a flash spot position recognition program executable by the at least one processor, and the flash spot position recognition program is executed by the at least one processor, so that the at least one processor can execute the following steps:
parsing a flash spot position recognition request sent by a user via a client, and obtaining a photo to be recognized carried in the recognition request;
performing illumination enhancement processing on the photo to be recognized to obtain a first photo, and performing grayscale and binarization processing on the first photo to obtain a second photo;
determining a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and performing smoothing processing on the first light spot set to obtain a second light spot set;
calculating the area value of each light spot in the second light spot set, determining a target light spot based on the area value, calculating the center point coordinates of the target light spot, and taking the center point coordinates as the flash spot position coordinates.
This application also provides a computer-readable storage medium on which a flash spot position recognition program is stored, and the flash spot position recognition program can be executed by one or more processors to implement the following steps:
parsing a flash spot position recognition request sent by a user via a client, and obtaining a photo to be recognized carried in the recognition request;
performing illumination enhancement processing on the photo to be recognized to obtain a first photo, and performing grayscale and binarization processing on the first photo to obtain a second photo;
determining a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and performing smoothing processing on the first light spot set to obtain a second light spot set;
calculating the area value of each light spot in the second light spot set, determining a target light spot based on the area value, calculating the center point coordinates of the target light spot, and taking the center point coordinates as the flash spot position coordinates.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a flash spot position recognition method provided by an embodiment of this application;
FIG. 2 is a schematic module diagram of a flash spot position recognition apparatus provided by an embodiment of this application;
FIG. 3 is a schematic structural diagram of an electronic device implementing a flash spot position recognition method provided by an embodiment of this application;
The realization of the objectives, functional features, and advantages of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative work fall within the protection scope of this application.
It should be noted that descriptions involving "first", "second", and the like in this application are for descriptive purposes only and cannot be understood as indicating or implying their relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments can be combined with each other, but only on the basis that they can be implemented by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be implemented, such a combination should be considered not to exist and not to fall within the protection scope claimed by this application.
This application provides a flash spot position recognition method. Referring to FIG. 1, it is a schematic flowchart of a flash spot position recognition method provided by an embodiment of this application. The method may be executed by an electronic device, and the electronic device may be implemented by software and/or hardware.
In this embodiment, the flash spot position recognition method includes:
S1. Parse a flash spot position recognition request sent by a user via a client, and obtain a photo to be recognized carried in the recognition request;
S2. Perform illumination enhancement processing on the photo to be recognized to obtain a first photo, and perform grayscale and binarization processing on the first photo to obtain a second photo.
In this embodiment, the photo to be recognized is a photo taken with the flash turned on. Performing illumination enhancement processing on the photo to be recognized to obtain the first photo includes:
A1. Calculate the dark channel pixel value of each pixel in the photo to be recognized;
A color image includes three channels, R, G and B, and each color is a combination of red, green, and blue; for example, red is (255,0,0) and pink is (255,192,203). The dark channel refers to the fact that, for any local pixel in an image other than the sky area, at least one channel value is very low.
The calculation formula of the dark channel pixel value is:
Figure PCTCN2020125448-appb-000001
where H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, c_ij is the RGB three-channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, and M_ij is the dark channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized.
A2. Perform filtering processing on the dark channel pixel values to obtain the dark channel standard pixel value of each pixel in the photo to be recognized;
In this embodiment, the dark channel standard pixel value of each pixel is obtained by calculating the average value of the dark channel pixel values of its 8 neighboring pixels (that is, the dark channel standard pixel value of the center pixel of a 3×3 grid is the average value of the dark channel pixel values of the other 8 pixels in the grid). The purpose of this step is to reduce the influence of noise on the picture.
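As a concrete reference for steps A1 and A2, the following Python sketch computes the per-pixel dark channel and the 8-neighbor mean filter described above. The use of NumPy, the function names, and the replicate-padding at the image border are illustrative assumptions and are not part of the original disclosure.

```python
import numpy as np

def dark_channel(photo_rgb: np.ndarray) -> np.ndarray:
    """A1: dark channel pixel value M_ij = minimum of the R, G, B channel values at each pixel."""
    # photo_rgb has shape (height, width, 3)
    return photo_rgb.min(axis=2).astype(np.float64)

def dark_channel_standard(dark: np.ndarray) -> np.ndarray:
    """A2: dark channel standard pixel value = mean of the 8 neighboring dark channel values."""
    h, w = dark.shape
    # Border pixels do not have 8 neighbors; replicating the edge is an assumed convention.
    padded = np.pad(dark, 1, mode="edge")
    neighbour_sum = np.zeros_like(dark)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue  # the center pixel itself is excluded from the average
            neighbour_sum += padded[1 + di:1 + di + h, 1 + dj:1 + dj + w]
    return neighbour_sum / 8.0
```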
A3. Calculate the average value of the dark channel pixel values of all pixels in the photo to be recognized;
A4. Calculate the atmospheric transmittance and the global atmospheric light value of each pixel in the photo to be recognized based on the average value and the dark channel standard pixel values;
The calculation formula of the atmospheric transmittance is:
L_ij = min((min(q, 0.9)) * P_ij, M_ij)
where q is the average value of the dark channel pixel values of all pixels in the photo to be recognized, P_ij is the dark channel standard pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, M_ij is the dark channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, and L_ij is the atmospheric refractive index of the pixel in the i-th row and j-th column of the photo to be recognized.
The calculation formula of the global atmospheric light value is:
Figure PCTCN2020125448-appb-000002
where H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, c_ij is the RGB three-channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, P_ij is the dark channel standard pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, and A is the global atmospheric light value.
A5. Perform illumination enhancement processing on each pixel in the photo to be recognized based on the atmospheric transmittance and the global atmospheric light value to obtain the first photo.
The calculation formula corresponding to the illumination enhancement processing is:
Figure PCTCN2020125448-appb-000003
where H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, L_ij is the atmospheric refractive index of the pixel in the i-th row and j-th column of the photo to be recognized, A is the global atmospheric light value, and F_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized after illumination enhancement.
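The sketch below continues with A3 to A5. The transmittance formula L_ij = min((min(q, 0.9)) * P_ij, M_ij) is the one given in the text; the published formulas for the global atmospheric light value A and for the enhanced pixel value F_ij are reproduced only as images, so the code falls back on a He-et-al.-style estimate of A and on the standard dark-channel recovery F = (H - A) / L + A, both of which are assumptions rather than the exact formulas of this application. Pixel values are assumed to be normalized to [0, 1] so that the 0.9 clamp in the transmittance formula is meaningful; q would be computed as the mean of the dark channel values from step A3, e.g. q = float(dark.mean()).

```python
import numpy as np

def transmittance(q: float, P: np.ndarray, M: np.ndarray) -> np.ndarray:
    """A4: L_ij = min((min(q, 0.9)) * P_ij, M_ij), as given in the description."""
    return np.minimum(min(q, 0.9) * P, M)

def global_atmospheric_light(photo: np.ndarray, P: np.ndarray, top_fraction: float = 0.001) -> float:
    """Assumed estimate of A: brightest channel value among the brightest filtered dark-channel pixels."""
    flat = P.ravel()
    k = max(1, int(flat.size * top_fraction))
    idx = np.argpartition(flat, -k)[-k:]          # indices of the k brightest dark-channel pixels
    rows, cols = np.unravel_index(idx, P.shape)
    return float(photo[rows, cols].max())

def enhance(photo: np.ndarray, L: np.ndarray, A: float, eps: float = 1e-6) -> np.ndarray:
    """A5 (assumed standard recovery): F_ij = (H_ij - A) / L_ij + A, applied per channel."""
    F = (photo - A) / np.clip(L, eps, None)[..., None] + A
    return np.clip(F, 0.0, 1.0)                   # keep the enhanced photo in the valid range
```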
Compared with the prior art, in which only strong light spots are enhanced, this application enhances both strong and dark light spots through the atmospheric transmittance and the global atmospheric light value, which avoids the possibility that weak light spots cannot be recognized and makes the subsequent detection of the flash spot position more accurate.
In this embodiment, the calculation formula of the grayscale processing is:
Y_ij = 0.299 * R_ij + 0.587 * G_ij + 0.114 * B_ij
where R_ij is the R channel pixel value of the pixel in the i-th row and j-th column of the first photo, G_ij is the G channel pixel value of the pixel in the i-th row and j-th column of the first photo, B_ij is the B channel pixel value of the pixel in the i-th row and j-th column of the first photo, and Y_ij is the gray value of the pixel in the i-th row and j-th column of the first photo.
The gray value of a pixel has 256 possible levels, while an RGB color pixel has more than 16 million possible values; grayscale processing therefore reduces the dimensionality of the image and greatly reduces the amount of calculation.
The calculation formula of the binarization processing is:
Figure PCTCN2020125448-appb-000004
where Y_ij is the gray value of the pixel in the i-th row and j-th column of the first photo, and W_ij is the pixel value of the pixel in the i-th row and j-th column of the second photo (that is, the pixel value of the pixel in the i-th row and j-th column of the first photo after binarization processing).
Binarization converts the photo into the two colors black and white, which makes the image simpler, reduces the amount of data, and highlights the outline of the light spots.
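A short sketch of the grayscale and binarization steps follows. The grayscale weights are those given above; the binarization formula is published only as an image and its threshold is not stated in the text, so the fixed threshold below is a placeholder assumption.

```python
import numpy as np

def to_gray(first_photo: np.ndarray) -> np.ndarray:
    """Y_ij = 0.299 * R_ij + 0.587 * G_ij + 0.114 * B_ij, with the weights given in the description."""
    R, G, B = first_photo[..., 0], first_photo[..., 1], first_photo[..., 2]
    return 0.299 * R + 0.587 * G + 0.114 * B

def binarize(gray: np.ndarray, threshold: float = 200.0) -> np.ndarray:
    """W_ij = 1 where the gray value exceeds the threshold, otherwise 0 (threshold value is assumed)."""
    return (gray > threshold).astype(np.uint8)
```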
S3. Determine a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and perform smoothing processing on the first light spot set to obtain a second light spot set.
In this embodiment, determining a plurality of light spots according to the pixel value of each pixel in the second photo includes:
B1. Judge, row by row, whether the pixel value of each pixel in the second photo satisfies a first condition or a second condition; take pixels satisfying the first condition as spot boundary starting points and pixels satisfying the second condition as spot boundary end points;
The first condition is: when W_i(j-1) = 0 and W_ij = 1, W_ij is a spot boundary starting point (that is, when the pixel value of the pixel in the i-th row and (j-1)-th column of the second photo is 0 and the pixel value of the pixel in the i-th row and j-th column is 1, the pixel in the i-th row and j-th column is a spot boundary starting point).
The second condition is: when W_mn = 1 and W_m(n+1) = 0, W_mn is a spot boundary end point (that is, when the pixel value of the pixel in the m-th row and n-th column of the second photo is 1 and the pixel value of the pixel in the m-th row and (n+1)-th column is 0, the pixel in the m-th row and n-th column is a spot boundary end point).
B2. Determine a plurality of light spots according to the spot boundary starting points and end points to obtain the first light spot set.
For each row of pixels, each boundary starting point and the first boundary end point on its right side are taken as a boundary pair. When the last boundary starting point of a row has no corresponding boundary end point, the last pixel of the row is taken as the boundary end point corresponding to that last starting point; when the first boundary end point of a row has no corresponding boundary starting point, the first pixel of the row is taken as the boundary starting point corresponding to that first end point.
After the boundary pairs of each row are obtained, the boundary pairs of all rows are aggregated to obtain a plurality of light spots.
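The row-wise scan of B1 and B2 can be sketched as follows: 0-to-1 transitions give boundary starting points, 1-to-0 transitions give boundary end points, and the two row-edge fallbacks described above supply the missing partner when a run touches the edge of the row. Grouping the per-row boundary pairs of adjacent rows into individual spots (for example with a connected-component pass) is left to the caller, since the text does not detail that aggregation.

```python
import numpy as np

def row_boundary_pairs(W: np.ndarray) -> list:
    """Return, for every row i of the binarized photo W, a list of (i, j_start, j_end) boundary pairs."""
    all_pairs = []
    h, w = W.shape
    for i in range(h):
        row = W[i]
        # First condition: W[i, j-1] == 0 and W[i, j] == 1  ->  spot boundary starting point.
        starts = [j for j in range(1, w) if row[j - 1] == 0 and row[j] == 1]
        # Second condition: W[i, j] == 1 and W[i, j+1] == 0  ->  spot boundary end point.
        ends = [j for j in range(w - 1) if row[j] == 1 and row[j + 1] == 0]
        if row[0] == 1:
            starts.insert(0, 0)      # first end point without a starting point: use the first pixel
        if row[-1] == 1:
            ends.append(w - 1)       # last starting point without an end point: use the last pixel
        all_pairs.append([(i, s, e) for s, e in zip(starts, ends)])
    return all_pairs
```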
Performing smoothing processing on the first light spot set to obtain the second light spot set includes:
C1. Perform an AND operation between a preset convolution kernel and the pixel value of each pixel in the first light spot set to obtain a target pixel value of each pixel;
C2. Re-determine the boundary of each light spot in the first light spot set according to the target pixel values to obtain the second light spot set.
In this embodiment, the preset convolution kernel G is:
Figure PCTCN2020125448-appb-000005
A pixel is selected from the first light spot set, and its pixel value is ANDed with the convolution kernel G to obtain a pixel value matrix; the smallest value in the pixel value matrix is taken as the target pixel value of the selected pixel, and the spot boundaries are re-determined according to the above first and second conditions to obtain the second light spot set. Through the smoothing processing, this application eliminates burrs in the spot regions, re-determines the spot boundaries, and removes noise spots.
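A sketch of the smoothing in C1 and C2. The preset 3×3 convolution kernel G is published only as an image; an all-ones kernel is assumed here, in which case ANDing the kernel with a pixel's 3×3 neighborhood and keeping the smallest value reduces to a morphological erosion of the binary spot mask.

```python
from typing import Optional
import numpy as np

def smooth_spots(W: np.ndarray, G: Optional[np.ndarray] = None) -> np.ndarray:
    """C1: AND each pixel's 3x3 neighborhood with the kernel G and keep the smallest value.
    C2 then re-applies the first and second boundary conditions to the smoothed mask."""
    if G is None:
        G = np.ones((3, 3), dtype=np.uint8)   # assumed kernel; the patent's G is not reproduced in the text
    h, w = W.shape
    padded = np.pad(W, 1, mode="constant", constant_values=0)
    smoothed = np.empty_like(W)
    for i in range(h):
        for j in range(w):
            neighbourhood = padded[i:i + 3, j:j + 3]
            smoothed[i, j] = np.min(neighbourhood & G)   # bitwise AND, then the minimum of the matrix
    return smoothed
```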
S4. Calculate the area value of each light spot in the second light spot set, determine a target light spot based on the area value, calculate the center point coordinates of the target light spot, and take the center point coordinates as the flash spot position coordinates.
Determining the target light spot based on the area value includes:
D1. Delete the light spots in the second light spot set whose area value is smaller than a preset threshold to obtain a third light spot set;
D2. Take the light spot with the largest area value in the third light spot set as the target light spot.
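On top of the per-row boundary pairs, S4 and D1-D2 can be sketched as below: the area value of a spot is taken as its pixel count and its center as the centroid of its pixels. The text does not state which center definition is used, so the centroid is an assumption.

```python
def spot_area(spot_runs: list) -> int:
    """Area value of a spot given as horizontal runs (i, j_start, j_end): the number of its pixels."""
    return sum(j_end - j_start + 1 for _, j_start, j_end in spot_runs)

def spot_center(spot_runs: list) -> tuple:
    """Center point of a spot, taken here as the centroid of its pixels (assumed definition)."""
    total, sum_i, sum_j = 0, 0.0, 0.0
    for i, j_start, j_end in spot_runs:
        n = j_end - j_start + 1
        total += n
        sum_i += i * n
        sum_j += (j_start + j_end) / 2.0 * n    # mean column of the run, weighted by its length
    return sum_i / total, sum_j / total

def target_spot_center(spots: list, area_threshold: int) -> tuple:
    """D1: drop spots whose area is below the threshold; D2: keep the spot with the largest area."""
    third_set = [s for s in spots if spot_area(s) >= area_threshold]
    target = max(third_set, key=spot_area)      # raises ValueError if every spot was filtered out
    return spot_center(target)
```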
The method also includes:
when the absolute value of the difference between the flash spot position coordinates and preset coordinates is smaller than a difference threshold, taking the photo to be recognized as a target photo.
In this embodiment, the preset coordinates are the center coordinates of the authentication (anti-counterfeiting) area; when the absolute value of the difference between the flash spot position coordinates in the photo to be recognized and the center coordinates of the authentication area is smaller than the difference threshold, the photo to be recognized can be used to judge the authenticity of the ID card in the photo.
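The final acceptance check can be written in a few lines. The description compares "the absolute value of the difference" between the spot position coordinates and the preset coordinates against a single threshold without fixing a distance metric, so the per-axis comparison below is one reasonable reading of that test.

```python
def is_target_photo(spot_xy: tuple, preset_xy: tuple, diff_threshold: float) -> bool:
    """Accept the photo when the flash spot center lies within the threshold of the preset coordinates."""
    dx = abs(spot_xy[0] - preset_xy[0])
    dy = abs(spot_xy[1] - preset_xy[1])
    return dx < diff_threshold and dy < diff_threshold
```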
As can be seen from the above embodiment, in the flash spot position recognition method proposed by this application, illumination enhancement processing is first performed on the photo to be recognized to obtain the first photo, and grayscale and binarization processing are performed on the first photo to obtain the second photo; in this step, the illumination enhancement strengthens both strong and weak light spots, avoiding the possibility that weak spots cannot be recognized and making the subsequent flash spot position recognition results more accurate, while the grayscale and binarization processing reduces the dimensionality of the image, making it simpler and better able to highlight the spot outline. Next, a plurality of light spots are determined according to the pixel value of each pixel in the second photo to obtain the first light spot set, and smoothing processing is performed on the first light spot set to obtain the second light spot set; in this step, the smoothing removes burrs in the spot regions, re-determines the spot boundaries from the smoothed pixel values, and removes noise spots from the first light spot set, making the recognized spots more accurate. Finally, the area value of each spot in the second light spot set is calculated, the target spot is determined based on the area value, the center point coordinates of the target spot are calculated, and the center point coordinates are taken as the flash spot position coordinates; this step further eliminates noise spots. Therefore, this application improves the accuracy of flash spot position recognition.
As shown in FIG. 2, it is a schematic module diagram of a flash spot position recognition apparatus provided by an embodiment of this application.
The flash spot position recognition apparatus 100 described in this application may be installed in an electronic device. According to the implemented functions, the flash spot position recognition apparatus 100 may include a parsing module 110, a processing module 120, a determining module 130, and a calculation module 140. A module in this application may also be called a unit, and refers to a series of computer program segments that can be executed by the processor of an electronic device, can complete fixed functions, and are stored in the memory of the electronic device.
In this embodiment, the functions of the modules/units are as follows:
The parsing module 110 is configured to parse a flash spot position recognition request sent by a user via a client and obtain a photo to be recognized carried in the recognition request;
The processing module 120 is configured to perform illumination enhancement processing on the photo to be recognized to obtain a first photo, and perform grayscale and binarization processing on the first photo to obtain a second photo.
In this embodiment, the photo to be recognized is a photo taken with the flash turned on. Performing illumination enhancement processing on the photo to be recognized to obtain the first photo includes:
A1. Calculate the dark channel pixel value of each pixel in the photo to be recognized;
A color image includes three channels, R, G and B, and each color is a combination of red, green, and blue; for example, red is (255,0,0) and pink is (255,192,203). The dark channel refers to the fact that, for any local pixel in an image other than the sky area, at least one channel value is very low.
The calculation formula of the dark channel pixel value is:
Figure PCTCN2020125448-appb-000006
where H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, c_ij is the RGB three-channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, and M_ij is the dark channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized.
A2. Perform filtering processing on the dark channel pixel values to obtain the dark channel standard pixel value of each pixel in the photo to be recognized;
In this embodiment, the dark channel standard pixel value of each pixel is obtained by calculating the average value of the dark channel pixel values of its 8 neighboring pixels (that is, the dark channel standard pixel value of the center pixel of a 3×3 grid is the average value of the dark channel pixel values of the other 8 pixels in the grid). The purpose of this step is to reduce the influence of noise on the picture.
A3. Calculate the average value of the dark channel pixel values of all pixels in the photo to be recognized;
A4. Calculate the atmospheric transmittance and the global atmospheric light value of each pixel in the photo to be recognized based on the average value and the dark channel standard pixel values;
The calculation formula of the atmospheric transmittance is:
L_ij = min((min(q, 0.9)) * P_ij, M_ij)
where q is the average value of the dark channel pixel values of all pixels in the photo to be recognized, P_ij is the dark channel standard pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, M_ij is the dark channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, and L_ij is the atmospheric refractive index of the pixel in the i-th row and j-th column of the photo to be recognized.
The calculation formula of the global atmospheric light value is:
Figure PCTCN2020125448-appb-000007
where H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, c_ij is the RGB three-channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, P_ij is the dark channel standard pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, and A is the global atmospheric light value.
A5. Perform illumination enhancement processing on each pixel in the photo to be recognized based on the atmospheric transmittance and the global atmospheric light value to obtain the first photo.
The calculation formula corresponding to the illumination enhancement processing is:
Figure PCTCN2020125448-appb-000008
where H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, L_ij is the atmospheric refractive index of the pixel in the i-th row and j-th column of the photo to be recognized, A is the global atmospheric light value, and F_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized after illumination enhancement.
Compared with the prior art, in which only strong light spots are enhanced, this application enhances both strong and dark light spots through the atmospheric transmittance and the global atmospheric light value, which avoids the possibility that weak light spots cannot be recognized and makes the subsequent detection of the flash spot position more accurate.
In this embodiment, the calculation formula of the grayscale processing is:
Y_ij = 0.299 * R_ij + 0.587 * G_ij + 0.114 * B_ij
where R_ij is the R channel pixel value of the pixel in the i-th row and j-th column of the first photo, G_ij is the G channel pixel value of the pixel in the i-th row and j-th column of the first photo, B_ij is the B channel pixel value of the pixel in the i-th row and j-th column of the first photo, and Y_ij is the gray value of the pixel in the i-th row and j-th column of the first photo.
The gray value of a pixel has 256 possible levels, while an RGB color pixel has more than 16 million possible values; grayscale processing therefore reduces the dimensionality of the image and greatly reduces the amount of calculation.
The calculation formula of the binarization processing is:
Figure PCTCN2020125448-appb-000009
where Y_ij is the gray value of the pixel in the i-th row and j-th column of the first photo, and W_ij is the pixel value of the pixel in the i-th row and j-th column of the second photo (that is, the pixel value of the pixel in the i-th row and j-th column of the first photo after binarization processing).
Binarization converts the photo into the two colors black and white, which makes the image simpler, reduces the amount of data, and highlights the outline of the light spots.
The determining module 130 is configured to determine a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and perform smoothing processing on the first light spot set to obtain a second light spot set.
In this embodiment, determining a plurality of light spots according to the pixel value of each pixel in the second photo includes:
B1. Judge, row by row, whether the pixel value of each pixel in the second photo satisfies a first condition or a second condition; take pixels satisfying the first condition as spot boundary starting points and pixels satisfying the second condition as spot boundary end points;
The first condition is: when W_i(j-1) = 0 and W_ij = 1, W_ij is a spot boundary starting point (that is, when the pixel value of the pixel in the i-th row and (j-1)-th column of the second photo is 0 and the pixel value of the pixel in the i-th row and j-th column is 1, the pixel in the i-th row and j-th column is a spot boundary starting point).
The second condition is: when W_mn = 1 and W_m(n+1) = 0, W_mn is a spot boundary end point (that is, when the pixel value of the pixel in the m-th row and n-th column of the second photo is 1 and the pixel value of the pixel in the m-th row and (n+1)-th column is 0, the pixel in the m-th row and n-th column is a spot boundary end point).
B2. Determine a plurality of light spots according to the spot boundary starting points and end points to obtain the first light spot set.
For each row of pixels, each boundary starting point and the first boundary end point on its right side are taken as a boundary pair. When the last boundary starting point of a row has no corresponding boundary end point, the last pixel of the row is taken as the boundary end point corresponding to that last starting point; when the first boundary end point of a row has no corresponding boundary starting point, the first pixel of the row is taken as the boundary starting point corresponding to that first end point.
After the boundary pairs of each row are obtained, the boundary pairs of all rows are aggregated to obtain a plurality of light spots.
Performing smoothing processing on the first light spot set to obtain the second light spot set includes:
C1. Perform an AND operation between a preset convolution kernel and the pixel value of each pixel in the first light spot set to obtain a target pixel value of each pixel;
C2. Re-determine the boundary of each light spot in the first light spot set according to the target pixel values to obtain the second light spot set.
In this embodiment, the preset convolution kernel G is:
Figure PCTCN2020125448-appb-000010
A pixel is selected from the first light spot set, and its pixel value is ANDed with the convolution kernel G to obtain a pixel value matrix; the smallest value in the pixel value matrix is taken as the target pixel value of the selected pixel, and the spot boundaries are re-determined according to the above first and second conditions to obtain the second light spot set. Through the smoothing processing, this application eliminates burrs in the spot regions, re-determines the spot boundaries, and removes noise spots.
The calculation module 140 is configured to calculate the area value of each light spot in the second light spot set, determine a target light spot based on the area value, calculate the center point coordinates of the target light spot, and take the center point coordinates as the flash spot position coordinates.
Determining the target light spot based on the area value includes:
D1. Delete the light spots in the second light spot set whose area value is smaller than a preset threshold to obtain a third light spot set;
D2. Take the light spot with the largest area value in the third light spot set as the target light spot.
The method also includes:
when the absolute value of the difference between the flash spot position coordinates and preset coordinates is smaller than a difference threshold, taking the photo to be recognized as a target photo.
In this embodiment, the preset coordinates are the center coordinates of the authentication (anti-counterfeiting) area; when the absolute value of the difference between the flash spot position coordinates in the photo to be recognized and the center coordinates of the authentication area is smaller than the difference threshold, the photo to be recognized can be used to judge the authenticity of the ID card in the photo.
As shown in FIG. 3, it is a schematic structural diagram of an electronic device implementing a flash spot position recognition method provided by an embodiment of this application.
The electronic device 1 is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions. The electronic device 1 may be a computer, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing, where cloud computing is a type of distributed computing: a super virtual computer composed of a group of loosely coupled computers.
In this embodiment, the electronic device 1 includes, but is not limited to, a memory 11, a processor 12, and a network interface 13 that are communicatively connected to each other through a system bus; the memory 11 stores a flash spot position recognition program 10, and the flash spot position recognition program 10 can be executed by the processor 12. FIG. 3 only shows the electronic device 1 with the components 11-13 and the flash spot position recognition program 10; those skilled in the art can understand that the structure shown in FIG. 3 does not constitute a limitation on the electronic device 1, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
The memory 11 includes an internal memory and at least one type of readable storage medium. The internal memory provides a cache for the operation of the electronic device 1; the readable storage medium may be a non-volatile storage medium such as a flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, or optical disk. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, such as the hard disk of the electronic device 1; in other embodiments, the non-volatile storage medium may also be an external storage device of the electronic device 1, such as a plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card, or flash memory card (Flash Card) equipped on the electronic device 1. In this embodiment, the readable storage medium of the memory 11 is generally used to store the operating system and various application software installed in the electronic device 1, for example the code of the flash spot position recognition program 10 in an embodiment of this application. In addition, the memory 11 can also be used to temporarily store various types of data that have been output or are to be output.
The processor 12 may, in some embodiments, be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 12 is generally used to control the overall operation of the electronic device 1, for example performing control and processing related to data interaction or communication with other devices. In this embodiment, the processor 12 is used to run the program code or process the data stored in the memory 11, for example to run the flash spot position recognition program 10.
The network interface 13 may include a wireless network interface or a wired network interface, and is used to establish a communication connection between the electronic device 1 and a client (not shown in the figure).
Optionally, the electronic device 1 may further include a user interface, which may include a display and an input unit such as a keyboard; the optional user interface may also include a standard wired interface and a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display may also be appropriately called a display screen or display unit, and is used to display the information processed in the electronic device 1 and to display a visualized user interface.
It should be understood that the embodiments are for illustration only, and the scope of the patent application is not limited by this structure.
The flash spot position recognition program 10 stored in the memory 11 of the electronic device 1 is a combination of multiple instructions which, when run in the processor 12, can implement:
parsing a flash spot position recognition request sent by a user via a client, and obtaining a photo to be recognized carried in the recognition request;
performing illumination enhancement processing on the photo to be recognized to obtain a first photo, and performing grayscale and binarization processing on the first photo to obtain a second photo;
determining a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and performing smoothing processing on the first light spot set to obtain a second light spot set;
calculating the area value of each light spot in the second light spot set, determining a target light spot based on the area value, calculating the center point coordinates of the target light spot, and taking the center point coordinates as the flash spot position coordinates.
Specifically, for the specific implementation of the above flash spot position recognition program 10 by the processor 12, reference may be made to the description of the relevant steps in the embodiment corresponding to FIG. 1, which will not be repeated here. It should be emphasized that, to further ensure the privacy and security of the photo to be recognized, the photo to be recognized may also be stored in a node of a blockchain.
Further, if the integrated modules/units of the electronic device 1 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. The computer-readable medium may be non-volatile or volatile. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM, Read-Only Memory).
The computer-readable storage medium stores a flash spot position recognition program 10, and the flash spot position recognition program 10 can be executed by one or more processors; the specific implementation of the computer-readable storage medium of this application is substantially the same as the embodiments of the above flash spot position recognition method and will not be repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed device, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division into modules is only a division by logical function, and there may be other ways of division in actual implementation.
The modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional modules in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It is obvious to those skilled in the art that this application is not limited to the details of the above exemplary embodiments, and that this application can be realized in other specific forms without departing from the spirit or essential characteristics of this application.
Therefore, from whatever point of view, the embodiments should be regarded as exemplary and non-limiting. The scope of this application is defined by the appended claims rather than by the above description, and it is therefore intended that all changes falling within the meaning and scope of the equivalent elements of the claims be included in this application. Any reference signs in the claims shall not be construed as limiting the claims concerned.
The blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association with one another using cryptographic methods, where each data block contains information on a batch of network transactions, which is used to verify the validity of the information (anti-counterfeiting) and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and so on.
In addition, it is obvious that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices stated in the system claims may also be implemented by one unit or device through software or hardware. Words such as "second" are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of this application and not to limit them. Although this application has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of this application may be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of this application.

Claims (20)

  1. A flash spot position recognition method, wherein the method comprises:
    parsing a flash spot position recognition request sent by a user via a client, and obtaining a photo to be recognized carried in the recognition request;
    performing illumination enhancement processing on the photo to be recognized to obtain a first photo, and performing grayscale and binarization processing on the first photo to obtain a second photo;
    determining a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and performing smoothing processing on the first light spot set to obtain a second light spot set;
    calculating the area value of each light spot in the second light spot set, determining a target light spot based on the area value, calculating center point coordinates of the target light spot, and taking the center point coordinates as flash spot position coordinates.
  2. The flash spot position recognition method according to claim 1, wherein the performing illumination enhancement processing on the photo to be recognized to obtain the first photo comprises:
    calculating a dark channel pixel value of each pixel in the photo to be recognized;
    performing filtering processing on the dark channel pixel values to obtain a dark channel standard pixel value of each pixel in the photo to be recognized;
    calculating an average value of the dark channel pixel values of all pixels in the photo to be recognized;
    calculating an atmospheric transmittance and a global atmospheric light value of each pixel in the photo to be recognized based on the average value and the dark channel standard pixel values;
    performing illumination enhancement processing on each pixel in the photo to be recognized based on the atmospheric transmittance and the global atmospheric light value to obtain the first photo.
  3. The flash spot position recognition method according to claim 2, wherein the calculation formula of the atmospheric transmittance is:
    L_ij = min((min(q, 0.9)) * P_ij, M_ij)
    where q is the average value of the dark channel pixel values of all pixels in the photo to be recognized, P_ij is the dark channel standard pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, M_ij is the dark channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, and L_ij is the atmospheric refractive index of the pixel in the i-th row and j-th column of the photo to be recognized;
    the calculation formula of the global atmospheric light value is:
    Figure PCTCN2020125448-appb-100001
    where H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, c_ij is the RGB three-channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, P_ij is the dark channel standard pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, and A is the global atmospheric light value;
    the calculation formula corresponding to the illumination enhancement processing is:
    Figure PCTCN2020125448-appb-100002
    where H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, L_ij is the atmospheric refractive index of the pixel in the i-th row and j-th column of the photo to be recognized, A is the global atmospheric light value, and F_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized after illumination enhancement.
  4. The flash spot position recognition method according to claim 1, wherein the determining a plurality of light spots according to the pixel value of each pixel in the second photo comprises:
    judging, row by row, whether the pixel value of each pixel in the second photo satisfies a first condition or a second condition, taking pixels satisfying the first condition as spot boundary starting points, and taking pixels satisfying the second condition as spot boundary end points;
    determining a plurality of light spots according to the spot boundary starting points and the spot boundary end points.
  5. The flash spot position recognition method according to claim 4, wherein the first condition is: when the pixel value of the pixel in the i-th row and (j-1)-th column of the second photo is 0 and the pixel value of the pixel in the i-th row and j-th column is 1, the pixel in the i-th row and j-th column is a spot boundary starting point;
    the second condition is: when the pixel value of the pixel in the m-th row and n-th column of the second photo is 1 and the pixel value of the pixel in the m-th row and (n+1)-th column is 0, the pixel in the m-th row and n-th column is a spot boundary end point.
  6. The flash spot position recognition method according to claim 1, wherein the performing smoothing processing on the first light spot set to obtain the second light spot set comprises:
    performing an AND operation between a preset convolution kernel and the pixel value of each pixel in the first light spot set to obtain a target pixel value of each pixel;
    re-determining the boundary of each light spot in the first light spot set according to the target pixel values to obtain the second light spot set.
  7. The flash spot position recognition method according to claim 1, wherein the determining a target light spot based on the area value comprises:
    deleting light spots in the second light spot set whose area value is smaller than a preset threshold to obtain a third light spot set;
    taking the light spot with the largest area value in the third light spot set as the target light spot.
  8. A flash spot position recognition apparatus, wherein the apparatus comprises:
    a parsing module, configured to parse a flash spot position recognition request sent by a user via a client and obtain a photo to be recognized carried in the recognition request;
    a processing module, configured to perform illumination enhancement processing on the photo to be recognized to obtain a first photo, and perform grayscale and binarization processing on the first photo to obtain a second photo;
    a determining module, configured to determine a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and perform smoothing processing on the first light spot set to obtain a second light spot set;
    a calculation module, configured to calculate the area value of each light spot in the second light spot set, determine a target light spot based on the area value, calculate center point coordinates of the target light spot, and take the center point coordinates as flash spot position coordinates.
  9. An electronic device, wherein the electronic device comprises:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores a flash spot position recognition program executable by the at least one processor, and the flash spot position recognition program is executed by the at least one processor, so that the at least one processor can execute the following steps:
    parsing a flash spot position recognition request sent by a user via a client, and obtaining a photo to be recognized carried in the recognition request;
    performing illumination enhancement processing on the photo to be recognized to obtain a first photo, and performing grayscale and binarization processing on the first photo to obtain a second photo;
    determining a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and performing smoothing processing on the first light spot set to obtain a second light spot set;
    calculating the area value of each light spot in the second light spot set, determining a target light spot based on the area value, calculating center point coordinates of the target light spot, and taking the center point coordinates as flash spot position coordinates.
  10. The electronic device according to claim 9, wherein the performing illumination enhancement processing on the photo to be recognized to obtain the first photo comprises:
    calculating a dark channel pixel value of each pixel in the photo to be recognized;
    performing filtering processing on the dark channel pixel values to obtain a dark channel standard pixel value of each pixel in the photo to be recognized;
    calculating an average value of the dark channel pixel values of all pixels in the photo to be recognized;
    calculating an atmospheric transmittance and a global atmospheric light value of each pixel in the photo to be recognized based on the average value and the dark channel standard pixel values;
    performing illumination enhancement processing on each pixel in the photo to be recognized based on the atmospheric transmittance and the global atmospheric light value to obtain the first photo.
  11. The electronic device according to claim 10, wherein the calculation formula of the atmospheric transmittance is:
    L_ij = min((min(q, 0.9)) * P_ij, M_ij)
    where q is the average value of the dark channel pixel values of all pixels in the photo to be recognized, P_ij is the dark channel standard pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, M_ij is the dark channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, and L_ij is the atmospheric refractive index of the pixel in the i-th row and j-th column of the photo to be recognized;
    the calculation formula of the global atmospheric light value is:
    Figure PCTCN2020125448-appb-100003
    where H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, c_ij is the RGB three-channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, P_ij is the dark channel standard pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, and A is the global atmospheric light value;
    the calculation formula corresponding to the illumination enhancement processing is:
    Figure PCTCN2020125448-appb-100004
    where H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, L_ij is the atmospheric refractive index of the pixel in the i-th row and j-th column of the photo to be recognized, A is the global atmospheric light value, and F_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized after illumination enhancement.
  12. The electronic device according to claim 9, wherein the determining a plurality of light spots according to the pixel value of each pixel in the second photo comprises:
    judging, row by row, whether the pixel value of each pixel in the second photo satisfies a first condition or a second condition, taking pixels satisfying the first condition as spot boundary starting points, and taking pixels satisfying the second condition as spot boundary end points;
    determining a plurality of light spots according to the spot boundary starting points and the spot boundary end points.
  13. The electronic device according to claim 12, wherein the first condition is: when the pixel value of the pixel in the i-th row and (j-1)-th column of the second photo is 0 and the pixel value of the pixel in the i-th row and j-th column is 1, the pixel in the i-th row and j-th column is a spot boundary starting point;
    the second condition is: when the pixel value of the pixel in the m-th row and n-th column of the second photo is 1 and the pixel value of the pixel in the m-th row and (n+1)-th column is 0, the pixel in the m-th row and n-th column is a spot boundary end point.
  14. The electronic device according to claim 9, wherein the performing smoothing processing on the first light spot set to obtain the second light spot set comprises:
    performing an AND operation between a preset convolution kernel and the pixel value of each pixel in the first light spot set to obtain a target pixel value of each pixel;
    re-determining the boundary of each light spot in the first light spot set according to the target pixel values to obtain the second light spot set.
  15. The electronic device according to claim 9, wherein the determining a target light spot based on the area value comprises:
    deleting light spots in the second light spot set whose area value is smaller than a preset threshold to obtain a third light spot set;
    taking the light spot with the largest area value in the third light spot set as the target light spot.
  16. A computer-readable storage medium, wherein a flash spot position recognition program is stored on the computer-readable storage medium, and the flash spot position recognition program can be executed by one or more processors to implement the following steps:
    parsing a flash spot position recognition request sent by a user via a client, and obtaining a photo to be recognized carried in the recognition request;
    performing illumination enhancement processing on the photo to be recognized to obtain a first photo, and performing grayscale and binarization processing on the first photo to obtain a second photo;
    determining a plurality of light spots according to the pixel value of each pixel in the second photo to obtain a first light spot set, and performing smoothing processing on the first light spot set to obtain a second light spot set;
    calculating the area value of each light spot in the second light spot set, determining a target light spot based on the area value, calculating center point coordinates of the target light spot, and taking the center point coordinates as flash spot position coordinates.
  17. The computer-readable storage medium according to claim 16, wherein the performing illumination enhancement processing on the photo to be recognized to obtain the first photo comprises:
    calculating a dark channel pixel value of each pixel in the photo to be recognized;
    performing filtering processing on the dark channel pixel values to obtain a dark channel standard pixel value of each pixel in the photo to be recognized;
    calculating an average value of the dark channel pixel values of all pixels in the photo to be recognized;
    calculating an atmospheric transmittance and a global atmospheric light value of each pixel in the photo to be recognized based on the average value and the dark channel standard pixel values;
    performing illumination enhancement processing on each pixel in the photo to be recognized based on the atmospheric transmittance and the global atmospheric light value to obtain the first photo.
  18. The computer-readable storage medium according to claim 17, wherein the calculation formula of the atmospheric transmittance is:
    L_ij = min((min(q, 0.9)) * P_ij, M_ij)
    where q is the average value of the dark channel pixel values of all pixels in the photo to be recognized, P_ij is the dark channel standard pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, M_ij is the dark channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, and L_ij is the atmospheric refractive index of the pixel in the i-th row and j-th column of the photo to be recognized;
    the calculation formula of the global atmospheric light value is:
    Figure PCTCN2020125448-appb-100005
    where H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, c_ij is the RGB three-channel pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, P_ij is the dark channel standard pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, and A is the global atmospheric light value;
    the calculation formula corresponding to the illumination enhancement processing is:
    Figure PCTCN2020125448-appb-100006
    where H_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized, L_ij is the atmospheric refractive index of the pixel in the i-th row and j-th column of the photo to be recognized, A is the global atmospheric light value, and F_ij is the pixel value of the pixel in the i-th row and j-th column of the photo to be recognized after illumination enhancement.
  19. The computer-readable storage medium according to claim 16, wherein the determining a plurality of light spots according to the pixel value of each pixel in the second photo comprises:
    judging, row by row, whether the pixel value of each pixel in the second photo satisfies a first condition or a second condition, taking pixels satisfying the first condition as spot boundary starting points, and taking pixels satisfying the second condition as spot boundary end points;
    determining a plurality of light spots according to the spot boundary starting points and the spot boundary end points.
  20. The computer-readable storage medium according to claim 19, wherein the first condition is: when the pixel value of the pixel in the i-th row and (j-1)-th column of the second photo is 0 and the pixel value of the pixel in the i-th row and j-th column is 1, the pixel in the i-th row and j-th column is a spot boundary starting point;
    the second condition is: when the pixel value of the pixel in the m-th row and n-th column of the second photo is 1 and the pixel value of the pixel in the m-th row and (n+1)-th column is 0, the pixel in the m-th row and n-th column is a spot boundary end point.
PCT/CN2020/125448 2020-09-23 2020-10-30 Flash spot position recognition method and apparatus, electronic device, and storage medium WO2021189853A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011013971.1A CN112102402B (zh) 2020-09-23 2020-09-23 Flash spot position recognition method and apparatus, electronic device, and storage medium
CN202011013971.1 2020-09-23

Publications (1)

Publication Number Publication Date
WO2021189853A1 true WO2021189853A1 (zh) 2021-09-30

Family

ID=73755249

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/125448 WO2021189853A1 (zh) 2020-09-23 2020-10-30 Flash spot position recognition method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN112102402B (zh)
WO (1) WO2021189853A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114283170A (zh) * 2021-12-24 2022-04-05 凌云光技术股份有限公司 A light spot extraction method
CN115205246A (zh) * 2022-07-14 2022-10-18 中国南方电网有限责任公司超高压输电公司广州局 Method and device for extracting features from ultraviolet images of converter valve corona discharge
CN117315011A (zh) * 2023-11-30 2023-12-29 吉林珩辉光电科技有限公司 Method and device for locating the center of a light spot in atmospheric turbulence

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112686842B (zh) * 2020-12-21 2021-08-24 苏州炫感信息科技有限公司 A light spot detection method and apparatus, electronic device, and readable storage medium
CN115393440B (zh) * 2022-10-27 2023-01-24 长春理工大学 Method for locating the beacon light spot center of an optical transceiver, storage medium, and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180322656A1 (en) * 2015-11-30 2018-11-08 Delphi Technologies, Llc Method for identification of candidate points as possible characteristic points of a calibration pattern within an image of the calibration pattern
CN109118441A (zh) * 2018-07-17 2019-01-01 厦门理工学院 A low-illumination image and video enhancement method, computer device, and storage medium
CN109859130A (zh) * 2019-01-29 2019-06-07 杭州智诠科技有限公司 A fundus photo sharpening method, system, apparatus, and storage medium
CN110163851A (zh) * 2019-05-06 2019-08-23 歌尔股份有限公司 Method and apparatus for recognizing bright spots in an image, and computer storage medium
CN110992264A (zh) * 2019-11-28 2020-04-10 北京金山云网络技术有限公司 An image processing method, processing apparatus, electronic device, and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583287B (zh) * 2017-09-29 2024-04-12 浙江莲荷科技有限公司 Physical object recognition method and verification method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180322656A1 (en) * 2015-11-30 2018-11-08 Delphi Technologies, Llc Method for identification of candidate points as possible characteristic points of a calibration pattern within an image of the calibration pattern
CN109118441A (zh) * 2018-07-17 2019-01-01 厦门理工学院 A low-illumination image and video enhancement method, computer device, and storage medium
CN109859130A (zh) * 2019-01-29 2019-06-07 杭州智诠科技有限公司 A fundus photo sharpening method, system, apparatus, and storage medium
CN110163851A (zh) * 2019-05-06 2019-08-23 歌尔股份有限公司 Method and apparatus for recognizing bright spots in an image, and computer storage medium
CN110992264A (zh) * 2019-11-28 2020-04-10 北京金山云网络技术有限公司 An image processing method, processing apparatus, electronic device, and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114283170A (zh) * 2021-12-24 2022-04-05 凌云光技术股份有限公司 A light spot extraction method
CN114283170B (zh) * 2021-12-24 2024-05-03 北京元客视界科技有限公司 A light spot extraction method
CN115205246A (zh) * 2022-07-14 2022-10-18 中国南方电网有限责任公司超高压输电公司广州局 Method and device for extracting features from ultraviolet images of converter valve corona discharge
CN115205246B (zh) * 2022-07-14 2024-04-09 中国南方电网有限责任公司超高压输电公司广州局 Method and device for extracting features from ultraviolet images of converter valve corona discharge
CN117315011A (zh) * 2023-11-30 2023-12-29 吉林珩辉光电科技有限公司 Method and device for locating the center of a light spot in atmospheric turbulence
CN117315011B (zh) * 2023-11-30 2024-04-02 吉林珩辉光电科技有限公司 Method and device for locating the center of a light spot in atmospheric turbulence

Also Published As

Publication number Publication date
CN112102402B (zh) 2023-08-22
CN112102402A (zh) 2020-12-18

Similar Documents

Publication Publication Date Title
WO2021189853A1 (zh) Flash spot position recognition method and apparatus, electronic device, and storage medium
WO2019174130A1 (zh) Bill recognition method, server, and computer-readable storage medium
WO2021057848A1 (zh) Network training method, image processing method, network, terminal device, and medium
WO2021217851A1 (zh) Automatic abnormal cell labeling method and apparatus, electronic device, and storage medium
Marciniak et al. Influence of low resolution of images on reliability of face detection and recognition
WO2018086543A1 (zh) Living body discrimination method, identity authentication method, terminal, server, and storage medium
WO2021051554A1 (zh) Certificate authenticity verification method and system, computer device, and readable storage medium
WO2019085064A1 (zh) Medical claim rejection method and apparatus, terminal device, and storage medium
WO2021017272A1 (zh) Pathological image annotation method and apparatus, computer device, and storage medium
WO2018090641A1 (zh) Method, apparatus, and device for recognizing insurance policy numbers, and computer-readable storage medium
US10339373B1 (en) Optical character recognition utilizing hashed templates
WO2020248848A1 (zh) Intelligent abnormal cell judgment method and apparatus, and computer-readable storage medium
US10521580B1 (en) Open data biometric identity validation
WO2021189856A1 (zh) Certificate verification method and apparatus, electronic device, and medium
WO2018233393A1 (zh) Insurance application verification method and apparatus, computer device, and storage medium
US20210264583A1 (en) Detecting identification tampering using ultra-violet imaging
KR20220063127A (ko) Face liveness detection method, apparatus, electronic device, storage medium, and computer program
CN111553251A (zh) Method, apparatus, device, and storage medium for detecting missing corners of a certificate
CN112232336A (zh) A certificate recognition method, apparatus, device, and storage medium
US20240193987A1 (en) Face liveness detection method, terminal device and non-transitory computer-readable storage medium
CN110321881A (zh) System and method for recognizing images containing identity documents
CN112581344A (zh) An image processing method and apparatus, computer device, and storage medium
CN113313114B (zh) Certificate information acquisition method, apparatus, device, and storage medium
CN112541899B (zh) Certificate defect detection method and apparatus, electronic device, and computer storage medium
WO2021151274A1 (zh) Image document processing method and apparatus, electronic device, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20927382

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20927382

Country of ref document: EP

Kind code of ref document: A1