WO2019128362A1 - Method, apparatus and system for human face recognition, and medium

Method, apparatus and system for human face recognition, and medium

Info

Publication number
WO2019128362A1
WO2019128362A1 (PCT application PCT/CN2018/108994; CN2018108994W)
Authority
WO
WIPO (PCT)
Prior art keywords
grayscale
face
light source
image frame
face image
Prior art date
Application number
PCT/CN2018/108994
Other languages
English (en)
Chinese (zh)
Inventor
冯玉娜
Original Assignee
北京京东尚科信息技术有限公司
北京京东世纪贸易有限公司
Priority date
Filing date
Publication date
Application filed by 北京京东尚科信息技术有限公司 and 北京京东世纪贸易有限公司
Publication of WO2019128362A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Definitions

  • the present disclosure relates to the field of Internet technologies, and in particular, to a face recognition method, apparatus, system, and medium.
  • Face recognition has always been a hot research topic in the field of biometrics. Visible-light face recognition is easily affected by external lighting conditions, which may make the recognition inaccurate. Near-infrared technology, although unaffected by ambient light, is limited in the devices to which it can be applied, and its recognition approach runs contrary to people's daily habits.
  • there are some recognition methods that combine visible-light recognition and near-infrared recognition, mainly including visible/near-infrared face recognition methods based on face synthesis, on a unified subspace, and on invariant features.
  • a recognition method that combines visible-light recognition and near-infrared recognition generally builds a face library registered under one light source condition. During identity authentication, the light source environment at the time the face image is collected may be inconsistent with that at registration, so the collected face image must be converted before it can be compared; in essence, this is face recognition under light sources of different qualities.
  • the present disclosure provides a face recognition method, apparatus, system, and medium that can achieve face image comparison under a homogeneous light source.
  • the method includes acquiring a face image and performing face recognition according to a light source environment in which the face image is collected.
  • Performing face recognition according to the light source environment in which the face image is collected specifically includes: when the light source environment in which the face image is collected is visible light, comparing the face image with the face map registered under visible light conditions; or, when the light source environment in which the face image is collected is near-infrared light, comparing the face image with the face map registered under near-infrared light conditions.
  • the method further includes determining whether the light source environment in which the face image is acquired is visible light or near-infrared light.
  • acquiring a face image specifically includes acquiring a complete image frame including a face and the background environment, acquiring a face region from the complete image frame, and extracting the face image from the face region. Determining whether the light source environment in which the face image is collected is visible light or near-infrared light includes graying out the complete image frame to obtain a grayscale image frame, and judging, according to the distribution of grayscale values in the grayscale image frame, whether the light source environment in which the face image is collected is visible light or near-infrared light.
  • determining, according to the distribution of grayscale values in the grayscale image frame, whether the light source environment in which the face image is collected is visible light or near-infrared light includes the following operations: calculating a first pixel number of grayscale values in the grayscale image frame that fall within an initial closed interval, where one endpoint of the initial closed interval is the grayscale minimum of the grayscale image frame and the other endpoint is that minimum plus a preset value; calculating a second pixel number of grayscale values in the grayscale image frame that fall within a terminating closed interval, where one endpoint of the terminating closed interval is the grayscale maximum of the grayscale image frame minus the preset value and the other endpoint is the grayscale maximum; calculating a grayscale mean of the region of the grayscale image frame other than the face region; and, when the first pixel number is greater than the second pixel number and the grayscale mean is less than a preset grayscale mean, determining that the light source environment in which the face image is collected is near-infrared light, otherwise determining that it is visible light.
  • the preset value is, for example, 15.
  • the preset grayscale mean is, for example, 140.
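  • A non-authoritative sketch of the judgment described above is given below in Python. It assumes an 8-bit grayscale frame and a face bounding box; the function name classify_light_source, the box format, and the use of NumPy are illustrative assumptions, while the constants 15 and 140 follow the example preset values stated in this disclosure.

```python
import numpy as np

PRESET_VALUE = 15        # example preset value from the disclosure
PRESET_GRAY_MEAN = 140   # example preset grayscale mean from the disclosure

def classify_light_source(gray_frame: np.ndarray, face_box: tuple) -> str:
    """Classify the capture light source of an 8-bit grayscale frame.

    gray_frame: 2-D uint8 array (the grayed-out complete image frame).
    face_box:   (x, y, w, h) of the detected face region.
    Returns "near-infrared" or "visible" according to the described rule.
    """
    gray_min = int(gray_frame.min())  # Gray begin
    gray_max = int(gray_frame.max())  # Gray end

    # First pixel number: grayscale values in [Gray begin, Gray begin + preset value].
    first_count = int(np.count_nonzero(gray_frame <= gray_min + PRESET_VALUE))

    # Second pixel number: grayscale values in [Gray end - preset value, Gray end].
    second_count = int(np.count_nonzero(gray_frame >= gray_max - PRESET_VALUE))

    # Grayscale mean of the region outside the face region (Gray Ave).
    x, y, w, h = face_box
    mask = np.ones(gray_frame.shape, dtype=bool)
    mask[y:y + h, x:x + w] = False
    gray_ave = float(gray_frame[mask].mean())

    if first_count > second_count and gray_ave < PRESET_GRAY_MEAN:
        return "near-infrared"
    return "visible"
```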
  • the method further includes separately registering the face map under visible light conditions and near-infrared light conditions.
  • the device includes an acquisition module and an identification module.
  • the acquisition module is used to acquire a face image.
  • the identification module is configured to perform face recognition according to a light source environment in which the face image is collected.
  • the identification module is specifically configured to compare the face image with the face map registered under visible light conditions when the light source environment in which the face image is collected is visible light, or to compare the face image with the face map registered under near-infrared light conditions when the light source environment in which the face image is collected is near-infrared light.
  • the apparatus further includes a light source determination module.
  • the light source determining module is configured to determine whether the light source environment in which the face image is collected is visible light or near-infrared light.
  • the acquisition module includes a complete image frame acquisition sub-module, a face region acquisition sub-module, and a face image extraction sub-module.
  • the complete image frame acquisition sub-module is used to acquire a complete image frame including a face and a background environment.
  • the face region acquisition sub-module is configured to acquire a face region from the complete image frame.
  • a face image extraction sub-module for extracting the face image from the face region.
  • the light source determination module includes a graying sub-module and a light source determination sub-module. The graying sub-module is used to gray out the complete image frame to obtain a grayscale image frame.
  • the light source determining sub-module is configured to determine, according to the distribution of gray scales in the grayscale image frame, whether the light source environment in which the face image is collected is visible light or near-infrared light.
  • determining, according to the distribution of grayscale values in the grayscale image frame, whether the light source environment in which the face image is collected is visible light or near-infrared light includes: calculating a first pixel number of grayscale values in the grayscale image frame that fall within an initial closed interval, where one endpoint of the initial closed interval is the grayscale minimum of the grayscale image frame and the other endpoint is that minimum plus a preset value; calculating a second pixel number of grayscale values in the grayscale image frame that fall within a terminating closed interval, where one endpoint of the terminating closed interval is the grayscale maximum of the grayscale image frame minus the preset value and the other endpoint is the grayscale maximum; calculating a grayscale mean of the region of the grayscale image frame other than the face region; and, when the first pixel number is greater than the second pixel number and the grayscale mean is less than a preset grayscale mean, determining that the light source environment in which the face image is collected is near-infrared light, otherwise determining that it is visible light.
  • the preset value is, for example, 15.
  • the preset grayscale mean is, for example, 140.
  • the apparatus further includes a registration module.
  • the registration module is configured to separately register a face map under visible light conditions and near-infrared light conditions.
  • Another aspect of the present disclosure provides a system for face recognition including one or more processors, and a memory.
  • the memory is used to store one or more programs. Wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the methods described above.
  • Another aspect of the present disclosure provides a computer readable medium having stored thereon executable instructions that, when executed by a processor, cause a processor to implement the methods described above.
  • Another aspect of the disclosure provides a computer program comprising computer executable instructions that, when executed, are used to implement a method as described above.
  • in this way, the recognition error caused by excessive differences between face images captured under light sources of different qualities can be at least partially avoided, so face recognition under a homogenous light source can be realized, thereby improving the accuracy of face recognition.
  • FIG. 1 schematically illustrates an exemplary architecture of a face recognition method and apparatus according to an embodiment of the present disclosure
  • FIG. 2 schematically shows a flowchart of a face recognition method according to an embodiment of the present disclosure
  • FIG. 3 schematically shows a flowchart of a face recognition method according to another embodiment of the present disclosure
  • FIG. 4 schematically shows a flowchart of a face recognition method according to still another embodiment of the present disclosure
  • FIGS. 5A and 5B schematically illustrate gray scale distributions in grayscale image frames corresponding to images acquired in visible light and near-infrared light environments, respectively
  • FIG. 6 is a flow chart schematically illustrating determining a light source environment when a face image is acquired according to a gray scale distribution according to another embodiment of the present disclosure
  • FIG. 7 is a schematic diagram showing an application scenario of a face recognition method according to various embodiments of the present disclosure.
  • FIG. 8 is a block diagram schematically showing a face recognition device according to an embodiment of the present disclosure.
  • FIG. 9 schematically illustrates a block diagram of a computer system suitable for implementing embodiments of the present disclosure.
  • Embodiments of the present disclosure provide a face recognition method, apparatus, system, and medium.
  • the method includes acquiring a face image and performing face recognition according to a light source environment in which the face image is collected.
  • Performing face recognition according to the light source environment in which the face image is collected specifically includes: when the light source environment in which the face image is collected is visible light, comparing the face image with the face map registered under visible light conditions; or, when the light source environment in which the face image is collected is near-infrared light, comparing the face image with the face map registered under near-infrared light conditions.
  • FIG. 1 schematically illustrates an exemplary architecture 100 of a face recognition method and apparatus in accordance with an embodiment of the present disclosure. It should be noted that FIG. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied, intended to help those skilled in the art understand the technical content of the present disclosure; it does not imply that embodiments of the present disclosure cannot be used in other devices, systems, environments, or scenarios.
  • the system architecture 100 may include a camera 101, a network 102, and a server 103.
  • the network 102 is used to provide a medium for communication links between the camera 101 and the server 103.
  • Network 102 can include a variety of connection types, such as wired, wireless communication links, fiber optic cables, and the like.
  • the camera 101 can collect a face image and transmit the obtained face image to the server 103 via the network 102.
  • the server 103 may be a server that provides various services, for example, analyzing and processing received data such as face images.
  • the terminal device 104 may also be included in the system architecture 100.
  • a face image collected by the camera 101 or other cameras can be stored in the terminal device 104.
  • the terminal device 104 can transmit the face image stored therein to the server 103 through the network 102 according to the user's operation, to request the server 103 to perform identity authentication.
  • the server 103 can process the user request and feed the processing result back to the terminal device 104.
  • the terminal device 104 can be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop portable computers, desktop computers, and the like.
  • the terminal device 104 and the camera 101 can be combined or the camera 101 can be an integral part of the terminal device 104.
  • the face recognition method provided by the embodiment of the present disclosure may be generally performed by the server 103.
  • the face recognition device provided by the embodiment of the present disclosure may be generally disposed in the server 103.
  • the face recognition method provided by the embodiments of the present disclosure may also be performed by a server or server cluster different from the server 103 and capable of communicating with the server 103.
  • the face recognition device provided by the embodiment of the present disclosure may also be disposed in a server or server cluster different from the server 103 and capable of communicating with the server 103.
  • the terminal devices, networks, and servers in FIG. 1 are merely illustrative in number. Depending on implementation needs, there can be any number of terminal devices, networks, and servers.
  • FIG. 2 schematically illustrates a flow chart of a face recognition method in accordance with an embodiment of the present disclosure.
  • the face recognition method includes an operation S210 and an operation S220.
  • In operation S210, a face image is acquired. For example, a face image captured by the camera in real time can be acquired.
  • Alternatively, a face image previously captured by a camera can be obtained from a network or another image storage location.
  • In operation S220, face recognition is performed according to the light source environment in which the face image is collected.
  • the operation S220 of performing face recognition according to the light source environment in which the face image is collected includes: when the light source environment in which the face image is collected is visible light, comparing the face image with the face map registered under visible light conditions; or, when the light source environment in which the face image is collected is near-infrared light, comparing the face image with the face map registered under near-infrared light conditions.
  • in this way, face recognition under a homogenous light source can be performed according to the light source environment in which the face image is collected: the face image to be recognized is compared with the face maps in the face database registered under the same kind of light source, so the recognition error caused by excessive differences between face images under light sources of different qualities is avoided to some extent, and the accuracy of face recognition is improved.
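  • As a rough illustration of this comparison step, the sketch below dispatches the probe face image to whichever registered gallery matches the detected light source. The gallery layout, the extract_features hook, the cosine-similarity matcher, and the threshold are illustrative assumptions; the disclosure does not prescribe a particular feature representation or matching rule.

```python
import numpy as np

def match_against_gallery(probe_feat, gallery, threshold=0.6):
    """Return the best-matching registered identity, or None if below threshold."""
    best_id, best_score = None, -1.0
    for identity, feat in gallery.items():
        score = float(np.dot(probe_feat, feat) /
                      (np.linalg.norm(probe_feat) * np.linalg.norm(feat) + 1e-12))
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None

def recognize(face_image, light_source, visible_gallery, nir_gallery, extract_features):
    """Compare the probe only against the gallery registered under the same light source."""
    probe_feat = extract_features(face_image)
    gallery = visible_gallery if light_source == "visible" else nir_gallery
    return match_against_gallery(probe_feat, gallery)
```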
  • FIG. 3 schematically shows a flowchart of a face recognition method according to another embodiment of the present disclosure.
  • the face recognition method further includes an operation S310 in addition to the operations S210 and S220.
  • the operation S310 may be before the operation S220, or the operation S310 may be performed in parallel with the operation S220.
  • In operation S310, it is determined whether the light source environment in which the face image is collected is visible light or near-infrared light.
  • in this way, the light source environment in which the face image is collected can be determined, so that the face image to be recognized can be compared with the face maps registered under the same kind of light source in the face database, achieving face recognition under a homogenous light source.
  • FIG. 4 schematically illustrates a flow chart of a face recognition method according to still another embodiment of the present disclosure.
  • the face recognition method includes an operation S210, an operation S310, and an operation S220.
  • the operation S210 includes operations S211 to S213, and operation S310 includes operations S311 and S312.
  • acquiring the face image in operation S210 may include operations S211 to S213.
  • In operation S211, a complete image frame containing the face and the surrounding background environment is acquired in real time by the camera.
  • for example, a dedicated camera may be fixed 0.5 m to 0.8 m horizontally from the photographed face, pointing 0.5 degrees upward, to collect a complete image frame containing the face.
  • alternatively, a complete image frame containing a human face captured by a camera can be obtained from another location, such as the network.
  • a face area is acquired from the complete image frame in operation S212.
  • the face detection is performed using the Adaboost algorithm to obtain a face region.
  • the face image is extracted from the face region in operation S213.
  • the face image is extracted from a face region using the Adaboost algorithm.
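  • The disclosure names the Adaboost algorithm for face detection and face-image extraction. One possible, non-authoritative realization is OpenCV's Haar-cascade detector, which is trained with AdaBoost; the cascade file and function calls below are standard OpenCV usage rather than part of the disclosure.

```python
import cv2

# Frontal-face Haar cascade shipped with OpenCV; the classifier is AdaBoost-trained.
_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_region(frame_bgr):
    """Return (x, y, w, h) of the largest detected face in the complete frame, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda box: box[2] * box[3])  # keep the largest face

def extract_face_image(frame_bgr, face_box):
    """Crop the face image from the complete image frame."""
    x, y, w, h = face_box
    return frame_bgr[y:y + h, x:x + w]
```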
  • operation S310 determines whether the light source environment in which the face image is collected is visible light or near-infrared light, and may include operations S311 and S312.
  • In operation S311, the complete image frame is grayed out to obtain a grayscale image frame.
  • In operation S312, it is determined, according to the distribution of grayscale values in the grayscale image frame, whether the light source environment in which the face image was collected is visible light or near-infrared light.
  • FIGS. 5A and 5B schematically illustrate gray scale distributions in grayscale image frames corresponding to images acquired in visible light and near-infrared light environments, respectively.
  • FIG. 5A shows the grayscale distribution, obtained experimentally, in the grayscale image frame produced by graying out a complete image frame acquired under a visible light source.
  • FIG. 5B shows the grayscale distribution, obtained experimentally, in the grayscale image frame produced by graying out a complete image frame acquired under a near-infrared light source.
  • Gray begin refers to the grayscale minimum in the grayscale image frame.
  • Gray end refers to the grayscale maximum in the grayscale image frame.
  • FIGS. 5A and 5B illustrate the number of pixels corresponding to each grayscale value in the range from the minimum grayscale value to the maximum grayscale value in the grayscale image frame.
  • the grayscale image frame can be scanned line by line, and the number of pixels corresponding to each grayscale value is counted, for example, labeled N0, N1, ..., N255; the minimum grayscale value is labeled Gray begin, and the maximum grayscale value is labeled Gray end.
  • the grayscale image frame obtained from a complete image frame acquired under a visible light source has a larger number of pixels with grayscale values near Gray end.
  • the grayscale image frame obtained from a complete image frame acquired under a near-infrared light source has a larger number of pixels with grayscale values near Gray begin.
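  • A minimal sketch of the scan described above: graying out the complete frame, counting the pixels N0 ... N255 per grayscale value, and reading off Gray begin and Gray end. NumPy's bincount is used here as a compact stand-in for an explicit line-by-line scan; this is an illustrative equivalent, not the literal procedure of the disclosure.

```python
import cv2
import numpy as np

def grayscale_statistics(frame_bgr):
    """Gray out the complete frame and return (gray_frame, counts, gray_begin, gray_end).

    counts[k] is the number of pixels whose grayscale value equals k (N0 ... N255).
    """
    gray_frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    counts = np.bincount(gray_frame.ravel(), minlength=256)
    present = np.nonzero(counts)[0]
    gray_begin = int(present[0])   # smallest grayscale value present (Gray begin)
    gray_end = int(present[-1])    # largest grayscale value present (Gray end)
    return gray_frame, counts, gray_begin, gray_end
```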
  • FIG. 6 is a flowchart schematically showing how the light source environment in which the face image was acquired is determined from the grayscale distribution in operation S312, according to another embodiment of the present disclosure.
  • operation S312 includes operations S3121 to S3124.
  • In operation S3121, the first pixel number of grayscale values falling within the initial closed interval [Gray begin, Gray begin + preset value] is calculated. In operation S3122, the second pixel number of grayscale values falling within the terminating closed interval [Gray end - preset value, Gray end] is calculated. In operation S3123, the grayscale mean of the region of the grayscale image frame other than the face region is calculated, for example, marked as Gray Ave. In operation S3124, when the first pixel number is greater than the second pixel number and Gray Ave is less than the preset grayscale mean, it is determined that the light source environment in which the face image was collected is near-infrared light; otherwise, it is determined to be visible light.
  • the preset value is, for example, 15. It will be appreciated that the preset value may take different values in different embodiments.
  • similarly, the preset grayscale mean is, for example, 140.
  • FIG. 7 schematically illustrates an application scenario diagram of a face recognition method according to various embodiments of the present disclosure.
  • the specific application scenario may be as shown in FIG. 7, and the face image may be acquired in operation S210.
  • a complete image frame containing a face may be acquired by a dedicated camera in operation S211, and the type of the light source that collects the face image is unknown at this time.
  • in operation S212, Adaboost face detection is performed on the complete image frame to obtain the face region.
  • the face image is then extracted from the face region in operation S213.
  • the complete image frame is grayed out in operation S311 to obtain a corresponding grayscale image frame.
  • the grayscale image frame may be scanned line by line to count the number of pixels corresponding to each grayscale value, for example, labeled N0, N1, ..., N255; the minimum grayscale value is labeled Gray begin, and the maximum grayscale value is labeled Gray end.
  • the preset value may take a value of 15, and the preset grayscale mean may take a value of 140.
  • the gradation mean Gray Ave of the region other than the face region in the grayscale image frame is calculated.
  • face recognition is performed according to the light source environment in which the face image is collected.
  • feature extraction is performed on the face image obtained in operation S210, and the features are compared with the face map registered under the same type of light source, so that the light source condition at registration is consistent with the light source environment at authentication.
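  • Putting the scenario together, a hypothetical end-to-end flow for a single captured frame might look like the following. It reuses the illustrative helpers sketched earlier (detect_face_region, extract_face_image, classify_light_source, recognize), which are assumed to be defined in the same module; the function names and gallery arguments are assumptions for illustration only.

```python
import cv2

def authenticate_frame(frame_bgr, visible_gallery, nir_gallery, extract_features):
    """Illustrative end-to-end flow: detect, classify the light source, then compare."""
    face_box = detect_face_region(frame_bgr)                            # operation S212
    if face_box is None:
        return None
    face_image = extract_face_image(frame_bgr, face_box)                # operation S213
    gray_frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)            # operation S311
    light_source = classify_light_source(gray_frame, tuple(face_box))   # operation S312
    return recognize(face_image, light_source,                          # operation S220
                     visible_gallery, nir_gallery, extract_features)
```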
  • the face recognition method further includes separately registering a face map under visible light conditions and near-infrared light conditions.
  • in this way, the consistency of the face maps obtained by registration under visible light and near-infrared conditions (including the expression, the acquisition position, and the like) can be ensured, and errors introduced into the recognition process by differences between registered images can be reduced.
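  • As a rough illustration of maintaining separate registrations, the sketch below stores one feature per identity in each of two galleries keyed by light source. The dictionary layout and the extract_features hook are assumptions made for illustration, not part of the disclosure.

```python
def register_face(galleries, identity, face_image, light_source, extract_features):
    """Register a face map under either the visible-light or the near-infrared gallery.

    galleries maps "visible" and "near-infrared" to {identity: feature} dictionaries.
    """
    if light_source not in galleries:
        raise ValueError("light_source must be 'visible' or 'near-infrared'")
    galleries[light_source][identity] = extract_features(face_image)

# Example layout: galleries = {"visible": {}, "near-infrared": {}}
```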
  • FIG. 8 schematically shows a block diagram of a face recognition device in accordance with an embodiment of the present disclosure.
  • device 800 includes an acquisition module 810 and an identification module 820.
  • the obtaining module 810 is configured to acquire a face image.
  • the identification module 820 is configured to perform face recognition according to a light source environment in which the face image is collected.
  • the identification module 820 is specifically configured to compare the face image with the face map registered under visible light conditions when the light source environment in which the face image is collected is visible light, or to compare the face image with the face map registered under near-infrared light conditions when the light source environment in which the face image is collected is near-infrared light.
  • the apparatus further includes a light source determination module 830.
  • the light source determining module 830 is configured to determine whether the light source environment in which the face image is collected is visible light or near-infrared light.
  • the acquisition module 810 includes a full image frame acquisition sub-module 811, a face region acquisition sub-module 812, and a face image extraction sub-module 813.
  • the complete image frame acquisition sub-module 811 is used to acquire a complete image frame including a face and a background environment.
  • the face region acquisition sub-module 812 is configured to acquire a face region from the complete image frame.
  • the face image extraction sub-module 813 is configured to extract the face image from the face region.
  • the light source determination module 830 includes a grayscale sub-module 831 and a light source determination sub-module 832.
  • the grayscale sub-module 831 is used to grayscale the complete image frame to obtain a grayscale image frame.
  • the light source determining sub-module 832 is configured to determine, according to the distribution of gray scales in the grayscale image frame, whether the light source environment in which the face image is collected is visible light or near-infrared.
  • determining, according to the distribution of grayscale values in the grayscale image frame, whether the light source environment in which the face image is collected is visible light or near-infrared light includes: calculating a first pixel number of grayscale values in the grayscale image frame that fall within an initial closed interval, where one endpoint of the initial closed interval is the grayscale minimum of the grayscale image frame and the other endpoint is that minimum plus a preset value; calculating a second pixel number of grayscale values in the grayscale image frame that fall within a terminating closed interval, where one endpoint of the terminating closed interval is the grayscale maximum of the grayscale image frame minus the preset value and the other endpoint is the grayscale maximum; calculating a grayscale mean of the region of the grayscale image frame other than the face region; and, when the first pixel number is greater than the second pixel number and the grayscale mean is less than a preset grayscale mean, judging that the light source environment in which the face image is collected is near-infrared light, otherwise judging that it is visible light.
  • the preset value is, for example, 15. According to an embodiment of the present disclosure, the preset grayscale mean is, for example, 140.
  • the apparatus further includes a registration module 840.
  • the registration module 840 is configured to separately register a face map under visible light conditions and near-infrared light conditions.
  • the apparatus 800 may be used to implement the face recognition method described with reference to FIGS. 2 to 7.
  • any number of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure, or at least part of the functions of any number of them, may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be split into multiple modules for implementation.
  • any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on a chip, a system on a substrate, a system on a package, or an application specific integrated circuit (ASIC), or as hardware or firmware that integrates or packages the circuit in any other reasonable manner, or in any one of, or a suitable combination of, software, hardware, and firmware implementations.
  • one or more of the modules, sub-modules, units, sub-units in accordance with embodiments of the present disclosure may be implemented at least in part as a computer program module that, when executed, can perform the corresponding functions.
  • For example, any number of the obtaining module 810, the identification module 820, the light source determination module 830, the registration module 840, the complete image frame acquisition sub-module 811, the face region acquisition sub-module 812, the face image extraction sub-module 813, the graying sub-module 831, and the light source determination sub-module 832 may be combined and implemented in one module, or any one of them may be split into multiple modules. Alternatively, at least some of the functionality of one or more of these modules may be combined with at least some of the functionality of other modules and implemented in one module.
  • at least one of the obtaining module 810, the identification module 820, the light source determination module 830, the registration module 840, the complete image frame acquisition sub-module 811, the face region acquisition sub-module 812, the face image extraction sub-module 813, the graying sub-module 831, and the light source determination sub-module 832 can be implemented at least in part as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on a chip, a system on a substrate, a system on a package, or an application specific integrated circuit (ASIC), or as hardware or firmware that integrates or packages the circuit in any other reasonable manner, or in a suitable combination of software, hardware, and firmware implementations.
  • the obtaining module 810, the identifying module 820, the light source determining module 830, the registration module 840, the complete image frame obtaining sub-module 811, the face region obtaining sub-module 812, the face image extracting sub-module 813, the graying sub-module 831, and At least one of the light source determination sub-modules 832 may be implemented at least in part as a computer program module that, when executed by the computer, may perform the functions of the respective modules.
  • FIG. 9 schematically illustrates a block diagram of a computer system suitable for implementing embodiments of the present disclosure.
  • the computer system shown in FIG. 9 is merely an example and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • a computer system 900 in accordance with an embodiment of the present disclosure includes a processor 901 that can perform various appropriate actions and processes according to a program stored in a read only memory (ROM) 902 or a program loaded from a storage portion 908 into a random access memory (RAM) 903.
  • Processor 901 may, for example, comprise a general purpose microprocessor (e.g., a CPU), an instruction set processor, and/or a related chipset and/or a special purpose microprocessor (e.g., an application specific integrated circuit (ASIC)), and the like.
  • Processor 901 may also include an onboard memory for caching purposes.
  • the processor 901 may include a single processing unit or a plurality of processing units for performing different actions of the method flow according to the embodiments of the present disclosure described with reference to FIGS. 2-7.
  • the processor 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904.
  • the processor 901 performs various operations of the face recognition method described above with reference to FIGS. 2 to 7 by executing programs in the ROM 902 and/or the RAM 903. It is noted that the program can also be stored in one or more memories other than the ROM 902 and the RAM 903.
  • the processor 901 can also perform various operations of the face recognition method described above with reference to FIGS. 2 to 7 by executing a program stored in the one or more memories.
  • According to an embodiment of the present disclosure, the system 900 may also include an input/output (I/O) interface 905, which is also connected to the bus 904.
  • System 900 can also include one or more of the following components connected to the I/O interface 905: an input portion 906 including a keyboard, a mouse, and the like; an output portion 907 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage portion 908 including a hard disk and the like; and a communication portion 909 including a network interface card such as a LAN card, a modem, and the like.
  • the communication section 909 performs communication processing via a network such as the Internet.
  • A drive 910 is also connected to the I/O interface 905 as needed.
  • a removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory or the like is mounted on the drive 910 as needed so that a computer program read therefrom is installed into the storage portion 908 as needed.
  • an embodiment of the present disclosure includes a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for executing the method illustrated in the flowchart.
  • the computer program can be downloaded and installed from the network via the communication portion 909, and/or installed from the removable medium 911.
  • the above-described functions defined in the system of the embodiments of the present disclosure are executed when the computer program is executed by the processor 901.
  • the systems, devices, devices, modules, units, and the like described above may be implemented by a computer program module in accordance with an embodiment of the present disclosure.
  • the computer readable medium shown in the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two.
  • the computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer readable storage media may include, but are not limited to, electrical connections having one or more wires, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable Programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain or store a program, which can be used by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a data signal that is propagated in the baseband or as part of a carrier, carrying computer readable program code. Such propagated data signals can take a variety of forms including, but not limited to, electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer readable signal medium can also be any computer readable medium other than a computer readable storage medium, which can transmit, propagate, or transport a program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium can be transmitted by any suitable medium, including but not limited to wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
  • the computer readable medium may include one or more memories other than the ROM 902 and/or the RAM 903 and/or the ROM 902 and the RAM 903 described above.
  • each block of the flowcharts or block diagrams can represent a module, a program segment, or a portion of code that includes one or more executable instructions.
  • the functions noted in the blocks may also occur in a different order than that illustrated in the drawings. For example, two successively represented blocks may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams or flowcharts, and combinations of blocks in the block diagrams or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or can be implemented by a combination of dedicated hardware and computer instructions.
  • the present disclosure also provides a computer readable medium, which may be included in the apparatus described in the above embodiments, or may be separately present and not incorporated into the apparatus.
  • the computer readable medium described above carries one or more programs that, when executed by the apparatus, cause the apparatus to perform a face recognition method in accordance with an embodiment of the present disclosure.
  • the method includes acquiring a face image and performing face recognition according to a light source environment in which the face image is collected.
  • Performing face recognition according to the light source environment in which the face image is collected specifically includes: when the light source environment in which the face image is collected is visible light, comparing the face image with the face map registered under visible light conditions; or, when the light source environment in which the face image is collected is near-infrared light, comparing the face image with the face map registered under near-infrared light conditions.
  • the method further includes determining whether the light source environment in which the face image is acquired is visible light or near-infrared light.
  • acquiring the face image specifically includes acquiring a complete image frame including a face and the background environment, acquiring a face region from the complete image frame, and extracting the face image from the face region. Determining whether the light source environment in which the face image is collected is visible light or near-infrared light includes graying out the complete image frame to obtain a grayscale image frame, and determining, according to the grayscale distribution in the grayscale image frame, whether the light source environment in which the face image is collected is visible light or near-infrared light.
  • determining, according to the distribution of grayscale values in the grayscale image frame, whether the light source environment in which the face image is collected is visible light or near-infrared light includes the following operations: calculating a first pixel number of grayscale values in the grayscale image frame that fall within an initial closed interval, where one endpoint of the initial closed interval is the grayscale minimum of the grayscale image frame and the other endpoint is that minimum plus a preset value; calculating a second pixel number of grayscale values in the grayscale image frame that fall within a terminating closed interval, where one endpoint of the terminating closed interval is the grayscale maximum of the grayscale image frame minus the preset value and the other endpoint is the grayscale maximum; calculating a grayscale mean of the region of the grayscale image frame other than the face region; and, when the first pixel number is greater than the second pixel number and the grayscale mean is less than the preset grayscale mean, judging that the light source environment in which the face image is collected is near-infrared light, otherwise judging that it is visible light.
  • the preset value is, for example, 15.
  • the preset grayscale mean is, for example, 140.
  • the method further includes separately registering the face map under visible light conditions and near-infrared light conditions.

Abstract

The present invention relates to a human face recognition method. The method comprises: acquiring a human face image, and performing human face recognition according to the light source environment in which the human face image is collected. Performing human face recognition according to the light source environment in which the human face image is collected specifically comprises: when the light source environment in which the human face image is collected is visible light, performing comparison and recognition on the human face image and a human face image registered under a visible light condition; or, when the light source environment in which the human face image is collected is near-infrared light, performing comparison and recognition on the human face image and a human face image registered under a near-infrared light condition. Also provided are a human face recognition apparatus and system, and a medium.
PCT/CN2018/108994 2017-12-28 2018-09-30 Procédé, appareil et système de reconnaissance faciale humaine, et support WO2019128362A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711471179.9A CN109977741A (zh) 2017-12-28 2017-12-28 人脸识别方法、装置、系统及介质
CN201711471179.9 2017-12-28

Publications (1)

Publication Number Publication Date
WO2019128362A1 true WO2019128362A1 (fr) 2019-07-04

Family

ID=67065050

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/108994 WO2019128362A1 (fr) 2017-12-28 2018-09-30 Procédé, appareil et système de reconnaissance faciale humaine, et support

Country Status (2)

Country Link
CN (1) CN109977741A (fr)
WO (1) WO2019128362A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532992A (zh) * 2019-09-04 2019-12-03 深圳市捷顺科技实业股份有限公司 一种基于可见光和近红外的人脸识别方法
CN110826535A (zh) * 2019-12-02 2020-02-21 北京三快在线科技有限公司 一种人脸识别方法、系统及装置
CN111325139A (zh) * 2020-02-18 2020-06-23 浙江大华技术股份有限公司 一种唇语识别方法及装置
CN114898429A (zh) * 2022-05-10 2022-08-12 电子科技大学 一种热红外-可见光跨模态人脸识别的方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006003612A1 (fr) * 2004-07-02 2006-01-12 Koninklijke Philips Electronics N.V. Detection et/ou reconnaissance faciale(s)
CN101526991A (zh) * 2008-03-03 2009-09-09 联想(北京)有限公司 一种笔记本电脑及其人脸识别方法
CN202197300U (zh) * 2010-08-05 2012-04-18 北京海鑫智圣技术有限公司 移动人脸识别系统
CN102831379A (zh) * 2011-06-14 2012-12-19 汉王科技股份有限公司 人脸图像识别方法及装置
CN103400108A (zh) * 2013-07-10 2013-11-20 北京小米科技有限责任公司 人脸识别方法、装置和移动终端
CN103902962A (zh) * 2012-12-28 2014-07-02 汉王科技股份有限公司 一种遮挡或光源自适应人脸识别方法和装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101964056B (zh) * 2010-10-26 2012-06-27 徐勇 一种具有活体检测功能的双模态人脸认证方法和系统
US10452894B2 (en) * 2012-06-26 2019-10-22 Qualcomm Incorporated Systems and method for facial verification
CN106372615A (zh) * 2016-09-19 2017-02-01 厦门中控生物识别信息技术有限公司 一种人脸防伪识别方法以及装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006003612A1 (fr) * 2004-07-02 2006-01-12 Koninklijke Philips Electronics N.V. Detection et/ou reconnaissance faciale(s)
CN101526991A (zh) * 2008-03-03 2009-09-09 联想(北京)有限公司 一种笔记本电脑及其人脸识别方法
CN202197300U (zh) * 2010-08-05 2012-04-18 北京海鑫智圣技术有限公司 移动人脸识别系统
CN102831379A (zh) * 2011-06-14 2012-12-19 汉王科技股份有限公司 人脸图像识别方法及装置
CN103902962A (zh) * 2012-12-28 2014-07-02 汉王科技股份有限公司 一种遮挡或光源自适应人脸识别方法和装置
CN103400108A (zh) * 2013-07-10 2013-11-20 北京小米科技有限责任公司 人脸识别方法、装置和移动终端

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532992A (zh) * 2019-09-04 2019-12-03 深圳市捷顺科技实业股份有限公司 一种基于可见光和近红外的人脸识别方法
CN110532992B (zh) * 2019-09-04 2023-01-10 深圳市捷顺科技实业股份有限公司 一种基于可见光和近红外的人脸识别方法
CN110826535A (zh) * 2019-12-02 2020-02-21 北京三快在线科技有限公司 一种人脸识别方法、系统及装置
CN111325139A (zh) * 2020-02-18 2020-06-23 浙江大华技术股份有限公司 一种唇语识别方法及装置
CN111325139B (zh) * 2020-02-18 2023-08-04 浙江大华技术股份有限公司 一种唇语识别方法及装置
CN114898429A (zh) * 2022-05-10 2022-08-12 电子科技大学 一种热红外-可见光跨模态人脸识别的方法
CN114898429B (zh) * 2022-05-10 2023-05-30 电子科技大学 一种热红外-可见光跨模态人脸识别的方法

Also Published As

Publication number Publication date
CN109977741A (zh) 2019-07-05

Similar Documents

Publication Publication Date Title
WO2019128362A1 (fr) Procédé, appareil et système de reconnaissance faciale humaine, et support
CN108898086B (zh) 视频图像处理方法及装置、计算机可读介质和电子设备
US11244435B2 (en) Method and apparatus for generating vehicle damage information
CN108701216B (zh) 一种人脸脸型识别方法、装置和智能终端
WO2019109526A1 (fr) Procédé et dispositif de reconnaissance de l'âge de l'image d'un visage, et support de stockage
WO2018086543A1 (fr) Procédé d'identification de corps vivant, procédé d'authentification d'identité, terminal, serveur et support d'information
WO2020062493A1 (fr) Procédé et appareil de traitement d'image
WO2019033572A1 (fr) Procédé de détection de situation de visage bloqué, dispositif et support d'informations
WO2019001481A1 (fr) Procédé et appareil de recherche de véhicule et d'identification de caractéristique d'aspect de véhicule, support de stockage et dispositif électronique
US20200110965A1 (en) Method and apparatus for generating vehicle damage information
CN108734185B (zh) 图像校验方法和装置
WO2019149186A1 (fr) Procédé et appareil de génération d'informations
US11367310B2 (en) Method and apparatus for identity verification, electronic device, computer program, and storage medium
WO2020107951A1 (fr) Procédé et appareil de facturation de produits à base d'images, support et dispositif électronique
WO2021083069A1 (fr) Procédé et dispositif de formation de modèle d'échange de visages
JP2016062253A (ja) オブジェクト識別装置、オブジェクト識別方法及びプログラム
CN110660102B (zh) 基于人工智能的说话人识别方法及装置、系统
CN108388889B (zh) 用于分析人脸图像的方法和装置
CN110941978B (zh) 一种未识别身份人员的人脸聚类方法、装置及存储介质
CN108509994B (zh) 人物图像聚类方法和装置
CA3052846A1 (fr) Procede et dispositif de reconnaissance de caracteres, dispositif electronique et support de stockage
WO2019080702A1 (fr) Procédé et appareil de traitement d'images
CN113569740B (zh) 视频识别模型训练方法与装置、视频识别方法与装置
CN111783626A (zh) 图像识别方法、装置、电子设备及存储介质
CN108615006B (zh) 用于输出信息的方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18893439

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28/09/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18893439

Country of ref document: EP

Kind code of ref document: A1