WO2019071663A1 - Electronic apparatus, virtual sample generation method and storage medium - Google Patents

Electronic apparatus, virtual sample generation method and storage medium

Info

Publication number
WO2019071663A1
Authority
WO
WIPO (PCT)
Prior art keywords
face image
wavelet
wavelet coefficient
rule
wavelet transform
Prior art date
Application number
PCT/CN2017/108775
Other languages
French (fr)
Chinese (zh)
Inventor
戴磊
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019071663A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20064 Wavelet transform [DWT]
    • G06T2207/20081 Training; Learning
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Definitions

  • The present invention relates to the field of face recognition, and in particular to an electronic device, a virtual sample generating method, and a storage medium.
  • With the advancement of technology and the improvement of social acceptance, face recognition technology is being applied more and more widely. For example, face recognition access-control and attendance systems are applied in enterprise management, face recognition security doors are used in residential security management, and face recognition systems and networks are used by public security, judicial, and criminal investigation departments to search for fugitives nationwide.
  • In view of this, the present invention provides an electronic device, a virtual sample generating method, and a storage medium that can generate virtual samples for a given illumination condition from an average face image under that predetermined illumination condition, thereby avoiding the cumbersome process of collecting training samples, saving a great deal of manpower, and generating samples efficiently.
  • First, to achieve the above object, the present invention provides an electronic device including a memory and a processor connected to the memory, the processor being configured to execute a virtual sample generation program stored in the memory; when the virtual sample generation program is executed by the processor, the following steps are implemented:
  • A. If a sample of a first face image under a predetermined first illumination condition is required, the first face image is subjected to wavelet transform processing using a predetermined wavelet transform rule to obtain a first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and a second wavelet coefficient for the high-frequency portion of those grayscale values, where the first wavelet coefficient corresponds to the second illumination condition reflected in the first face image and the second wavelet coefficient corresponds to the first face contour details reflected in the first face image;
  • B. Using the wavelet transform rule, the predetermined average face image under the first illumination condition is subjected to wavelet transform processing to obtain a third wavelet coefficient for the low-frequency portion of the grayscale values of the average face image and a fourth wavelet coefficient for the high-frequency portion of those grayscale values, where the third wavelet coefficient corresponds to the first illumination condition reflected in the average face image and the fourth wavelet coefficient corresponds to the second face contour details reflected in the average face image;
  • C. The third wavelet coefficient is acquired, and the first wavelet coefficient is replaced with the third wavelet coefficient;
  • D. The third wavelet coefficient and the second wavelet coefficient are fused using the inverse transform rule of the wavelet transform rule to obtain a virtual face image sample of the first face image under the first illumination condition.
  • In addition, to achieve the above object, the present invention further provides a virtual sample generating method, the method comprising the following steps:
  • A. If a sample of a first face image under a predetermined first illumination condition is required, the first face image is subjected to wavelet transform processing using a predetermined wavelet transform rule to obtain a first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and a second wavelet coefficient for the high-frequency portion of those grayscale values, where the first wavelet coefficient corresponds to the second illumination condition reflected in the first face image and the second wavelet coefficient corresponds to the first face contour details reflected in the first face image;
  • B. Using the wavelet transform rule, the predetermined average face image under the first illumination condition is subjected to wavelet transform processing to obtain a third wavelet coefficient for the low-frequency portion of the grayscale values of the average face image and a fourth wavelet coefficient for the high-frequency portion of those grayscale values, where the third wavelet coefficient corresponds to the first illumination condition reflected in the average face image and the fourth wavelet coefficient corresponds to the second face contour details reflected in the average face image;
  • C. The third wavelet coefficient is acquired, and the first wavelet coefficient is replaced with the third wavelet coefficient;
  • D. The third wavelet coefficient and the second wavelet coefficient are fused using the inverse transform rule of the wavelet transform rule to obtain a virtual face image sample of the first face image under the first illumination condition.
  • Further, to achieve the above object, the present invention also provides a computer readable storage medium storing a virtual sample generation program, the virtual sample generation program being executable by at least one processor to cause the at least one processor to perform the following steps:
  • A. If a sample of a first face image under a predetermined first illumination condition is required, the first face image is subjected to wavelet transform processing using a predetermined wavelet transform rule to obtain a first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and a second wavelet coefficient for the high-frequency portion of those grayscale values, where the first wavelet coefficient corresponds to the second illumination condition reflected in the first face image and the second wavelet coefficient corresponds to the first face contour details reflected in the first face image;
  • B. Using the wavelet transform rule, the predetermined average face image under the first illumination condition is subjected to wavelet transform processing to obtain a third wavelet coefficient for the low-frequency portion of the grayscale values of the average face image and a fourth wavelet coefficient for the high-frequency portion of those grayscale values, where the third wavelet coefficient corresponds to the first illumination condition reflected in the average face image and the fourth wavelet coefficient corresponds to the second face contour details reflected in the average face image;
  • C. The third wavelet coefficient is acquired, and the first wavelet coefficient is replaced with the third wavelet coefficient;
  • D. The third wavelet coefficient and the second wavelet coefficient are fused using the inverse transform rule of the wavelet transform rule to obtain a virtual face image sample of the first face image under the first illumination condition.
  • Compared with the prior art, the electronic device, virtual sample generating method, and storage medium proposed by the present invention first perform wavelet transform processing on a first face image using a predetermined wavelet transform rule to obtain a first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and a second wavelet coefficient for the high-frequency portion of those grayscale values; then perform wavelet transform processing on a predetermined average face image under a first illumination condition to obtain a third wavelet coefficient for the low-frequency portion of the grayscale values of the average face image and a fourth wavelet coefficient for the high-frequency portion of those grayscale values; and finally replace the first wavelet coefficient with the third wavelet coefficient, keep the second wavelet coefficient unchanged, and fuse the third wavelet coefficient with the second wavelet coefficient using the inverse of the wavelet transform rule to obtain a virtual face image sample of the first face image under the predetermined first illumination condition.
  • In this way, the average face image under the first illumination condition and the first face image under the second illumination condition are processed according to a predetermined wavelet transform rule to obtain a virtual sample of the first face image under the first illumination condition, which removes the tedium of collecting training samples, saves a great deal of manpower, and generates samples efficiently.
  • FIG. 1 is a schematic diagram of an optional hardware architecture of an electronic device according to the present invention;
  • FIG. 2 is a schematic diagram of the virtual functional modules of the computer program of the virtual sample generation system in the electronic device of the present invention;
  • FIG. 3 is a schematic flowchart of an implementation of a preferred embodiment of the virtual sample generation method according to the present invention.
  • The descriptions involving "first", "second", and the like in the present invention are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature.
  • In addition, the technical solutions of the various embodiments may be combined with one another, but only on the basis that a person of ordinary skill in the art can realize the combination; when a combination of technical solutions is contradictory or cannot be realized, that combination should be considered not to exist and falls outside the scope of protection claimed by the present invention.
  • In this embodiment, the electronic device 10 may include, but is not limited to, a memory 101, a processor 102, a network interface 103, and a communication bus 104 that are communicably connected to one another through a system bus. It should be noted that FIG. 1 only shows the electronic device 10 with components 101-104; not all of the illustrated components are required, and more or fewer components may be implemented instead.
  • The memory 101 includes at least one type of computer readable storage medium, which includes flash memory, hard disks, multimedia cards, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, and the like.
  • In some embodiments, the memory 101 may be an internal storage unit of the electronic device 10, such as a hard disk or internal memory of the electronic device 10.
  • In other embodiments, the memory 101 may also be an external storage device of the electronic device 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device 10.
  • Of course, the memory 101 may also include both an internal storage unit of the electronic device 10 and an external storage device thereof.
  • In this embodiment, the memory 101 is generally used to store the operating system installed on the electronic device 10 and various types of application software, such as the virtual sample generation program. In addition, the memory 101 may also be used to temporarily store various types of data that have been output or are to be output.
  • In some embodiments, the processor 102 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip.
  • The processor 102 is typically used to control the overall operation of the electronic device 10.
  • In this embodiment, the processor 102 is configured to run program code or process data stored in the memory 101, for example, to run the virtual sample generation program.
  • The network interface 103 may include a wireless network interface or a wired network interface and is typically used to establish a communication connection between the electronic device 10 and other electronic devices.
  • The communication bus 104 is used to implement connection and communication between the components 101-103.
  • The computer program of the virtual sample generation system 200 stored in the memory 101 is executed by the processor 102 to implement the steps of the virtual sample generation method of the various embodiments of the present application.
  • In the embodiment of the electronic device shown in FIG. 1, when the virtual sample generation program stored in the memory 101 is executed by the processor 102, the following steps are implemented:
  • A. If a sample of the first face image under a predetermined first illumination condition is required, the first face image is subjected to wavelet transform processing using a predetermined wavelet transform rule to obtain a first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and a second wavelet coefficient for the high-frequency portion of those grayscale values, where the first wavelet coefficient corresponds to the second illumination condition reflected in the first face image and the second wavelet coefficient corresponds to the first face contour details reflected in the first face image;
  • B. Using the wavelet transform rule, the average face image under the predetermined first illumination condition is subjected to wavelet transform processing to obtain a third wavelet coefficient for the low-frequency portion of the grayscale values of the average face image and a fourth wavelet coefficient for the high-frequency portion of those grayscale values, where the third wavelet coefficient corresponds to the first illumination condition reflected in the average face image and the fourth wavelet coefficient corresponds to the second face contour details reflected in the average face image;
  • C. The third wavelet coefficient is acquired, and the first wavelet coefficient is replaced with the third wavelet coefficient;
  • D. The third wavelet coefficient and the second wavelet coefficient are fused using the inverse transform rule of the wavelet transform rule to obtain a virtual face image sample of the first face image under the first illumination condition.
  • In this embodiment, the solution of the present invention is explained by taking the acquisition of a sample under the first illumination condition as an example.
  • When a sample of the first face image under the first illumination condition is required, the first face image is subjected to wavelet transform processing using the wavelet transform rule.
  • It can be understood that, in face recognition technology, a face image used as a training sample needs to contain the contour of the face. Therefore, in order to ensure the usefulness of the obtained samples, before the first face image is subjected to wavelet transform processing using the wavelet transform rule, it is preferable to detect the facial contour contained in the first face image and to confirm that the first face image contains all face feature points of the facial contour.
  • Each face image is composed of a certain number of pixels, for example, 128×68 pixels, and the pixel at each pixel coordinate corresponds to one grayscale value. Using the wavelet transform rule, the grayscale values of a face image can be separated into a low-frequency part and a high-frequency part according to the image frequency distribution, yielding the wavelet coefficients corresponding to the low-frequency part of the grayscale values and the wavelet coefficients corresponding to the high-frequency part.
  • In this embodiment, the first face image is subjected to wavelet transform processing using a 3-level two-dimensional wavelet transform rule to obtain the first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and the second wavelet coefficient for the high-frequency portion of those grayscale values.
  • After the first face image has been subjected to wavelet transform processing, the same wavelet transform rule is used to perform wavelet transform processing on the average face image under the first illumination condition to obtain the third wavelet coefficient for the low-frequency portion of the grayscale values of the average face image and the fourth wavelet coefficient for the high-frequency portion of those grayscale values.
  • The process of generating the average face image under the first illumination condition includes the following steps:
  • E. Second face images of a preset number of persons (for example, 2,000 or 500 persons) taken under the first illumination condition are obtained, and the facial contour contained in each second face image is detected;
  • F. Each second face image is cropped using a predetermined cropping rule, so that each cropped second face image contains the detected facial contour and all cropped second face images have the same size specification;
  • G. The average of the grayscale values of the pixels at each identical pixel coordinate across all cropped second face images is calculated, and Gaussian filtering is applied to the calculated averages to obtain the average face image.
  • Specifically, calculating the average grayscale value at each identical pixel coordinate of the second face images includes: scanning all pixels in each second face image; and obtaining, for each identical pixel coordinate, the grayscale values of that pixel in each scanned second face image and taking the average of the obtained grayscale values.
  • It should be noted that the predetermined cropping rule is as follows: identify all face feature points of the second face image to be cropped; determine the minimum bounding box of a preset shape (for example, a rectangle) that contains all of the identified face feature points, where the minimum bounding box refers to the bounding box of the preset shape that contains all of the identified face feature points and has the smallest area; then determine the geometric center point of the minimum bounding box and use it as the geometric center point of a cropping frame of a preset size specification (for example, x pixels by y pixels), thereby determining the size and region position of the cropping frame used to crop the second face image; finally, crop the second face image using the cropping frame with the determined size and region position.
  • In this way, the electronic device performs wavelet transform processing on the first face image to obtain the first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and the second wavelet coefficient for the high-frequency portion of those grayscale values; performs wavelet transform processing on the average face image under the first illumination condition to obtain the third wavelet coefficient for the low-frequency portion of the grayscale values of the average face image and the fourth wavelet coefficient for the high-frequency portion of those grayscale values; replaces the first wavelet coefficient with the third wavelet coefficient; and then applies the inverse wavelet transform to the third wavelet coefficient and the second wavelet coefficient to obtain a sample of the first face image under the first illumination condition.
  • In other embodiments, the computer program of the virtual sample generation system may be divided into different logical parts according to the functions implemented by its various parts, and each logical part may be described as a virtual functional module with a corresponding function.
  • FIG. 2 is a schematic diagram of the virtual functional modules of the computer program of the virtual sample generation system in the electronic device of the present invention.
  • In this embodiment, each virtual functional module of the computer program of the virtual sample generation system is named according to the function implemented by the corresponding part of the computer program.
  • The computer program of the virtual sample generation system can be divided into a first processing module 201, a second processing module 202, a replacement module 203, and a virtual sample generation module 204; a code sketch of this module arrangement is given after this list.
  • The functions or operational steps implemented by the modules 201-204 are similar to those described above and are not described in detail again here. By way of example:
  • The first processing module 201 is configured to, if a sample of the first face image under a predetermined first illumination condition is required, perform wavelet transform processing on the first face image using a predetermined wavelet transform rule to obtain a first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and a second wavelet coefficient for the high-frequency portion of those grayscale values, where the first wavelet coefficient corresponds to the second illumination condition reflected in the first face image and the second wavelet coefficient corresponds to the first face contour details reflected in the first face image;
  • The second processing module 202 is configured to perform wavelet transform processing on the average face image under the first illumination condition using the wavelet transform rule to obtain a third wavelet coefficient for the low-frequency portion of the grayscale values of the average face image and a fourth wavelet coefficient for the high-frequency portion of those grayscale values, where the third wavelet coefficient corresponds to the first illumination condition reflected in the average face image and the fourth wavelet coefficient corresponds to the second face contour details reflected in the average face image;
  • The replacement module 203 is configured to acquire the third wavelet coefficient and replace the first wavelet coefficient with the third wavelet coefficient;
  • The virtual sample generation module 204 is configured to fuse the third wavelet coefficient and the second wavelet coefficient using the inverse transform rule of the wavelet transform rule to obtain a virtual face image sample of the first face image under the first illumination condition.
  • The present invention also provides a virtual sample generation method. Please refer to FIG. 3, which is a schematic flowchart of an implementation of a preferred embodiment of the virtual sample generation method of the present invention.
  • The method can be performed by a device, and the device can be implemented by software and/or hardware.
  • The virtual sample generation method includes:
  • Step S301: if a sample of a first face image under a predetermined first illumination condition is required, perform wavelet transform processing on the first face image using a predetermined wavelet transform rule to obtain a first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and a second wavelet coefficient for the high-frequency portion of those grayscale values, where the first wavelet coefficient corresponds to the second illumination condition reflected in the first face image and the second wavelet coefficient corresponds to the first face contour details reflected in the first face image;
  • Step S302: using the wavelet transform rule, perform wavelet transform processing on the predetermined average face image under the first illumination condition to obtain a third wavelet coefficient for the low-frequency portion of the grayscale values of the average face image and a fourth wavelet coefficient for the high-frequency portion of those grayscale values, where the third wavelet coefficient corresponds to the first illumination condition reflected in the average face image and the fourth wavelet coefficient corresponds to the second face contour details reflected in the average face image;
  • Step S303: acquire the third wavelet coefficient, and replace the first wavelet coefficient with the third wavelet coefficient;
  • Step S304: fuse the third wavelet coefficient and the second wavelet coefficient using the inverse transform rule of the wavelet transform rule to obtain a virtual face image sample of the first face image under the first illumination condition.
  • In this embodiment, the solution of the present invention is explained by taking the acquisition of a sample under the first illumination condition as an example.
  • When a sample of the first face image under the first illumination condition is required, the first face image is subjected to wavelet transform processing using the wavelet transform rule.
  • It can be understood that, in face recognition technology, a face image used as a training sample needs to contain the contour of the face. Therefore, in order to ensure the usefulness of the obtained samples, before the first face image is subjected to wavelet transform processing using the wavelet transform rule, it is preferable to detect the facial contour contained in the first face image and to confirm that the first face image contains all face feature points of the facial contour.
  • Each face image is composed of a certain number of pixels, for example, 128×68 pixels, and each pixel corresponds to one grayscale value. Using the wavelet transform rule, the grayscale values of a face image can be separated into a low-frequency part and a high-frequency part according to the image frequency distribution, yielding the wavelet coefficients corresponding to the low-frequency part of the grayscale values and the wavelet coefficients corresponding to the high-frequency part.
  • In this embodiment, the first face image is subjected to wavelet transform processing using a 3-level two-dimensional wavelet transform rule to obtain the first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and the second wavelet coefficient for the high-frequency portion of those grayscale values.
  • After the first face image has been subjected to wavelet transform processing, the same wavelet transform rule is used to perform wavelet transform processing on the average face image under the first illumination condition to obtain the third wavelet coefficient for the low-frequency portion of the grayscale values of the average face image and the fourth wavelet coefficient for the high-frequency portion of those grayscale values.
  • The process of generating the average face image under the first illumination condition includes the following steps:
  • E. Second face images of a preset number of persons (for example, 2,000 or 500 persons) taken under the first illumination condition are obtained, and the facial contour contained in each second face image is detected;
  • F. Each second face image is cropped using a predetermined cropping rule, so that each cropped second face image contains the detected facial contour and all cropped second face images have the same size specification;
  • G. The average of the grayscale values of the pixels at each identical pixel coordinate across all cropped second face images is calculated, and Gaussian filtering is applied to the calculated averages to obtain the average face image.
  • Specifically, calculating the average grayscale value at each identical pixel coordinate of the second face images includes: scanning all pixels in each second face image; and obtaining, for each identical pixel coordinate, the grayscale values of that pixel in each scanned second face image and taking the average of the obtained grayscale values.
  • It should be noted that the predetermined cropping rule is as follows: identify all face feature points of the second face image to be cropped; determine the minimum bounding box of a preset shape (for example, a rectangle) that contains all of the identified face feature points, where the minimum bounding box refers to the bounding box of the preset shape that contains all of the identified face feature points and has the smallest area; then determine the geometric center point of the minimum bounding box and use it as the geometric center point of a cropping frame of a preset size specification (for example, x pixels by y pixels), thereby determining the size and region position of the cropping frame used to crop the second face image; finally, crop the second face image using the cropping frame with the determined size and region position.
  • In this way, the virtual sample generating method performs wavelet transform processing on the first face image to obtain the first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and the second wavelet coefficient for the high-frequency portion of those grayscale values; performs wavelet transform processing on the average face image under the first illumination condition to obtain the third wavelet coefficient for the low-frequency portion of the grayscale values of the average face image and the fourth wavelet coefficient for the high-frequency portion of those grayscale values; replaces the first wavelet coefficient with the third wavelet coefficient; and applies the inverse wavelet transform to the third wavelet coefficient and the second wavelet coefficient to obtain a sample of the first face image under the first illumination condition.
  • In addition, an embodiment of the present invention further provides a computer readable storage medium on which a virtual sample generation program is stored, the virtual sample generation program being executable by at least one processor to implement the following steps:
  • If a sample of a first face image under a predetermined first illumination condition is required, the first face image is subjected to wavelet transform processing using a predetermined wavelet transform rule to obtain a first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and a second wavelet coefficient for the high-frequency portion of those grayscale values, where the first wavelet coefficient corresponds to the second illumination condition reflected in the first face image and the second wavelet coefficient corresponds to the first face contour details reflected in the first face image;
  • The predetermined average face image under the first illumination condition is subjected to wavelet transform processing to obtain a third wavelet coefficient for the low-frequency portion of the grayscale values of the average face image and a fourth wavelet coefficient for the high-frequency portion of those grayscale values, where the third wavelet coefficient corresponds to the first illumination condition reflected in the average face image and the fourth wavelet coefficient corresponds to the second face contour details reflected in the average face image;
  • The third wavelet coefficient is acquired, and the first wavelet coefficient is replaced with the third wavelet coefficient;
  • The third wavelet coefficient and the second wavelet coefficient are fused to obtain a virtual face image sample of the first face image under the first illumination condition.
  • The average of the grayscale values of the pixels at each identical pixel coordinate across all cropped second face images is calculated, and Gaussian filtering is applied to the calculated averages to obtain the average face image.
  • The first face image is cropped using the predetermined cropping rule, so that the cropped first face image contains the detected facial contour and the size specification of the cropped first face image is the same as that of the average face image.
  • The specific embodiments of the computer readable storage medium of the present invention are substantially the same as the embodiments of the virtual sample generating method described above and are not described again here.
  • It should be noted that the predetermined first illumination condition, the preset number, the preset size specification, and the like involved in the foregoing embodiments need to be preset, and the user may set them according to actual conditions.
  • The methods of the foregoing embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) that includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the various embodiments of the present invention.
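As a rough illustration only, the four modules described above could be arranged as in the following sketch. The Python language, the PyWavelets library, and the class and method names are all assumptions made for illustration; none of them is specified by the patent.

```python
import numpy as np
import pywt


class VirtualSampleGenerator:
    """Hypothetical arrangement of modules 201-204; names and library are assumptions."""

    def __init__(self, wavelet: str = "haar", level: int = 3):
        self.wavelet, self.level = wavelet, level

    def first_processing(self, first_img: np.ndarray):
        # Module 201: decompose the first face image (second illumination condition).
        return pywt.wavedec2(first_img.astype(float), self.wavelet, level=self.level)

    def second_processing(self, average_img: np.ndarray):
        # Module 202: decompose the average face image under the first illumination condition.
        return pywt.wavedec2(average_img.astype(float), self.wavelet, level=self.level)

    def replace(self, coeffs_first, coeffs_avg):
        # Module 203: swap in the average face's low-frequency (third) coefficients,
        # keeping the first image's high-frequency (second) coefficients.
        return [coeffs_avg[0]] + list(coeffs_first[1:])

    def generate(self, fused_coeffs) -> np.ndarray:
        # Module 204: the inverse transform fuses illumination with contour details.
        return pywt.waverec2(fused_coeffs, self.wavelet)
```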

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are an electronic apparatus, a virtual sample generation method and a storage medium. The method comprises: performing wavelet transform processing on a first face image to obtain a first wavelet coefficient for the low-frequency part of the grayscale values of the first face image and a second wavelet coefficient for the high-frequency part of those grayscale values; performing wavelet transform processing on an average face image under a first illumination condition to obtain a third wavelet coefficient for the low-frequency part of the grayscale values of the average face image and a fourth wavelet coefficient for the high-frequency part of those grayscale values; and replacing the first wavelet coefficient with the third wavelet coefficient and performing inverse wavelet transform processing on the third wavelet coefficient and the second wavelet coefficient to obtain a sample of the first face image under the first illumination condition. By means of the method, the present invention removes the tedium of collecting training samples, saves substantial manpower, and improves the efficiency of sample generation.

Description

Electronic device, virtual sample generation method and storage medium
This application claims priority to Chinese Patent Application No. 201710929640.4, entitled "Electronic Device, Virtual Sample Generation Method and Storage Medium", filed with the Chinese Patent Office on October 9, 2017, the entire contents of which are incorporated herein by reference.
Technical field
The present invention relates to the field of face recognition, and in particular to an electronic device, a virtual sample generating method, and a storage medium.
Background
With the advancement of technology and the improvement of social acceptance, face recognition technology is being applied more and more widely. For example, face recognition access-control and attendance systems are applied in enterprise management, face recognition security doors are used in residential security management, and face recognition systems and networks are used by public security, judicial, and criminal investigation departments to search for fugitives nationwide.
At present, in enterprise or residential security management, when face recognition technology is used, the accuracy of face recognition is related to the illumination intensity at the location of the person being identified. Therefore, in order to improve the accuracy of face recognition, images taken under different illumination intensities need to be collected as a training sample set. It is usually difficult to collect pictures of different people under many different illumination conditions, and when the number of required training samples is large (for example, tens of thousands or hundreds of thousands of samples are usually needed), the process of collecting training samples is cumbersome, which not only wastes a great deal of manpower but also results in low collection efficiency.
Summary of the invention
In view of this, the present invention provides an electronic device, a virtual sample generating method, and a storage medium that can generate virtual samples for a given illumination condition from an average face image under that predetermined illumination condition, thereby avoiding the cumbersome process of collecting training samples, saving a great deal of manpower, and generating samples efficiently.
First, to achieve the above object, the present invention provides an electronic device including a memory and a processor connected to the memory, the processor being configured to execute a virtual sample generation program stored in the memory; when the virtual sample generation program is executed by the processor, the following steps are implemented:
A. If a sample of a first face image under a predetermined first illumination condition is required, the first face image is subjected to wavelet transform processing using a predetermined wavelet transform rule to obtain a first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and a second wavelet coefficient for the high-frequency portion of those grayscale values, where the first wavelet coefficient corresponds to the second illumination condition reflected in the first face image and the second wavelet coefficient corresponds to the first face contour details reflected in the first face image;
B. Using the wavelet transform rule, the predetermined average face image under the first illumination condition is subjected to wavelet transform processing to obtain a third wavelet coefficient for the low-frequency portion of the grayscale values of the average face image and a fourth wavelet coefficient for the high-frequency portion of those grayscale values, where the third wavelet coefficient corresponds to the first illumination condition reflected in the average face image and the fourth wavelet coefficient corresponds to the second face contour details reflected in the average face image;
C. The third wavelet coefficient is acquired, and the first wavelet coefficient is replaced with the third wavelet coefficient;
D. The third wavelet coefficient and the second wavelet coefficient are fused using the inverse transform rule of the wavelet transform rule to obtain a virtual face image sample of the first face image under the first illumination condition.
In addition, to achieve the above object, the present invention further provides a virtual sample generating method, the method comprising the following steps:
A. If a sample of a first face image under a predetermined first illumination condition is required, the first face image is subjected to wavelet transform processing using a predetermined wavelet transform rule to obtain a first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and a second wavelet coefficient for the high-frequency portion of those grayscale values, where the first wavelet coefficient corresponds to the second illumination condition reflected in the first face image and the second wavelet coefficient corresponds to the first face contour details reflected in the first face image;
B. Using the wavelet transform rule, the predetermined average face image under the first illumination condition is subjected to wavelet transform processing to obtain a third wavelet coefficient for the low-frequency portion of the grayscale values of the average face image and a fourth wavelet coefficient for the high-frequency portion of those grayscale values, where the third wavelet coefficient corresponds to the first illumination condition reflected in the average face image and the fourth wavelet coefficient corresponds to the second face contour details reflected in the average face image;
C. The third wavelet coefficient is acquired, and the first wavelet coefficient is replaced with the third wavelet coefficient;
D. The third wavelet coefficient and the second wavelet coefficient are fused using the inverse transform rule of the wavelet transform rule to obtain a virtual face image sample of the first face image under the first illumination condition.
Further, to achieve the above object, the present invention also provides a computer readable storage medium storing a virtual sample generation program, the virtual sample generation program being executable by at least one processor to cause the at least one processor to perform the following steps:
A. If a sample of a first face image under a predetermined first illumination condition is required, the first face image is subjected to wavelet transform processing using a predetermined wavelet transform rule to obtain a first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and a second wavelet coefficient for the high-frequency portion of those grayscale values, where the first wavelet coefficient corresponds to the second illumination condition reflected in the first face image and the second wavelet coefficient corresponds to the first face contour details reflected in the first face image;
B. Using the wavelet transform rule, the predetermined average face image under the first illumination condition is subjected to wavelet transform processing to obtain a third wavelet coefficient for the low-frequency portion of the grayscale values of the average face image and a fourth wavelet coefficient for the high-frequency portion of those grayscale values, where the third wavelet coefficient corresponds to the first illumination condition reflected in the average face image and the fourth wavelet coefficient corresponds to the second face contour details reflected in the average face image;
C. The third wavelet coefficient is acquired, and the first wavelet coefficient is replaced with the third wavelet coefficient;
D. The third wavelet coefficient and the second wavelet coefficient are fused using the inverse transform rule of the wavelet transform rule to obtain a virtual face image sample of the first face image under the first illumination condition.
Compared with the prior art, the electronic device, virtual sample generating method, and storage medium proposed by the present invention first perform wavelet transform processing on a first face image using a predetermined wavelet transform rule to obtain a first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and a second wavelet coefficient for the high-frequency portion of those grayscale values; then perform wavelet transform processing on a predetermined average face image under a first illumination condition to obtain a third wavelet coefficient for the low-frequency portion of the grayscale values of the average face image and a fourth wavelet coefficient for the high-frequency portion of those grayscale values; and finally replace the first wavelet coefficient with the third wavelet coefficient, keep the second wavelet coefficient unchanged, and fuse the third wavelet coefficient with the second wavelet coefficient using the inverse of the wavelet transform rule to obtain a virtual face image sample of the first face image under the predetermined first illumination condition. In this way, the average face image under the first illumination condition and the first face image under the second illumination condition are processed according to a predetermined wavelet transform rule to obtain a virtual sample of the first face image under the first illumination condition, which removes the tedium of collecting training samples, saves a great deal of manpower, and generates samples efficiently.
Brief description of the drawings
FIG. 1 is a schematic diagram of an optional hardware architecture of an electronic device according to the present invention;
FIG. 2 is a schematic diagram of the virtual functional modules of the computer program of the virtual sample generation system in the electronic device of the present invention;
FIG. 3 is a schematic flowchart of an implementation of a preferred embodiment of the virtual sample generation method according to the present invention.
The implementation of the objects, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
It should be noted that the descriptions involving "first", "second", and the like in the present invention are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments may be combined with one another, but only on the basis that a person of ordinary skill in the art can realize the combination; when a combination of technical solutions is contradictory or cannot be realized, that combination should be considered not to exist and falls outside the scope of protection claimed by the present invention.
Refer to FIG. 1, which is a schematic diagram of an optional hardware architecture of the electronic device proposed by the present invention. In this embodiment, the electronic device 10 may include, but is not limited to, a memory 101, a processor 102, a network interface 103, and a communication bus 104 that are communicably connected to one another through a system bus. It should be noted that FIG. 1 only shows the electronic device 10 with components 101-104; not all of the illustrated components are required, and more or fewer components may be implemented instead.
The memory 101 includes at least one type of computer readable storage medium, which includes flash memory, hard disks, multimedia cards, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, and the like. In some embodiments, the memory 101 may be an internal storage unit of the electronic device 10, such as a hard disk or internal memory of the electronic device 10. In other embodiments, the memory 101 may also be an external storage device of the electronic device 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device 10. Of course, the memory 101 may also include both an internal storage unit of the electronic device 10 and an external storage device thereof. In this embodiment, the memory 101 is generally used to store the operating system installed on the electronic device 10 and various types of application software, such as the virtual sample generation program. In addition, the memory 101 may also be used to temporarily store various types of data that have been output or are to be output.
In some embodiments, the processor 102 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip. The processor 102 is typically used to control the overall operation of the electronic device 10. In this embodiment, the processor 102 is configured to run program code or process data stored in the memory 101, for example, to run the virtual sample generation program.
The network interface 103 may include a wireless network interface or a wired network interface and is typically used to establish a communication connection between the electronic device 10 and other electronic devices.
The communication bus 104 is used to implement connection and communication between the components 101-103.
The computer program of the virtual sample generation system 200 stored in the memory 101 is executed by the processor 102 to implement the steps of the virtual sample generation method of the various embodiments of the present application.
In the embodiment of the electronic device shown in FIG. 1, when the virtual sample generation program stored in the memory 101 is executed by the processor 102, the following steps are implemented:
A. If a sample of the first face image under a predetermined first illumination condition is required, the first face image is subjected to wavelet transform processing using a predetermined wavelet transform rule to obtain a first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and a second wavelet coefficient for the high-frequency portion of those grayscale values, where the first wavelet coefficient corresponds to the second illumination condition reflected in the first face image and the second wavelet coefficient corresponds to the first face contour details reflected in the first face image;
B. Using the wavelet transform rule, the average face image under the predetermined first illumination condition is subjected to wavelet transform processing to obtain a third wavelet coefficient for the low-frequency portion of the grayscale values of the average face image and a fourth wavelet coefficient for the high-frequency portion of those grayscale values, where the third wavelet coefficient corresponds to the first illumination condition reflected in the average face image and the fourth wavelet coefficient corresponds to the second face contour details reflected in the average face image;
C. The third wavelet coefficient is acquired, and the first wavelet coefficient is replaced with the third wavelet coefficient;
D. The third wavelet coefficient and the second wavelet coefficient are fused using the inverse transform rule of the wavelet transform rule to obtain a virtual face image sample of the first face image under the first illumination condition.
In this embodiment, the solution of the present invention is explained by taking the acquisition of a sample under the first illumination condition as an example. When a sample of the first face image under the first illumination condition is required, the first face image is subjected to wavelet transform processing using the wavelet transform rule. It can be understood that, in face recognition technology, a face image used as a training sample needs to contain the contour of the face; therefore, in order to ensure the usefulness of the obtained samples, before the first face image is subjected to wavelet transform processing using the wavelet transform rule, it is preferable to detect the facial contour contained in the first face image and to confirm that the first face image contains all face feature points of the facial contour.
Each face image is composed of a certain number of pixels, for example, 128×68 pixels, and the pixel at each pixel coordinate corresponds to one grayscale value. Using the wavelet transform rule, the grayscale values of a face image can be separated into a low-frequency part and a high-frequency part according to the image frequency distribution, yielding the wavelet coefficients corresponding to the low-frequency part of the grayscale values and the wavelet coefficients corresponding to the high-frequency part. In this embodiment, the first face image is subjected to wavelet transform processing using a 3-level two-dimensional wavelet transform rule to obtain the first wavelet coefficient for the low-frequency portion of the grayscale values of the first face image and the second wavelet coefficient for the high-frequency portion of those grayscale values.
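As an illustration of this decomposition step, the following is a minimal sketch; it assumes the open-source PyWavelets library and a Haar wavelet basis, neither of which is specified by the patent, and takes a grayscale face image supplied as a NumPy array.

```python
import numpy as np
import pywt


def decompose_face(gray_img: np.ndarray, wavelet: str = "haar", level: int = 3):
    """Split a grayscale face image into low-frequency (approximation) and
    high-frequency (detail) wavelet coefficients with a 3-level 2-D transform."""
    # wavedec2 returns [cA_n, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)]
    coeffs = pywt.wavedec2(gray_img.astype(np.float64), wavelet, level=level)
    low_freq = coeffs[0]       # approximation band: reflects the illumination
    high_freq = coeffs[1:]     # detail bands: reflect the face contour details
    return low_freq, high_freq
```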
After the first face image has been wavelet-transformed using the wavelet transform rule, the same wavelet transform rule is then used to perform wavelet transform processing on the average face image under the first illumination condition, so as to obtain the third wavelet coefficient of the low-frequency portion of the gray values of the average face image and the fourth wavelet coefficient of the high-frequency portion of the gray values of the average face image.
The process of generating the average face image under the first illumination condition includes the following steps:
E. Obtain second face images taken of a preset number of people (for example, 2000 or 500 people) under the first illumination condition, and detect the facial contour contained in each second face image;
F. Crop each second face image using a predetermined cropping rule, so that each cropped second face image contains the detected facial contour and all cropped second face images have the same size specification;
G. Compute, for each identical pixel coordinate, the average of the gray values of the pixels at that coordinate across all cropped second face images, and apply Gaussian filtering to the computed averages to obtain the average face image.
Specifically, the step of computing the average gray value at each identical pixel coordinate across the second face images includes:
scanning all pixels in each second face image;
obtaining the gray value of the pixel at each identical pixel coordinate in each scanned second face image, and computing the average of the obtained gray values, as illustrated in the sketch following these sub-steps.
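A minimal sketch of steps E-G, assuming the cropped second face images have already been loaded as same-sized grayscale NumPy arrays; OpenCV's `GaussianBlur` and the 5×5 kernel are assumptions, since the disclosure does not fix a particular filter or kernel size.

```python
import cv2
import numpy as np


def average_face(cropped_faces: list, blur_ksize: int = 5) -> np.ndarray:
    """Average same-sized grayscale face images pixel by pixel, then Gaussian-filter the result."""
    stack = np.stack([face.astype(np.float64) for face in cropped_faces], axis=0)
    mean_face = stack.mean(axis=0)  # average gray value at each identical pixel coordinate
    smoothed = cv2.GaussianBlur(mean_face, (blur_ksize, blur_ksize), 0)
    return smoothed
```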
It should be noted that the predetermined cropping rule is as follows: identify all face feature points of the second face image to be cropped; determine the minimum bounding box of a preset shape (for example, a rectangle) that contains all the identified face feature points, where the minimum bounding box refers to the bounding box of the preset shape, among those containing all the identified face feature points, that has the smallest area; then determine the geometric center point of the minimum bounding box, and use the determined geometric center point as the geometric center point of a cropping frame of a preset size specification (for example, in this embodiment, the cropping frame may be x pixels by y pixels), thereby determining the size and region position of the cropping frame used to crop the second face image; finally, crop the second face image with the cropping frame whose size and region position have been determined.
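As a rough illustration of this cropping rule, the sketch below assumes the face feature points are already available as an (N, 2) array of (x, y) coordinates and that the preset crop size is 128×128 pixels; both the landmark source and the crop size are illustrative assumptions.

```python
import numpy as np


def crop_by_rule(image: np.ndarray, feature_points: np.ndarray,
                 crop_w: int = 128, crop_h: int = 128) -> np.ndarray:
    """Crop a fixed-size window centered on the geometric center of the minimum bounding box."""
    x_min, y_min = feature_points.min(axis=0)
    x_max, y_max = feature_points.max(axis=0)
    center_x = (x_min + x_max) / 2.0  # geometric center of the minimum bounding box
    center_y = (y_min + y_max) / 2.0

    left = int(round(center_x - crop_w / 2))
    top = int(round(center_y - crop_h / 2))
    left = max(0, min(left, image.shape[1] - crop_w))  # clamp so the window stays inside the image
    top = max(0, min(top, image.shape[0] - crop_h))
    return image[top:top + crop_h, left:left + crop_w]
```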
It should be understood that, to further ensure recognition accuracy, before the first face image is wavelet-transformed using the predetermined wavelet transform rule, the facial contour contained in the first face image needs to be detected, and the first face image is cropped using the predetermined cropping rule described above for the second face images, so that the cropped first face image contains the detected facial contour and the size specification of the cropped first face image is consistent with the size specification of the average face image.
The electronic apparatus proposed in the above embodiment performs wavelet transform processing on the first face image to obtain the first wavelet coefficient of the low-frequency portion and the second wavelet coefficient of the high-frequency portion of its gray values, performs wavelet transform processing on the average face image under the first illumination condition to obtain the third wavelet coefficient of the low-frequency portion and the fourth wavelet coefficient of the high-frequency portion of its gray values, replaces the first wavelet coefficient with the third wavelet coefficient, and then applies the inverse wavelet transform to the third wavelet coefficient together with the second wavelet coefficient to obtain a sample of the first face image under the first illumination condition. In this way, the present invention solves the problem that collecting training samples is tedious, saves a great deal of manpower, and generates samples efficiently.
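Putting the pieces together, steps C and D (coefficient replacement followed by the inverse transform) might look like the following sketch, reusing the hypothetical `decompose_face` helper from the earlier example and PyWavelets' `waverec2` for the inverse 3-level 2-dimensional transform; the `haar` wavelet is again an assumption. Note that the first face image and the average face image are assumed to share the same size specification, as required above, so their coefficient arrays line up.

```python
import pywt


def relight_face(first_face, average_face_img, wavelet="haar", levels=3):
    """Swap in the average face's low-frequency (illumination) coefficients and reconstruct."""
    _, first_high = decompose_face(first_face, wavelet, levels)     # keep the contour detail
    avg_low, _ = decompose_face(average_face_img, wavelet, levels)  # take the target illumination
    fused_coeffs = [avg_low] + list(first_high)
    return pywt.waverec2(fused_coeffs, wavelet)                     # virtual sample under the target lighting
```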
It should further be noted that, as in the embodiment shown in FIG. 2 below, the computer program of the virtual sample generation system may also be divided into different logical parts according to the functions implemented by its respective parts, and the different logical parts may be described as virtual functional modules with different functions.
For example, referring to FIG. 2, it is a schematic diagram of the virtual functional modules of the computer program of the virtual sample generation system in the electronic apparatus of the present invention.
In this embodiment, each virtual functional module of the computer program of the virtual sample generation system is named according to the function implemented by the corresponding part of the computer program. For example, in FIG. 2, the computer program of the virtual sample generation system may be divided into a first processing module 201, a second processing module 202, a replacement module 203, and a virtual sample generation module 204. The functions or operation steps implemented by the modules 201-204 are similar to those described above and are not detailed again here. Exemplarily:
The first processing module 201 is configured to, if a sample of the first face image under a predetermined first illumination condition needs to be obtained, perform wavelet transform processing on the first face image using a predetermined wavelet transform rule to obtain the first wavelet coefficient of the low-frequency portion of the gray values of the first face image and the second wavelet coefficient of the high-frequency portion of the gray values of the first face image, where the first wavelet coefficient corresponds to the second illumination condition reflected in the first face image, and the second wavelet coefficient corresponds to the first facial contour detail reflected in the first face image;
The second processing module 202 is configured to perform, using the wavelet transform rule, wavelet transform processing on the predetermined average face image under the first illumination condition to obtain the third wavelet coefficient of the low-frequency portion of the gray values of the average face image and the fourth wavelet coefficient of the high-frequency portion of the gray values of the average face image, where the third wavelet coefficient corresponds to the first illumination condition reflected in the average face image, and the fourth wavelet coefficient corresponds to the second facial contour detail reflected in the average face image;
The replacement module 203 is configured to obtain the third wavelet coefficient and replace the first wavelet coefficient with the third wavelet coefficient;
The virtual sample generation module 204 is configured to fuse the third wavelet coefficient with the second wavelet coefficient using the inverse transform rule of the wavelet transform rule to obtain a virtual face image sample of the first face image under the first illumination condition.
In addition, the present invention further provides a virtual sample generation method. Referring to FIG. 3, it is an implementation flowchart of a preferred embodiment of the virtual sample generation method of the present invention. The method may be performed by an apparatus, and the apparatus may be implemented by software and/or hardware.
In this embodiment, the virtual sample generation method includes:
Step S301: if a sample of the first face image under a predetermined first illumination condition needs to be obtained, perform wavelet transform processing on the first face image using a predetermined wavelet transform rule to obtain the first wavelet coefficient of the low-frequency portion of the gray values of the first face image and the second wavelet coefficient of the high-frequency portion of the gray values of the first face image, where the first wavelet coefficient corresponds to the second illumination condition reflected in the first face image, and the second wavelet coefficient corresponds to the first facial contour detail reflected in the first face image;
Step S302: using the wavelet transform rule, perform wavelet transform processing on the predetermined average face image under the first illumination condition to obtain the third wavelet coefficient of the low-frequency portion of the gray values of the average face image and the fourth wavelet coefficient of the high-frequency portion of the gray values of the average face image, where the third wavelet coefficient corresponds to the first illumination condition reflected in the average face image, and the fourth wavelet coefficient corresponds to the second facial contour detail reflected in the average face image;
Step S303: obtain the third wavelet coefficient, and replace the first wavelet coefficient with the third wavelet coefficient;
Step S304: fuse the third wavelet coefficient with the second wavelet coefficient using the inverse transform rule of the wavelet transform rule to obtain a virtual face image sample of the first face image under the first illumination condition.
In this embodiment, the solution of the present invention is explained by taking the acquisition of a sample under the first illumination condition as an example. When a sample of the first face image under the first illumination condition is required, the first face image is subjected to wavelet transform processing using the wavelet transform rule. It should be understood that, in face recognition, a face picture used as a training sample needs to contain the contour of the face. Therefore, to ensure that the generated sample is usable, it is preferable, before the first face image is wavelet-transformed using the wavelet transform rule, to detect the facial contour contained in the first face image and, after detecting all the face feature points in that facial contour, to crop the first face image according to the preset cropping rule so as to obtain a first face image containing all the face feature points.
Each face image is composed of a certain number of pixels, for example 128×68 pixels, and each pixel corresponds to one gray value. Using the wavelet transform rule, the gray values of a face image can be separated into a low-frequency portion and a high-frequency portion according to the frequency distribution of the image, yielding the wavelet coefficients corresponding to the low-frequency portion and the wavelet coefficients corresponding to the high-frequency portion of the gray values. In this embodiment, a 3-level 2-dimensional wavelet transform rule is used to perform wavelet transform processing on the first face image, so as to obtain the first wavelet coefficient of the low-frequency portion of the gray values of the first face image and the second wavelet coefficient of the high-frequency portion of the gray values of the first face image.
After the first face image has been wavelet-transformed using the wavelet transform rule, the same wavelet transform rule is then used to perform wavelet transform processing on the average face image under the first illumination condition, so as to obtain the third wavelet coefficient of the low-frequency portion of the gray values of the average face image and the fourth wavelet coefficient of the high-frequency portion of the gray values of the average face image.
The process of generating the average face image under the first illumination condition includes the following steps:
E. Obtain second face images taken of a preset number of people (for example, 2000 or 500 people) under the first illumination condition, and detect the facial contour contained in each second face image;
F. Crop each second face image using a predetermined cropping rule, so that each cropped second face image contains the detected facial contour and all cropped second face images have the same size specification;
G. Compute, for each identical pixel coordinate, the average of the gray values of the pixels at that coordinate across all cropped second face images, and apply Gaussian filtering to the computed averages to obtain the average face image.
Specifically, the step of computing the average gray value at each identical pixel coordinate across the second face images includes:
scanning all pixels in each second face image;
obtaining the gray value of the pixel at each identical pixel coordinate in each scanned second face image, and computing the average of the obtained gray values.
It should be noted that the predetermined cropping rule is as follows: identify all face feature points of the second face image to be cropped; determine the minimum bounding box of a preset shape (for example, a rectangle) that contains all the identified face feature points, where the minimum bounding box refers to the bounding box of the preset shape, among those containing all the identified face feature points, that has the smallest area; then determine the geometric center point of the minimum bounding box, and use the determined geometric center point as the geometric center point of a cropping frame of a preset size specification (for example, in this embodiment, the cropping frame may be x pixels by y pixels), thereby determining the size and region position of the cropping frame used to crop the second face image; finally, crop the second face image with the cropping frame whose size and region position have been determined.
It should be understood that, to further ensure recognition accuracy, before the first face image is wavelet-transformed using the predetermined wavelet transform rule, the facial contour contained in the first face image needs to be detected, and the first face image is cropped using the predetermined cropping rule described above for the second face images, so that the cropped first face image contains the detected facial contour and the size specification of the cropped first face image is consistent with the size specification of the average face image.
The virtual sample generation method proposed in the above embodiment performs wavelet transform processing on the first face image to obtain the first wavelet coefficient of the low-frequency portion and the second wavelet coefficient of the high-frequency portion of its gray values, performs wavelet transform processing on the average face image under the first illumination condition to obtain the third wavelet coefficient of the low-frequency portion and the fourth wavelet coefficient of the high-frequency portion of its gray values, replaces the first wavelet coefficient with the third wavelet coefficient, and then applies the inverse wavelet transform to the third wavelet coefficient together with the second wavelet coefficient to obtain a sample of the first face image under the first illumination condition. In this way, the present invention solves the problem that collecting training samples is tedious, saves a great deal of manpower, and generates samples efficiently.
In addition, an embodiment of the present invention further provides a computer-readable storage medium storing a virtual sample generation program, and the virtual sample generation program is executable by at least one processor to implement the following steps:
A. If a sample of the first face image under a predetermined first illumination condition needs to be obtained, perform wavelet transform processing on the first face image using a predetermined wavelet transform rule to obtain a first wavelet coefficient for the low-frequency portion of the gray values of the first face image and a second wavelet coefficient for the high-frequency portion of the gray values of the first face image, where the first wavelet coefficient corresponds to the second illumination condition reflected in the first face image, and the second wavelet coefficient corresponds to the first facial contour detail reflected in the first face image;
B. Using the wavelet transform rule, perform wavelet transform processing on a predetermined average face image under the first illumination condition to obtain a third wavelet coefficient for the low-frequency portion of the gray values of the average face image and a fourth wavelet coefficient for the high-frequency portion of the gray values of the average face image, where the third wavelet coefficient corresponds to the first illumination condition reflected in the average face image, and the fourth wavelet coefficient corresponds to the second facial contour detail reflected in the average face image;
C. Obtain the third wavelet coefficient, and replace the first wavelet coefficient with the third wavelet coefficient;
D. Using the inverse transform rule of the wavelet transform rule, fuse the third wavelet coefficient with the second wavelet coefficient to obtain a virtual face image sample of the first face image under the first illumination condition.
Further, when executed by the at least one processor, the virtual sample generation program also implements the following steps:
E. Obtain second face images taken of a preset number of people under the first illumination condition, and detect the facial contour contained in each second face image;
F. Crop each second face image using a predetermined cropping rule, so that each cropped second face image contains the detected facial contour and all cropped second face images have the same size specification;
G. Compute, for each identical pixel coordinate, the average of the gray values of the pixels at that coordinate across all cropped second face images, and apply Gaussian filtering to the computed averages to obtain the average face image.
Further, when executed by the at least one processor, the virtual sample generation program also implements the following steps:
detecting the facial contour contained in the first face image;
cropping the first face image using a predetermined cropping rule, so that the cropped first face image contains the detected facial contour and the size specification of the cropped first face image is consistent with the size specification of the average face image.
The specific implementation of the computer-readable storage medium of the present invention is substantially the same as the embodiments of the virtual sample generation method described above and is not repeated here.
It should be understood that parameters which need to be set in advance in the above embodiments, such as the predetermined first illumination condition, the preset number, and the preset size specification, can be set by the user according to actual conditions.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on such an understanding, the part of the technical solution of the present invention that is essential, or that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the patent. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (20)

  1. An electronic apparatus, wherein the electronic apparatus comprises a memory and a processor connected to the memory, the processor is configured to execute a virtual sample generation program stored on the memory, and the virtual sample generation program, when executed by the processor, implements the following steps:
    A. if a sample of a first face image under a predetermined first illumination condition needs to be obtained, performing wavelet transform processing on the first face image using a predetermined wavelet transform rule to obtain a first wavelet coefficient of a low-frequency portion of gray values of the first face image and a second wavelet coefficient of a high-frequency portion of the gray values of the first face image, wherein the first wavelet coefficient corresponds to a second illumination condition reflected in the first face image, and the second wavelet coefficient corresponds to first facial contour detail reflected in the first face image;
    B. performing, using the wavelet transform rule, wavelet transform processing on a predetermined average face image under the first illumination condition to obtain a third wavelet coefficient of a low-frequency portion of gray values of the average face image and a fourth wavelet coefficient of a high-frequency portion of the gray values of the average face image, wherein the third wavelet coefficient corresponds to the first illumination condition reflected in the average face image, and the fourth wavelet coefficient corresponds to second facial contour detail reflected in the average face image;
    C. obtaining the third wavelet coefficient, and replacing the first wavelet coefficient with the third wavelet coefficient;
    D. fusing the third wavelet coefficient with the second wavelet coefficient using an inverse transform rule of the wavelet transform rule to obtain a virtual face image sample of the first face image under the first illumination condition.
  2. The electronic apparatus according to claim 1, wherein in step B, the process of generating the average face image comprises the following steps:
    E. obtaining second face images taken of a preset number of people under the first illumination condition, and detecting the facial contour contained in each of the second face images;
    F. cropping each of the second face images using a predetermined cropping rule, so that each cropped second face image contains the detected facial contour and all cropped second face images have the same size specification;
    G. computing, for each identical pixel coordinate, the average of the gray values of the pixels at that coordinate across all cropped second face images, and applying Gaussian filtering to the computed averages to obtain the average face image.
  3. The electronic apparatus according to claim 2, wherein the predetermined cropping rule is:
    identifying all face feature points of the second face image to be cropped;
    determining a minimum bounding box of a preset shape that contains all the identified face feature points;
    determining a geometric center point of the minimum bounding box, and using the determined geometric center point as the geometric center point of a cropping frame of a preset size specification, thereby determining the size and region position of the cropping frame used to crop the second face image;
    cropping the second face image with the cropping frame whose size and region position have been determined.
  4. The electronic apparatus according to claim 3, wherein before step A, the following steps are further included:
    detecting the facial contour contained in the first face image;
    cropping the first face image using the predetermined cropping rule, so that the cropped first face image contains the detected facial contour and the size specification of the cropped first face image is consistent with the size specification of the average face image.
  5. The electronic apparatus according to claim 1, wherein the wavelet transform rule comprises a 3-level 2-dimensional wavelet transform, and the inverse transform rule of the wavelet transform rule is the inverse of the 3-level 2-dimensional wavelet transform.
  6. The electronic apparatus according to claim 2, wherein the wavelet transform rule comprises a 3-level 2-dimensional wavelet transform, and the inverse transform rule of the wavelet transform rule is the inverse of the 3-level 2-dimensional wavelet transform.
  7. The electronic apparatus according to claim 3, wherein the wavelet transform rule comprises a 3-level 2-dimensional wavelet transform, and the inverse transform rule of the wavelet transform rule is the inverse of the 3-level 2-dimensional wavelet transform.
  8. The electronic apparatus according to claim 4, wherein the wavelet transform rule comprises a 3-level 2-dimensional wavelet transform, and the inverse transform rule of the wavelet transform rule is the inverse of the 3-level 2-dimensional wavelet transform.
  9. A virtual sample generation method, wherein the method comprises the following steps:
    A. if a sample of a first face image under a predetermined first illumination condition needs to be obtained, performing wavelet transform processing on the first face image using a predetermined wavelet transform rule to obtain a first wavelet coefficient of a low-frequency portion of gray values of the first face image and a second wavelet coefficient of a high-frequency portion of the gray values of the first face image, wherein the first wavelet coefficient corresponds to a second illumination condition reflected in the first face image, and the second wavelet coefficient corresponds to first facial contour detail reflected in the first face image;
    B. performing, using the wavelet transform rule, wavelet transform processing on a predetermined average face image under the first illumination condition to obtain a third wavelet coefficient of a low-frequency portion of gray values of the average face image and a fourth wavelet coefficient of a high-frequency portion of the gray values of the average face image, wherein the third wavelet coefficient corresponds to the first illumination condition reflected in the average face image, and the fourth wavelet coefficient corresponds to second facial contour detail reflected in the average face image;
    C. obtaining the third wavelet coefficient, and replacing the first wavelet coefficient with the third wavelet coefficient;
    D. fusing the third wavelet coefficient with the second wavelet coefficient using an inverse transform rule of the wavelet transform rule to obtain a virtual face image sample of the first face image under the first illumination condition.
  10. The virtual sample generation method according to claim 9, wherein in step B, the process of generating the average face image comprises the following steps:
    E. obtaining second face images taken of a preset number of people under the first illumination condition, and detecting the facial contour contained in each of the second face images;
    F. cropping each of the second face images using a predetermined cropping rule, so that each cropped second face image contains the detected facial contour and all cropped second face images have the same size specification;
    G. computing, for each identical pixel coordinate, the average of the gray values of the pixels at that coordinate across all cropped second face images, and applying Gaussian filtering to the computed averages to obtain the average face image.
  11. The virtual sample generation method according to claim 10, wherein the predetermined cropping rule is:
    identifying all face feature points of the second face image to be cropped;
    determining a minimum bounding box of a preset shape that contains all the identified face feature points;
    determining a geometric center point of the minimum bounding box, and using the determined geometric center point as the geometric center point of a cropping frame of a preset size specification, thereby determining the size and region position of the cropping frame used to crop the second face image;
    cropping the second face image with the cropping frame whose size and region position have been determined.
  12. The virtual sample generation method according to claim 11, wherein before step A, the method further comprises the following steps:
    detecting the facial contour contained in the first face image;
    cropping the first face image using the predetermined cropping rule, so that the cropped first face image contains the detected facial contour and the size specification of the cropped first face image is consistent with the size specification of the average face image.
  13. The virtual sample generation method according to claim 9, wherein the wavelet transform rule comprises a 3-level 2-dimensional wavelet transform, and the inverse transform rule of the wavelet transform rule is the inverse of the 3-level 2-dimensional wavelet transform.
  14. The virtual sample generation method according to claim 10, wherein the wavelet transform rule comprises a 3-level 2-dimensional wavelet transform, and the inverse transform rule of the wavelet transform rule is the inverse of the 3-level 2-dimensional wavelet transform.
  15. The virtual sample generation method according to claim 11, wherein the wavelet transform rule comprises a 3-level 2-dimensional wavelet transform, and the inverse transform rule of the wavelet transform rule is the inverse of the 3-level 2-dimensional wavelet transform.
  16. The virtual sample generation method according to claim 12, wherein the wavelet transform rule comprises a 3-level 2-dimensional wavelet transform, and the inverse transform rule of the wavelet transform rule is the inverse of the 3-level 2-dimensional wavelet transform.
  17. A computer-readable storage medium, wherein the computer-readable storage medium stores a virtual sample generation program, and the virtual sample generation program is executable by at least one processor to cause the at least one processor to perform the following steps:
    A. if a sample of a first face image under a predetermined first illumination condition needs to be obtained, performing wavelet transform processing on the first face image using a predetermined wavelet transform rule to obtain a first wavelet coefficient of a low-frequency portion of gray values of the first face image and a second wavelet coefficient of a high-frequency portion of the gray values of the first face image, wherein the first wavelet coefficient corresponds to a second illumination condition reflected in the first face image, and the second wavelet coefficient corresponds to first facial contour detail reflected in the first face image;
    B. performing, using the wavelet transform rule, wavelet transform processing on a predetermined average face image under the first illumination condition to obtain a third wavelet coefficient of a low-frequency portion of gray values of the average face image and a fourth wavelet coefficient of a high-frequency portion of the gray values of the average face image, wherein the third wavelet coefficient corresponds to the first illumination condition reflected in the average face image, and the fourth wavelet coefficient corresponds to second facial contour detail reflected in the average face image;
    C. obtaining the third wavelet coefficient, and replacing the first wavelet coefficient with the third wavelet coefficient;
    D. fusing the third wavelet coefficient with the second wavelet coefficient using an inverse transform rule of the wavelet transform rule to obtain a virtual face image sample of the first face image under the first illumination condition.
  18. The computer-readable storage medium according to claim 17, wherein in step B, the process of generating the average face image comprises the following steps:
    E. obtaining second face images taken of a preset number of people under the first illumination condition, and detecting the facial contour contained in each of the second face images;
    F. cropping each of the second face images using a predetermined cropping rule, so that each cropped second face image contains the detected facial contour and all cropped second face images have the same size specification;
    G. computing, for each identical pixel coordinate, the average of the gray values of the pixels at that coordinate across all cropped second face images, and applying Gaussian filtering to the computed averages to obtain the average face image.
  19. The computer-readable storage medium according to claim 18, wherein the predetermined cropping rule is:
    identifying all face feature points of the second face image to be cropped;
    determining a minimum bounding box of a preset shape that contains all the identified face feature points;
    determining a geometric center point of the minimum bounding box, and using the determined geometric center point as the geometric center point of a cropping frame of a preset size specification, thereby determining the size and region position of the cropping frame used to crop the second face image;
    cropping the second face image with the cropping frame whose size and region position have been determined.
  20. The computer-readable storage medium according to claim 19, wherein before step A, the following steps are further included:
    detecting the facial contour contained in the first face image;
    cropping the first face image using the predetermined cropping rule, so that the cropped first face image contains the detected facial contour and the size specification of the cropped first face image is consistent with the size specification of the average face image.
PCT/CN2017/108775 2017-10-09 2017-10-31 Electronic apparatus, virtual sample generation method and storage medium WO2019071663A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710929640.4A CN107784625B (en) 2017-10-09 2017-10-09 Electronic device, virtual sample generation method and storage medium
CN201710929640.4 2017-10-09

Publications (1)

Publication Number Publication Date
WO2019071663A1 true WO2019071663A1 (en) 2019-04-18

Family

ID=61434160

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/108775 WO2019071663A1 (en) 2017-10-09 2017-10-31 Electronic apparatus, virtual sample generation method and storage medium

Country Status (2)

Country Link
CN (1) CN107784625B (en)
WO (1) WO2019071663A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765265B (en) * 2018-05-21 2022-05-24 北京微播视界科技有限公司 Image processing method, device, terminal equipment and storage medium
CN110838084B (en) * 2019-09-24 2023-10-17 咪咕文化科技有限公司 Method and device for transferring style of image, electronic equipment and storage medium
CN114898410B (en) * 2022-07-14 2022-10-11 安徽云森物联网科技有限公司 Cross-resolution pedestrian re-identification method based on wavelet transformation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060104517A1 (en) * 2004-11-17 2006-05-18 Byoung-Chul Ko Template-based face detection method
CN101430759A (en) * 2008-12-04 2009-05-13 上海大学 Optimized recognition pretreatment method for human face
CN102637302A (en) * 2011-10-24 2012-08-15 北京航空航天大学 Image coding method
CN106022241A (en) * 2016-05-12 2016-10-12 宁波大学 Face recognition method based on wavelet transformation and sparse representation

Also Published As

Publication number Publication date
CN107784625A (en) 2018-03-09
CN107784625B (en) 2019-03-08

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17928230

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.09.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17928230

Country of ref document: EP

Kind code of ref document: A1