CN109657603B - Face detection method and device - Google Patents

Face detection method and device

Info

Publication number
CN109657603B
CN109657603B
Authority
CN
China
Prior art keywords
window, cutting, size, data, cutting data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811541438.5A
Other languages
Chinese (zh)
Other versions
CN109657603A (en)
Inventor
张永胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201811541438.5A
Publication of CN109657603A
Application granted
Publication of CN109657603B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application discloses a face detection method and device. The method includes: acquiring image processing information, where the image processing information includes sliding window information and image cutting information; cutting the image data to be detected according to the image cutting information to obtain N pieces of cut data; and acquiring the N pieces of cut data one by one, performing window scanning on them based on the window indicated by the sliding window information to obtain the detection value of each window-scanning area in the N pieces of cut data, and determining whether a human face exists in each window-scanning area according to its detection value. The embodiment of the application reduces the occupancy of hardware resources, improves their utilization, and offers high flexibility and strong applicability.

Description

Face detection method and device
Technical Field
The present application relates to the field of image processing, and in particular, to a method and an apparatus for face detection.
Background
Vision is the primary means by which humans obtain information. According to relevant research statistics, visual information accounts for more than 70% of all the information humans acquire. Meanwhile, with the rapid improvement of computer hardware performance, research on using computers to assist and simulate human vision has attracted wide attention.
A person tends to be the center of an image, and, given the visual characteristics of the human eye, the facial region of that person generally attracts the most interest. The processing and analysis of human faces includes face recognition, face tracking, pose estimation, expression recognition, and so on; among these, face detection is of particular interest as the key step underlying all face information processing. In the prior art, hardware storage resources are limited, so for face detection algorithms running on large-size images, how to store the data under such constraints has become an urgent problem.
Disclosure of Invention
The embodiment of the application provides a face detection method and device, which can realize face detection on large-size images even when hardware resources are insufficient, and which reduce the occupancy of hardware resources.
In a first aspect, an embodiment of the present application provides a method for detecting a face, where the method includes:
acquiring image processing information, wherein the image processing information comprises sliding window information and image cutting information;
cutting the image data to be detected according to the image cutting information to obtain N pieces of cut data;
acquiring the 1st piece of the N pieces of cut data, performing window scanning from the starting point of the 1st piece based on the window indicated by the sliding window information to obtain the detection value of each window-scanning area in the 1st piece, and determining whether a human face exists in each window-scanning area according to its detection value;
and acquiring the (a+1)-th piece of cut data after the 1st piece, performing window scanning on the (a+1)-th piece from the window-scanning end position of the a-th piece based on the window indicated by the sliding window information to obtain the detection value of each window-scanning area in the (a+1)-th piece, and determining whether a human face exists in each window-scanning area according to its detection value, where a is an integer greater than 0 and less than N.
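As an illustration only, the four steps above can be sketched as the following driver (a minimal Python sketch; the helper names cut_image and scan_piece and the data layout are assumptions, not part of the claimed method, and the bookkeeping for resuming across pieces is refined in the detailed description below):

    def face_detect(image_data, processing_info, preset_value):
        """Minimal sketch of the four claimed steps (names are assumptions)."""
        sliding_info = processing_info["sliding_window"]  # window sizes and step sizes
        cutting_info = processing_info["image_cutting"]   # N groups of cutting parameters

        pieces = cut_image(image_data, cutting_info)      # N pieces of cut data
        faces, stops = scan_piece(pieces[0], sliding_info, preset_value)  # 1st piece
        for piece in pieces[1:]:                          # the (a+1)-th piece resumes from
            more, stops = scan_piece(piece, sliding_info, # the a-th piece's end position
                                     preset_value, resume_from=stops)
            faces.extend(more)
        return faces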
With reference to the first aspect, in a possible implementation manner, the image cutting information includes N groups of cutting parameters, where the i-th group of cutting parameters indicates the cutting manner of the i-th piece of cut data. Cutting the image data to be detected according to the image cutting information to obtain the N pieces of cut data includes:
acquiring the image data to be detected;
and cutting the image data to be detected in the manner indicated by each of the N groups of cutting parameters to obtain the N pieces of cut data.
With reference to the first aspect, in a possible implementation manner, the sliding window information includes window sizes and window step sizes, where the window sizes include multiple sizes, each size corresponds to one window of that size, and each such window corresponds to one window step size;
the method further includes:
acquiring the largest window among the windows corresponding to the multiple sizes and taking it as the first-size window, where the first-size window corresponds to the first window step size;
determining the 1st group of cutting parameters according to the first-size window and the image column width of the image to be detected;
and determining the other N-1 groups of cutting parameters according to the first window step size and the image column width of the image to be detected, so as to obtain the N groups of cutting parameters.
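As a sketch of the arithmetic this implies (an assumption consistent with the worked example in the detailed description, not a formula stated by the application): with image row width H, first (largest) window size S1, and first window step size T1, transverse cutting yields N = 1 + ceil((H - S1) / T1) pieces; for the 720 × 1280 example below, N = 1 + ceil((720 - 300) / 84) = 6.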
With reference to the first aspect, in a possible implementation manner, the sliding window information includes window sizes and window step sizes, where the window sizes include at least a first size and a second size, the first size corresponds to the first-size window, the second size corresponds to the second-size window, the first-size window corresponds to the first window step size, and the second-size window corresponds to the second window step size;
performing window scanning from the starting point of the 1st piece of cut data based on the window indicated by the sliding window information to obtain the detection value of each window-scanning area in the 1st piece includes:
performing window scanning with the first-size window from left to right and from top to bottom, starting from the starting point of the 1st piece of cut data and moving the first window step size each time, to obtain the detection value of each window-scanning area the first-size window passes over in the 1st piece;
performing window scanning with the second-size window from left to right and from top to bottom, starting from the starting point of the 1st piece of cut data and moving the second window step size each time, to obtain the detection value of each window-scanning area the second-size window passes over in the 1st piece;
and taking the detection values of the window-scanning areas the first-size window passes over in the 1st piece together with those the second-size window passes over as the detection values of the window-scanning areas in the 1st piece of cut data.
With reference to the first aspect, in a possible implementation manner, when the first-size window is scanned to any window-scanning area on the a-th piece of cut data and the data inside the first-size window is smaller than the size of the first-size window, a first stop position of the first-size window on the a-th piece is recorded;
when the second-size window is scanned to any window-scanning area on the a-th piece of cut data and the data inside the second-size window is smaller than the size of the second-size window, a second stop position of the second-size window on the a-th piece is recorded;
and the first stop position and the second stop position are determined as the window-scanning end position of the a-th piece of cut data.
With reference to the first aspect, in a possible implementation manner, performing window scanning on the (a+1)-th piece of cut data from the window-scanning end position of the a-th piece based on the window indicated by the sliding window information to obtain the detection value of each window-scanning area in the (a+1)-th piece includes:
performing window scanning on the (a+1)-th piece with the first-size window from left to right and from top to bottom, starting from the first stop position and moving the first window step size each time, to obtain the detection value of each window-scanning area the first-size window passes over in the (a+1)-th piece;
performing window scanning on the (a+1)-th piece with the second-size window from left to right and from top to bottom, starting from the second stop position and moving the second window step size each time, to obtain the detection value of each window-scanning area the second-size window passes over in the (a+1)-th piece;
and taking the detection values of the window-scanning areas the first-size window passes over in the (a+1)-th piece together with those the second-size window passes over as the detection values of the window-scanning areas in the (a+1)-th piece of cut data.
With reference to the first aspect, in a possible implementation manner, determining whether a human face exists in each window-scanning area according to its detection value includes:
comparing the detection value of each window-scanning area with a preset value, and if the detection value of any window-scanning area is greater than or equal to the preset value, determining that a human face exists in that window-scanning area.
In a second aspect, an embodiment of the present application provides a face detection apparatus, where the apparatus includes:
an information acquisition unit, configured to acquire image processing information, where the image processing information includes sliding window information and image cutting information;
a data cutting unit, configured to cut the image data to be detected according to the image cutting information determined by the information acquisition unit to obtain N pieces of cut data;
a cut data processing unit, configured to acquire the 1st piece of the N pieces of cut data determined by the data cutting unit, perform window scanning from the starting point of the 1st piece based on the window indicated by the sliding window information to obtain the detection value of each window-scanning area in the 1st piece, and determine whether a human face exists in each window-scanning area according to its detection value;
the cut data processing unit is further configured to acquire the (a+1)-th piece of cut data after the 1st piece determined by the data cutting unit, perform window scanning on the (a+1)-th piece from the window-scanning end position of the a-th piece based on the window indicated by the sliding window information to obtain the detection value of each window-scanning area in the (a+1)-th piece, and determine whether a human face exists in each window-scanning area according to its detection value, where a is an integer greater than 0 and less than N.
With reference to the second aspect, in a possible implementation manner, the image cutting information includes N groups of cutting parameters, where the i-th group of cutting parameters indicates the cutting manner of the i-th piece of cut data; the data cutting unit is configured to:
acquire the image data to be detected;
and cut the image data to be detected in the manner indicated by each of the N groups of cutting parameters to obtain the N pieces of cut data.
With reference to the second aspect, in a possible implementation manner, the sliding window information includes window sizes and window step sizes, where the window sizes include multiple sizes, each size corresponds to one window of that size, and each such window corresponds to one window step size; the face detection apparatus further includes:
a cutting parameter determining unit, configured to acquire the largest window among the windows corresponding to the multiple sizes and take it as the first-size window, where the first-size window corresponds to the first window step size;
determine the 1st group of cutting parameters according to the first-size window and the image column width of the image to be detected;
and determine the other N-1 groups of cutting parameters according to the first window step size and the image column width of the image to be detected, so as to obtain the N groups of cutting parameters.
With reference to the second aspect, in a possible implementation manner, the sliding window information includes window sizes and window step sizes, where the window sizes include at least a first size and a second size, the first size corresponds to the first-size window, the second size corresponds to the second-size window, the first-size window corresponds to the first window step size, and the second-size window corresponds to the second window step size; the cut data processing unit is configured to:
perform window scanning with the first-size window from left to right and from top to bottom, starting from the starting point of the 1st piece of cut data and moving the first window step size each time, to obtain the detection value of each window-scanning area the first-size window passes over in the 1st piece;
perform window scanning with the second-size window from left to right and from top to bottom, starting from the starting point of the 1st piece of cut data and moving the second window step size each time, to obtain the detection value of each window-scanning area the second-size window passes over in the 1st piece;
and take the detection values of the window-scanning areas the first-size window passes over in the 1st piece together with those the second-size window passes over as the detection values of the window-scanning areas in the 1st piece of cut data.
With reference to the second aspect, in a possible implementation manner, the face detection apparatus further includes:
a stop position determining unit, configured to record a first stop position of the first-size window on the a-th piece of cut data when the first-size window is scanned to any window-scanning area on the a-th piece and the data inside the first-size window is smaller than the size of the first-size window;
record a second stop position of the second-size window on the a-th piece of cut data when the second-size window is scanned to any window-scanning area on the a-th piece and the data inside the second-size window is smaller than the size of the second-size window;
and determine the first stop position and the second stop position as the window-scanning end position of the a-th piece of cut data.
With reference to the second aspect, in a possible implementation manner, the cut data processing unit is further configured to:
perform window scanning on the (a+1)-th piece of cut data with the first-size window from left to right and from top to bottom, starting from the first stop position and moving the first window step size each time, to obtain the detection value of each window-scanning area the first-size window passes over in the (a+1)-th piece;
perform window scanning on the (a+1)-th piece of cut data with the second-size window from left to right and from top to bottom, starting from the second stop position and moving the second window step size each time, to obtain the detection value of each window-scanning area the second-size window passes over in the (a+1)-th piece;
and take the detection values of the window-scanning areas the first-size window passes over in the (a+1)-th piece together with those the second-size window passes over as the detection values of the window-scanning areas in the (a+1)-th piece of cut data.
With reference to the second aspect, in a possible implementation manner, the cut data processing unit is further configured to:
compare the detection value of each window-scanning area with a preset value, and if the detection value of any window-scanning area is greater than or equal to the preset value, determine that a human face exists in that window-scanning area.
In a third aspect, an embodiment of the present application provides a terminal device, where the terminal device includes a processor and a memory connected to each other. The memory is configured to store a computer program that supports the terminal device in executing the method provided by the first aspect and/or any possible implementation manner of the first aspect; the computer program includes program instructions, and the processor is configured to call the program instructions to execute that method.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program, where the computer program includes program instructions that, when executed by a processor, cause the processor to execute the method provided by the first aspect and/or any possible implementation manner of the first aspect.
The embodiment of the application has the following beneficial effects:
by cutting large-size image data, the data volume of each piece of cut data is reduced compared with the whole image data, so less hardware storage is occupied at any one time; the pieces are then processed one after another. Face detection on large-size images can therefore be realized even when hardware resources are insufficient, the occupancy of hardware resources is reduced, their utilization is improved, and the scheme offers high flexibility and strong applicability.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a face detection method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an application scenario for cutting image data to be detected according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an application scenario of a window scanning process provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of an application scenario for recording a window scanning end position according to an embodiment of the present application;
fig. 5 is a schematic view of an application scenario of a face detection method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a face detection apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The face detection method provided by the embodiment of the application can be widely applied to servers and/or terminal devices that perform face detection. The server may be a server providing various services, and the terminal device may be hardware or software. When the terminal device is hardware, it may be any of various electronic devices with a display screen, including but not limited to smartphones, tablet computers, e-book readers, laptop computers, desktop computers, and the like. When the terminal device is software, it can be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module; this is not particularly limited herein. When an image to be detected is acquired and face detection needs to be performed on it, the image data to be detected can be cut according to the image cutting information in the acquired image processing information to obtain several pieces of cut data; each piece is then window-scanned using the sliding window information in the image processing information, with the window-scanning end position recorded, so that the face detection result of each window-scanning area in the pieces of cut data can be obtained. With the embodiment of the application, cutting large-size image data reduces the data volume of each piece compared with the whole image data, so less hardware storage is occupied; the pieces are then processed one after another, so face detection on large-size images can be realized even when hardware resources are insufficient, the occupancy of hardware resources is reduced, their utilization is improved, and the scheme offers high flexibility and strong applicability.
The method and the related apparatus provided by the embodiments of the present application are described in detail below with reference to fig. 1 to 7. The method can comprise several data processing stages, such as acquiring image processing information, cutting the image data to be detected, and window-scanning the cut data based on the sliding window information. The implementation of each stage is described with reference to fig. 1 below.
Referring to fig. 1, fig. 1 is a schematic flowchart of a face detection method according to an embodiment of the present application. The method provided by the embodiment of the application can comprise the following steps 101 to 104:
101. Acquire image processing information, where the image processing information includes sliding window information and image cutting information.
In some possible embodiments, face detection means searching any given image according to a certain strategy to determine whether the image contains a face. Face detection algorithms include, but are not limited to, the PICO (pixel intensity comparison) algorithm, the NPD (normalized pixel difference) algorithm, and the like; the choice depends on the actual application scenario and is not limited here. By training the face detection algorithm, the sliding window information used for face detection can be obtained. When face detection needs to be performed on an image to be detected, the sliding window information obtained through training is acquired, and the cut image data to be detected is scanned with the window indicated by that information. The sliding window information includes window sizes and window step sizes, and the window sizes include multiple sizes. It will be appreciated that each size specifies one window of that size, and each such window corresponds to one window step size. For convenience of description, two of the sizes (assumed to be a first size and a second size) are taken as an example.
For example, assume that the window sizes in the obtained sliding window information include at least a first size and a second size. The window corresponding to the first size is the first-size window, the window corresponding to the second size is the second-size window, the first-size window corresponds to the first window step size, and the second-size window corresponds to the second window step size.
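For illustration, sliding window information of this shape might be laid out as follows (a sketch; the field names are assumptions, and the values echo the 720p example later in the description):

    # Hypothetical layout of sliding window information: each entry pairs one
    # window size with its window step size (values from the example below).
    sliding_window_info = [
        {"size": 300, "step": 84},   # first-size window, first window step size
        {"size": 180, "step": 60},   # second-size window, second window step size
    ]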
In some possible embodiments, if hardware storage resources were large enough, the face detection algorithm parameters and the image data to be detected could all be stored at once; here, the image to be detected is composed of the image data to be detected. In practical application scenarios, however, hardware storage resources are limited, and it is difficult to store all parameters and data at one time. The embodiment of the application therefore divides the image data to be detected into several pieces of cut data and processes the pieces one by one, which both achieves face detection on the image and works around the shortage of storage resources. Here, the image cutting information indicates the manner of cutting the image data to be detected; a cutting manner consists of the row width and column width of the cut data and the cutting direction, where the cutting direction is either transverse or longitudinal. The cutting directions in the embodiments of the present application are all transverse. Assuming the image cutting information includes N groups of cutting parameters, the i-th group indicates the cutting manner of the i-th piece of cut data. In other words, the i-th group of cutting parameters specifies the row width, the column width, and the cutting direction used when the i-th piece is cut from the image data to be detected, i.e., it determines the cutting start point and end point of the i-th piece on the image data to be detected. N is a positive integer whose size depends on the actual application scenario and is not limited here.
102. Cut the image data to be detected according to the image cutting information to obtain N pieces of cut data.
In some possible embodiments, the image data to be detected may be expressed as image row width × image column width. The largest of the windows corresponding to the multiple sizes is acquired and taken as the first-size window, which corresponds to the first window step size. According to the first-size window and the image column width of the image data to be detected, the row width, column width, and cutting direction indicated by the 1st group of cutting parameters can be determined, i.e., the cutting start point and end point of the 1st piece of cut data on the image data to be detected. According to the first window step size and the image column width of the image data to be detected, the row width, column width, and cutting direction indicated by each of the other N-1 groups of cutting parameters can be determined, i.e., the cutting start points and end points of the 2nd to N-th pieces of cut data on the image data to be detected.
For example, assume the image to be detected is a 720p image, i.e., its image row width is 720 and its image column width is 1280, so the size of the image data to be detected can be expressed as 720 × 1280. The window sizes in the obtained sliding window information include a first size and a second size, and the window step sizes include a first window step size and a second window step size, where the first-size window is larger than the second-size window. If the first-size window is 300 × 300, the second-size window is 180 × 180, the first window step size is 84, and the second window step size is 60, then the row and column width of the cut data indicated by the 1st group of cutting parameters in the image cutting information can be expressed as first size × image column width, i.e., 300 × 1280, and the row and column width of the cut data indicated by each of the other N-1 groups can be expressed as first window step size × image column width, i.e., 84 × 1280, where the cutting direction is transverse.
Optionally, in some possible embodiments, the N groups of cutting parameters in the image cutting information may differ from one another. If the cutting direction is transverse, the 1st piece of cut data obtained in the manner indicated by the 1st group of cutting parameters must be at least first size × image column width of the image data to be detected.
For example, referring to fig. 2, fig. 2 is a schematic view of an application scenario for cutting the image data to be detected according to an embodiment of the present application. Assume the acquired image to be detected is 720p, i.e., its image row width is 720 and its image column width is 1280, so the size of the image data to be detected can be expressed as image row width × image column width, i.e., 720 × 1280. Assume 6 groups of cutting parameters are obtained. The cutting direction indicated by the 1st group is transverse and the row and column width of the cut data is 300 × 1280, so cutting the image data to be detected according to the 1st group yields the 1st piece of cut data: rows 0 to 299 of the image data to be detected. The cutting direction indicated by groups 2 to 6 is transverse and the row and column width of each piece is 84 × 1280. Cutting according to the 2nd group yields the 2nd piece: rows 300 to 383. Cutting according to the 3rd group yields the 3rd piece: rows 384 to 467. Cutting according to the 4th group yields the 4th piece: rows 468 to 551. Cutting according to the 5th group yields the 5th piece: rows 552 to 635. Cutting according to the 6th group yields the 6th piece: rows 636 to 719.
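The cut of fig. 2 can be reproduced with a short sketch (assumptions: transverse cutting only, and the helper name derive_cut_params is illustrative):

    def derive_cut_params(image_rows, image_cols, windows):
        """Derive N groups of cutting parameters for transverse (row-wise) cutting.

        The 1st piece is as tall as the largest window; each later piece is one
        step of that window tall, so every window position falls inside a piece.
        """
        largest = max(windows, key=lambda w: w["size"])   # the first-size window
        params = [{"rows": (0, largest["size"]), "cols": (0, image_cols)}]
        start = largest["size"]
        while start < image_rows:
            end = min(start + largest["step"], image_rows)
            params.append({"rows": (start, end), "cols": (0, image_cols)})
            start = end
        return params

    # A 720 x 1280 image with 300x300/step-84 and 180x180/step-60 windows
    # yields 6 pieces: rows [0,300), [300,384), [384,468), [468,552),
    # [552,636), [636,720), matching fig. 2.
    windows = [{"size": 300, "step": 84}, {"size": 180, "step": 60}]
    print(derive_cut_params(720, 1280, windows))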
103. Acquire the 1st piece of the N pieces of cut data, perform window scanning from the starting point of the 1st piece based on the window indicated by the sliding window information to obtain the detection value of each window-scanning area in the 1st piece, and determine whether a human face exists in each window-scanning area according to its detection value.
In some possible embodiments, because hardware storage resources are limited, it is difficult to store the whole image to be detected and then perform face detection on it. By cutting the image data to be detected into N pieces and processing each piece in turn, the embodiment of the application effectively works around the shortage of hardware storage resources.
In some possible embodiments, the 1st piece of the N pieces of cut data is acquired, and window scanning is performed from its starting point with the window indicated by the acquired sliding window information to obtain the detection value of each window-scanning area in the 1st piece. Specifically, the 1st piece of cut data is acquired, and window scanning is performed with the first-size window from left to right and from top to bottom, starting from the starting point of the 1st piece and moving the first window step size each time, yielding the detection value of each window-scanning area the first-size window passes over in the 1st piece. Window scanning is likewise performed with the second-size window from left to right and from top to bottom, starting from the starting point of the 1st piece and moving the second window step size each time, yielding the detection value of each window-scanning area the second-size window passes over in the 1st piece. By comparing the detection value of each window-scanning area with a preset value, whether a human face exists in each area can be determined: if the detection value of any window-scanning area is greater than or equal to the preset value, a human face is determined to exist in that area.
For example, referring to fig. 3, fig. 3 is a schematic view of an application scenario of a window scanning process provided in an embodiment of the present application. In some application scenarios, the 1st piece of cut data, i.e., rows 0 to 299, is acquired. Assume the X-th size window is M × M and the X-th window step size is M'. Starting from the starting point (row 0) of the 1st piece, the X-th size window scans the 1st piece from left to right, moving one X-th window step size (i.e., M') at a time, and each window-scanning area it passes over yields one detection value. When the X-th size window has scanned from left to right up to the last column (column 1279) of the 1st piece, it moves down by one X-th window step size M' and continues scanning the 1st piece from left to right, starting from row M' and still moving M' at a time. During this window scanning, the detection values of all window-scanning areas the X-th size window passes over are obtained. The detection value of each window-scanning area is compared with the preset value, and if the detection value of any window-scanning area is greater than or equal to the preset value, a human face is determined to exist in that area.
Optionally, in some possible embodiments, every time the window moves down by one X-th window step size (i.e., by M') while scanning the a-th piece of cut data, it must be checked whether the data from the X-th size window's current position to the end of the a-th piece still fills a full X-th size window. If the data inside the X-th size window during window scanning is smaller than the size of the X-th size window, the window-scanning end position of the X-th size window on the a-th piece is recorded as the X-th stop position and/or an X-th label is marked at that position on the a-th piece.
For example, referring to fig. 4, fig. 4 is a schematic view of an application scenario for recording a window-scanning end position according to an embodiment of the present application. In some application scenarios, for the a-th piece of cut data, assume the Y-th size window is W × W and the Y-th window step size is W'. While the a-th piece is being scanned from left to right and from top to bottom, once the Y-th size window moves down by the Y-th window step size W' and the data inside it (the shaded portion) is smaller than W × W, the window-scanning end position of the Y-th size window on the a-th piece is recorded as the Y-th stop position.
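Combining the scan of fig. 3 with the stop-position recording of fig. 4, one piece of cut data might be processed as sketched below (assumptions: the piece is a 2D array of the rows currently in memory, detect() stands in for the trained detector, and all names are illustrative):

    def detect(region):
        """Stand-in for the trained face detector; returns a detection value."""
        return 0.0  # placeholder

    def scan_piece(piece, windows, preset_value, resume_from=None):
        """Scan one piece with every size window, left to right and top to bottom,
        recording where each window stops because a full window no longer fits."""
        rows, cols = len(piece), len(piece[0])
        faces, stops = [], {}
        for i, w in enumerate(windows):
            size, step = w["size"], w["step"]
            top = 0 if resume_from is None else resume_from[i]
            while top + size <= rows:            # a full window still fits vertically
                for left in range(0, cols - size + 1, step):
                    region = [row[left:left + size] for row in piece[top:top + size]]
                    if detect(region) >= preset_value:   # detection value vs preset value
                        faces.append((top, left, size))  # a face in this scanning area
                top += step
            stops[i] = top                       # window-scanning end (stop) position
        return faces, stops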
104. Acquire the (a+1)-th piece of cut data after the 1st piece, perform window scanning on the (a+1)-th piece from the window-scanning end position of the a-th piece based on the window indicated by the sliding window information to obtain the detection value of each window-scanning area in the (a+1)-th piece, and determine whether a human face exists in each window-scanning area according to its detection value.
In some possible embodiments, the (a+1)-th piece of cut data after the 1st piece is acquired, where a is an integer greater than 0 and less than N. It can be understood that the 2nd piece is acquired once every size window has had its window-scanning end position recorded on the 1st piece. When hardware resources are limited, the acquired 2nd piece overwrites the data of the 1st piece that every size window has already scanned; in other words, the portion of the 1st piece that each size window has passed over once is deleted. Window scanning is then performed on the 2nd piece from the window-scanning end position of the 1st piece with the window indicated by the sliding window information, yielding the detection value of each window-scanning area in the 2nd piece. Specifically, the window-scanning end position of the first-size window on the 1st piece is the first stop position, and that of the second-size window is the second stop position. The 2nd piece is acquired, and window scanning is performed with the first-size window from left to right and from top to bottom, starting from the first stop position and moving the first window step size each time, yielding the detection value of each window-scanning area the first-size window passes over in the 2nd piece. Window scanning is likewise performed with the second-size window from left to right and from top to bottom, starting from the second stop position and moving the second window step size each time, yielding the detection value of each window-scanning area the second-size window passes over in the 2nd piece. By comparing the detection value of each window-scanning area with the preset value, whether a human face exists in each area can be determined: if the detection value of any window-scanning area is greater than or equal to the preset value, a human face is determined to exist in that area. Meanwhile, during window scanning, the window-scanning end position of each size window on the 2nd piece is recorded, in the manner described in step 103, which is not repeated here. The 3rd piece of cut data is then acquired and processed, and so on, until every one of the N pieces cut from the image data to be detected has been window-scanned by every size window, at which point window scanning ends.
For example, referring to fig. 5, fig. 5 is a schematic view of an application scenario of the face detection method provided in the embodiment of the present application. Assume the acquired image to be detected is 720p, i.e., its image row width is 720 and its image column width is 1280, so the size of the image data to be detected can be expressed as image row width × image column width, i.e., 720 × 1280. The first-size window is 300 × 300 with a first window step size of 84, and the second-size window is 180 × 180 with a second window step size of 60. The image data to be detected is cut into 6 pieces; the 1st and 2nd pieces are taken as an example. The 1st piece is rows 0 to 299 of the image data to be detected, and the 2nd piece is rows 300 to 383. The 1st piece (rows 0-299) is acquired, and the first-size window (300 × 300) scans it from left to right and from top to bottom, starting from its starting point (row 0) and moving one first window step size of 84 each time; each window-scanning area the first-size window passes over yields a detection value, and each obtained detection value d is compared with the preset value e. If d is greater than or equal to e, a human face is determined to exist in the window-scanning area corresponding to d. When the first-size window moves down the 1st piece by one first window step size of 84, the data inside it (the shaded portion in the figure) is smaller than 300 × 300, so the window-scanning end position of the first-size window on the 1st piece is recorded as the first stop position (row 84). The second-size window (180 × 180) likewise scans the 1st piece from left to right and from top to bottom, moving one second window step size of 60 each time; each window-scanning area it passes over yields a detection value, and each obtained detection value c is compared with the preset value e. If c is smaller than e, no human face is determined to exist in the window-scanning area corresponding to c. When the second-size window moves down the 1st piece by one second window step size of 60 for the third time, the data inside it (the shaded portion in the figure) is smaller than 180 × 180, so the window-scanning end position of the second-size window on the 1st piece is recorded as the second stop position (row 180).
The 2nd piece of cut data (rows 300-383) is acquired, the data of the 1st piece already scanned by both the first-size window (300 × 300) and the second-size window (180 × 180) (rows 0-83) is deleted, and window scanning continues from left to right and from top to bottom from the first stop position (row 84) and the second stop position (row 180), until every one of the 6 pieces of cut data of the image to be detected has been window-scanned; the scanning process is as described above and is not repeated here.
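Under the same assumptions, the rolling-buffer bookkeeping of step 104 and fig. 5 might look like this (reusing the scan_piece sketch above; numpy is used only for row concatenation and is an implementation choice, not something required by the application):

    import numpy as np

    def process_stream(pieces, windows, preset_value):
        """Keep only a small rolling buffer: after a piece is scanned, the rows
        every window has finished with are deleted, the next piece is appended,
        and each window resumes from its recorded stop position."""
        buffer = np.asarray(pieces[0])                   # the 1st piece of cut data
        faces, stops = scan_piece(buffer, windows, preset_value)
        for piece in pieces[1:]:
            consumed = min(stops.values())               # rows scanned by every window
            buffer = np.vstack([buffer[consumed:], piece])   # delete them, append next
            resume = {i: s - consumed for i, s in stops.items()}
            more, stops = scan_piece(buffer, windows, preset_value, resume_from=resume)
            faces.extend(more)
        # (face coordinates are left relative to the current buffer for brevity)
        return faces

For the fig. 5 numbers, the first pass deletes rows 0-83 (the rows both windows have finished with), so the first-size window resumes at original row 84 and the second-size window at original row 180, as described above.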
In the embodiment of the application, when an image to be detected is acquired and face detection needs to be performed on it, the image data to be detected is cut in the manner indicated by the N groups of cutting parameters in the acquired image cutting information, yielding N pieces of cut data. One of the N pieces is acquired at a time; each size window scans it from left to right and from top to bottom, moving one window step size at a time, with the window-scanning end position recorded, so the detection value of each window-scanning area every size window passes over in every piece is obtained, and comparing these detection values with the preset value gives the face detection result of each window-scanning area. With the embodiment of the application, cutting large-size image data reduces the data volume of each piece compared with the whole image data, so less hardware storage is occupied; the pieces are then processed one after another, so face detection on large-size images can be realized even when hardware resources are insufficient, the occupancy of hardware resources is reduced, their utilization is improved, and the scheme offers high flexibility and strong applicability.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a face detection apparatus according to an embodiment of the present application. The face detection apparatus provided by the embodiment of the application includes:
an information acquisition unit 31, configured to acquire image processing information, where the image processing information includes sliding window information and image cutting information;
a data cutting unit 32, configured to cut the image data to be detected according to the image cutting information determined by the information acquisition unit 31 to obtain N pieces of cut data;
a cut data processing unit 33, configured to acquire the 1st piece of the N pieces of cut data determined by the data cutting unit 32, perform window scanning from the starting point of the 1st piece based on the window indicated by the sliding window information to obtain the detection value of each window-scanning area in the 1st piece, and determine whether a human face exists in each window-scanning area according to its detection value;
the cut data processing unit 33 is further configured to acquire the (a+1)-th piece of cut data after the 1st piece determined by the data cutting unit 32, perform window scanning on the (a+1)-th piece from the window-scanning end position of the a-th piece based on the window indicated by the sliding window information to obtain the detection value of each window-scanning area in the (a+1)-th piece, and determine whether a human face exists in each window-scanning area according to its detection value, where a is an integer greater than 0 and less than N.
In some possible embodiments, the image cutting information includes N groups of cutting parameters, where the i-th group of cutting parameters indicates the cutting manner of the i-th piece of cut data; the data cutting unit 32 is configured to:
acquire the image data to be detected;
and cut the image data to be detected in the manner indicated by each of the N groups of cutting parameters to obtain the N pieces of cut data.
In some possible embodiments, the sliding window information includes window sizes and window step sizes, where the window sizes include multiple sizes, each size corresponds to one window of that size, and each such window corresponds to one window step size; the face detection apparatus further includes:
a cutting parameter determining unit 34, configured to acquire the largest window among the windows corresponding to the multiple sizes and take it as the first-size window, where the first-size window corresponds to the first window step size;
determine the 1st group of cutting parameters according to the first-size window and the image column width of the image to be detected;
and determine the other N-1 groups of cutting parameters according to the first window step size and the image column width of the image to be detected, so as to obtain the N groups of cutting parameters.
In some possible embodiments, the sliding window information includes window sizes and window step sizes, where the window sizes include at least a first size and a second size, the first size corresponds to the first-size window, the second size corresponds to the second-size window, the first-size window corresponds to the first window step size, and the second-size window corresponds to the second window step size; the cut data processing unit 33 is configured to:
perform window scanning with the first-size window from left to right and from top to bottom, starting from the starting point of the 1st piece of cut data and moving the first window step size each time, to obtain the detection value of each window-scanning area the first-size window passes over in the 1st piece;
perform window scanning with the second-size window from left to right and from top to bottom, starting from the starting point of the 1st piece of cut data and moving the second window step size each time, to obtain the detection value of each window-scanning area the second-size window passes over in the 1st piece;
and take the detection values of the window-scanning areas the first-size window passes over in the 1st piece together with those the second-size window passes over as the detection values of the window-scanning areas in the 1st piece of cut data.
In some possible embodiments, the face detection apparatus further includes:
a stop position determining unit 35, configured to record a first stop position of the first-size window on the a-th piece of cut data when the first-size window is scanned to any window-scanning area on the a-th piece and the data inside the first-size window is smaller than the size of the first-size window;
record a second stop position of the second-size window on the a-th piece of cut data when the second-size window is scanned to any window-scanning area on the a-th piece and the data inside the second-size window is smaller than the size of the second-size window;
and determine the first stop position and the second stop position as the window-scanning end position of the a-th piece of cut data.
In some possible embodiments, the cut data processing unit 33 is further configured to:
perform window scanning on the (a+1)-th piece of cut data with the first-size window from left to right and from top to bottom, starting from the first stop position and moving the first window step size each time, to obtain the detection value of each window-scanning area the first-size window passes over in the (a+1)-th piece;
perform window scanning on the (a+1)-th piece of cut data with the second-size window from left to right and from top to bottom, starting from the second stop position and moving the second window step size each time, to obtain the detection value of each window-scanning area the second-size window passes over in the (a+1)-th piece;
and take the detection values of the window-scanning areas the first-size window passes over in the (a+1)-th piece together with those the second-size window passes over as the detection values of the window-scanning areas in the (a+1)-th piece of cut data.
In some possible embodiments, the cut data processing unit 33 is further configured to:
compare the detection value of each window-scanning area with a preset value, and if the detection value of any window-scanning area is greater than or equal to the preset value, determine that a human face exists in that window-scanning area.
In a specific implementation, the face detection apparatus may execute the implementations provided in the steps of fig. 1 through its built-in functional modules. For example, the information acquisition unit 31 may execute the implementations of acquiring the image processing information in the steps above; the data cutting unit 32 may execute the implementations described for cutting the image data to be detected; the cut data processing unit 33 may execute the implementations of window-scanning each piece of cut data based on each size window and its window step size and determining whether a human face exists; the cutting parameter determining unit 34 may execute the implementations of determining the N groups of cutting parameters; and the stop position determining unit 35 may execute the implementations of recording the window-scanning end position of each size window on each piece of cut data. For details, refer to the implementations provided in the steps above, which are not repeated here.
In this embodiment of the application, the face detection apparatus cuts the image data to be detected according to the N groups of cutting parameters in the image cutting information to obtain N pieces of cutting data. It then acquires one of the N pieces at a time, scans it with each size window from left to right and from top to bottom, moving the window by one window step each time, records the window scanning end position, obtains the detection value of each window scanning area that each size window passes over, and compares each detection value with a preset value to obtain the face detection result of each window scanning area. Because each piece of cutting data is smaller than the whole image, the hardware resources occupied at any one time are reduced, and by processing the pieces one after another, face detection on a large-size image can be performed even when hardware resources are limited. This lowers the occupancy rate of hardware resources, improves their utilization, and yields high flexibility and a wide application range.
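The following Python sketch ties the whole flow together under simplifying assumptions: pieces are horizontal strips, a single square window size is used, and `window_score` is a placeholder for whatever classifier produces the detection value (the patent does not specify one).

```python
import numpy as np

def window_score(patch):
    # Placeholder detection value; a real system would run a face classifier.
    return float(patch.mean())

def detect_faces(image, window, step, n_pieces, preset):
    """Cut `image` into n_pieces horizontal strips, slide a `window`-sized
    square over each strip, and report areas whose value reaches `preset`."""
    rows = image.shape[0]
    piece_rows = -(-rows // n_pieces)  # ceiling division
    hits = []
    for p in range(n_pieces):
        top = p * piece_rows
        # Extend each strip by window - step rows so a face straddling a
        # cut boundary is still covered by some window.
        bottom = min(rows, top + piece_rows + window - step)
        piece = image[top:bottom]
        for y in range(0, piece.shape[0] - window + 1, step):
            for x in range(0, piece.shape[1] - window + 1, step):
                if window_score(piece[y:y + window, x:x + window]) >= preset:
                    hits.append((top + y, x))
    return hits

hits = detect_faces(np.random.rand(480, 640), window=64, step=16,
                    n_pieces=4, preset=0.75)
```

Note that this simplified sketch may rescan some boundary windows twice; the patent instead records stopping positions on each piece and resumes from them, as sketched in the stop-position embodiments below.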
Referring to fig. 7, fig. 7 is a schematic structural diagram of a terminal device provided in an embodiment of the present application. As shown in fig. 7, the terminal device in this embodiment may include one or more processors 401 and a memory 402, connected by a bus 403. The memory 402 is used to store a computer program comprising program instructions, and the processor 401 is used to execute the program instructions stored in the memory 402 to perform the following operations:
acquiring image processing information, wherein the image processing information comprises sliding window information and image cutting information;
cutting the image data to be detected according to the image cutting information to obtain N pieces of cutting data;
acquiring the 1st piece of the N pieces of cutting data, performing window scanning from the starting point of the 1st piece based on the window indicated by the sliding window information to obtain the detection value of each window scanning area in the 1st piece, and determining whether a human face exists in each window scanning area according to its detection value;
and acquiring each (a+1)-th piece of cutting data after the 1st piece, performing window scanning on the (a+1)-th piece from the window scanning end position of the a-th piece based on the window indicated by the sliding window information to obtain the detection value of each window scanning area in the (a+1)-th piece, and determining whether a human face exists in each window scanning area according to its detection value, where a is an integer greater than 0 and smaller than N.
In some possible embodiments, the image cutting information includes N groups of cutting parameters, where the i-th group of cutting parameters indicates the cutting mode of the i-th piece of cutting data; the processor 401 is configured to:
acquire the image data to be detected;
and cut the image data to be detected according to the cutting mode indicated by each of the N groups of cutting parameters to obtain the N pieces of cutting data.
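As an illustration of this step, the sketch below encodes each group of cutting parameters as a (start_row, end_row) pair selecting a horizontal strip. The patent leaves the exact parameter encoding open, so this pair form, and the helper name `cut_image`, are assumptions.

```python
import numpy as np

def cut_image(image, cutting_params):
    """Return one piece of cutting data per parameter group; each group is
    assumed to be a (start_row, end_row) pair selecting a horizontal strip."""
    return [image[start:end] for (start, end) in cutting_params]

image = np.zeros((480, 640), dtype=np.uint8)
params = [(0, 168), (120, 288), (240, 408), (360, 480)]  # overlapping strips
pieces = cut_image(image, params)  # N = 4 pieces of cutting data
```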
In some possible embodiments, the sliding window information includes window sizes and window steps, the window sizes include a plurality of sizes, each size corresponds to one size window, and each size window corresponds to one window step; the processor 401 is configured to:
acquire the maximum size window among the size windows corresponding to the plurality of sizes and take it as the first-size window, the first-size window corresponding to a first window step;
determine the 1st group of cutting parameters according to the first-size window and the image column width of the image to be detected;
and determine the other N-1 groups of cutting parameters according to the first window step and the image column width of the image to be detected, so as to obtain the N groups of cutting parameters.
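One possible reading of this derivation, sketched in Python: the first piece must hold at least one full maximum-size window, and each later piece starts early enough that windows straddling the previous cut are still covered. The per-piece row budget `piece_rows` stands in for whatever the hardware buffer allows; all names here are illustrative, and the row-wise form is an assumption.

```python
def cutting_params(image_rows, max_window, max_step, piece_rows):
    """Derive the (start_row, end_row) groups from the maximum window size
    and its step, overlapping consecutive pieces by max_window - max_step."""
    assert piece_rows >= max_window, "a piece must fit one full window"
    params, start = [], 0
    while start < image_rows:
        end = min(image_rows, start + piece_rows)
        params.append((start, end))
        if end == image_rows:
            break
        start = end - (max_window - max_step)  # overlap into the next piece
    return params

print(cutting_params(480, 64, 16, 168))
# [(0, 168), (120, 288), (240, 408), (360, 480)]
```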
In some possible embodiments, the sliding window information includes window sizes and window steps, the window sizes including at least a first size and a second size, the first size corresponding to a first-size window, the second size corresponding to a second-size window, the first-size window corresponding to a first window step, and the second-size window corresponding to a second window step; the processor 401 is configured to:
perform window scanning with the first-size window from the starting point of the 1st piece of cutting data, from left to right and from top to bottom, moving the window by one first window step each time, to obtain the detection value of each window scanning area the first-size window passes over in the 1st piece;
perform window scanning with the second-size window from the starting point of the 1st piece of cutting data, from left to right and from top to bottom, moving the window by one second window step each time, to obtain the detection value of each window scanning area the second-size window passes over in the 1st piece;
and determine the detection values of the window scanning areas passed over by the first-size window and by the second-size window in the 1st piece as the detection values of the window scanning areas in the 1st piece of cutting data.
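A sketch of this two-size scan follows, with `window_score` again standing in for the unspecified classifier and the (size, y, x) keying an assumption about how window scanning areas are identified.

```python
import numpy as np

def window_score(patch):
    # Placeholder detection value; stands in for the real face classifier.
    return float(patch.mean())

def scan_piece(piece, sizes_and_steps):
    """Scan `piece` left to right, top to bottom with each (size, step) pair
    and return {(size, y, x): detection value} for every area passed over."""
    values = {}
    for size, step in sizes_and_steps:
        for y in range(0, piece.shape[0] - size + 1, step):
            for x in range(0, piece.shape[1] - size + 1, step):
                values[(size, y, x)] = window_score(piece[y:y + size,
                                                          x:x + size])
    return values

piece = np.random.rand(168, 640)
values = scan_piece(piece, [(64, 16), (32, 8)])  # first- and second-size
```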
In some possible embodiments, the processor 401 is configured to:
when the first-size window scans to any window scanning area on the a-th piece of cutting data and the data within the first-size window is smaller than the first size, record, as a first stopping position, the position on the a-th piece of cutting data at which the first-size window scanned that window scanning area;
when the second-size window scans to any window scanning area on the a-th piece of cutting data and the data within the second-size window is smaller than the second size, record, as a second stopping position, the position on the a-th piece of cutting data at which the second-size window scanned that window scanning area;
and determine the first stopping position and the second stopping position as the window scanning end position of the a-th piece of cutting data.
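Reading "the data in the window is smaller than the window size" as the window running past the bottom of the piece, the stopping row for each window size can be sketched as below; the function name and the row-based interpretation are assumptions of this sketch.

```python
def scan_end_position(piece_rows, size, step):
    """First scan row at which fewer than `size` rows of data remain in the
    piece, i.e. where a window of this size no longer fits; this row is
    recorded as the stopping position for this window size."""
    y = 0
    while y + size <= piece_rows:
        y += step
    return y

first_stop = scan_end_position(168, 64, 16)   # first-size window -> 120
second_stop = scan_end_position(168, 32, 8)   # second-size window -> 144
```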
In some possible embodiments, the processor 401 is configured to:
perform window scanning on the (a+1)-th piece of cutting data with the first-size window, from the first stopping position, from left to right and from top to bottom, moving the window by one first window step each time, to obtain the detection value of each window scanning area the first-size window passes over in the (a+1)-th piece;
perform window scanning on the (a+1)-th piece of cutting data with the second-size window, from the second stopping position, from left to right and from top to bottom, moving the window by one second window step each time, to obtain the detection value of each window scanning area the second-size window passes over in the (a+1)-th piece;
and determine the detection values of the window scanning areas passed over by the first-size window and by the second-size window in the (a+1)-th piece as the detection values of the window scanning areas in the (a+1)-th piece of cutting data.
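Resuming can then be sketched as scanning the next piece from each size's carried-over stopping position rather than from the top, so windows straddling a cut are scanned exactly once. How the stopping position maps into the next piece's coordinates is not pinned down by the text, so the start rows below are illustrative.

```python
import numpy as np

def window_score(patch):
    # Placeholder detection value, as in the earlier sketches.
    return float(patch.mean())

def scan_from(piece, size, step, start_row):
    """Scan `piece` left to right, top to bottom, beginning at `start_row`,
    the stopping position carried over from the previous piece."""
    values = {}
    for y in range(start_row, piece.shape[0] - size + 1, step):
        for x in range(0, piece.shape[1] - size + 1, step):
            values[(size, y, x)] = window_score(piece[y:y + size,
                                                      x:x + size])
    return values

next_piece = np.random.rand(168, 640)
first_start, second_start = 8, 0   # illustrative carried-over positions
values = {**scan_from(next_piece, 64, 16, first_start),
          **scan_from(next_piece, 32, 8, second_start)}
```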
In some possible embodiments, the processor 401 is configured to:
compare the detection value of each window scanning area with a preset value, and if the detection value of any window scanning area is greater than or equal to the preset value, determine that a human face exists in that window scanning area.
It should be understood that in some possible embodiments, the processor 401 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The memory 402 may include read-only memory and random access memory, and provides instructions and data to the processor 401. A portion of the memory 402 may also include non-volatile random access memory; for example, the memory 402 may also store device type information.
In a specific implementation, the terminal device may execute the implementation manners provided in the steps in fig. 1 through its built-in functional modules; refer to the implementation manners provided in those steps, which are not repeated here.
In this embodiment of the application, the terminal device cuts the image data to be detected according to the N groups of cutting parameters in the image cutting information to obtain N pieces of cutting data. It then acquires one of the N pieces at a time, scans it with each size window from left to right and from top to bottom, moving the window by one window step each time, records the window scanning end position, obtains the detection value of each window scanning area that each size window passes over, and compares each detection value with a preset value to obtain the face detection result of each window scanning area. Because each piece of cutting data is smaller than the whole image, the hardware resources occupied at any one time are reduced, and by processing the pieces one after another, face detection on a large-size image can be performed even when hardware resources are limited. This lowers the occupancy rate of hardware resources, improves their utilization, and yields high flexibility and a wide application range.
An embodiment of the present application further provides a computer-readable storage medium that stores a computer program comprising program instructions; when executed by a processor, the program instructions implement the face detection method provided in the steps in fig. 1. Refer to the implementation manners provided in those steps, which are not repeated here.
The computer-readable storage medium may be an internal storage unit of the face detection apparatus or terminal device provided in any of the foregoing embodiments, such as a hard disk or memory of the electronic device. It may also be an external storage device of the electronic device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the electronic device. Further, it may include both an internal storage unit and an external storage device of the electronic device. The computer-readable storage medium is used to store the computer program and other programs and data required by the electronic device, and may also be used to temporarily store data that has been output or is to be output.
The terms "first", "second", "third", "fourth", and the like in the claims and in the description and drawings of the present application are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments. The term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items. Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or combinations of both, and that the components and steps of the examples have been described in a functional general in the foregoing description for the purpose of illustrating clearly the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The method and the related apparatus provided by the embodiments of the present application are described with reference to the flowcharts and/or structural diagrams provided herein. Each flow and/or block of the flowcharts and/or structural diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing apparatus to produce a machine, such that the instructions executed by the processor create means for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the structural diagram. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement those functions. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on it to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing those functions.

Claims (9)

1. A method of face detection, the method comprising:
acquiring image processing information, wherein the image processing information comprises sliding window information and image cutting information; the sliding window information comprises window sizes and window steps, the window sizes comprise a plurality of sizes, each size corresponds to one size window, and each size window corresponds to one window step; the image cutting information comprises N groups of cutting parameters, the i-th group of cutting parameters indicating the cutting mode of the i-th piece of cutting data, where i is greater than or equal to 1 and less than or equal to N;
cutting the image data to be detected according to the N groups of cutting parameters included in the image cutting information to obtain N pieces of cutting data;
acquiring the 1st piece of the N pieces of cutting data, performing window scanning from the starting point of the 1st piece based on the window indicated by the sliding window information to obtain the detection value of each window scanning area in the 1st piece, and determining whether a human face exists in each window scanning area according to its detection value;
acquiring the other (N-1) pieces of cutting data, performing a window scanning detection operation on each of the other (N-1) pieces based on the window indicated by the sliding window information to obtain the detection value of each window scanning area included in each of those pieces, and determining whether a human face exists in each window scanning area according to its detection value, wherein the window scanning end position of any piece of cutting data among the N pieces is the window scanning start position of the piece following it;
wherein the method further comprises:
obtaining the maximum size window among the size windows corresponding to the plurality of sizes and taking it as a first-size window, the first-size window corresponding to a first window step;
determining the 1st group of cutting parameters according to the first-size window and the image column width of the image to be detected;
and determining the other N-1 groups of cutting parameters according to the first window step and the image column width of the image to be detected, to obtain the N groups of cutting parameters.
2. The method according to claim 1, wherein cutting the image data to be detected according to the N groups of cutting parameters included in the image cutting information to obtain the N pieces of cutting data comprises:
acquiring the image data to be detected;
and cutting the image data to be detected according to the cutting mode indicated by each of the N groups of cutting parameters to obtain the N pieces of cutting data.
3. The method of claim 1, wherein the window sizes comprise at least a third size and a second size, the third size corresponding to a third-size window, the second size corresponding to a second-size window, the third-size window corresponding to a third window step, and the second-size window corresponding to a second window step;
wherein performing window scanning from the starting point of the 1st piece of cutting data based on the window indicated by the sliding window information to obtain the detection value of each window scanning area in the 1st piece comprises:
performing window scanning with the third-size window from the starting point of the 1st piece, from left to right and from top to bottom, moving the window by one third window step each time, to obtain the detection value of each window scanning area the third-size window passes over in the 1st piece;
performing window scanning with the second-size window from the starting point of the 1st piece, from left to right and from top to bottom, moving the window by one second window step each time, to obtain the detection value of each window scanning area the second-size window passes over in the 1st piece;
and determining the detection values of the window scanning areas passed over by the third-size window and by the second-size window in the 1st piece as the detection values of the window scanning areas in the 1st piece of cutting data.
4. The method of claim 3, further comprising:
for any piece of the N pieces of cutting data, when the third-size window scans to any window scanning area on that piece and the data within the third-size window is smaller than the third size, recording, as a first stopping position, the position on that piece at which the third-size window scanned that window scanning area;
when the second-size window scans to any window scanning area on that piece and the data within the second-size window is smaller than the second size, recording, as a second stopping position, the position on that piece at which the second-size window scanned that window scanning area;
and determining the first stopping position and the second stopping position as the window scanning end position of that piece of cutting data.
5. The method of claim 4, further comprising:
for the piece of cutting data following that piece, performing window scanning on it with the third-size window, from the first stopping position, from left to right and from top to bottom, moving the window by one third window step each time, to obtain the detection value of each window scanning area the third-size window passes over in the following piece;
performing window scanning on the following piece with the second-size window, from the second stopping position, from left to right and from top to bottom, moving the window by one second window step each time, to obtain the detection value of each window scanning area the second-size window passes over in the following piece;
and determining the detection values of the window scanning areas passed over by the third-size window and by the second-size window in the following piece as the detection values of the window scanning areas in the following piece of cutting data.
6. The method according to any one of claims 1-5, wherein determining whether a human face exists in each window scanning area according to the detection value of each window scanning area comprises:
comparing the detection value of each window scanning area with a preset value, and if the detection value of any window scanning area is greater than or equal to the preset value, determining that a human face exists in that window scanning area.
7. An apparatus for face detection, the apparatus comprising:
an information acquisition unit, configured to acquire image processing information, wherein the image processing information comprises sliding window information and image cutting information; the sliding window information comprises window sizes and window steps, the window sizes comprise a plurality of sizes, each size corresponds to one size window, and each size window corresponds to one window step; the image cutting information comprises N groups of cutting parameters, the i-th group of cutting parameters indicating the cutting mode of the i-th piece of cutting data, where i is greater than or equal to 1 and less than or equal to N;
a data cutting unit, configured to cut the image data to be detected according to the N groups of cutting parameters included in the image cutting information acquired by the information acquisition unit, to obtain N pieces of cutting data;
a cutting data processing unit, configured to acquire the 1st piece of the N pieces of cutting data obtained by the data cutting unit, perform window scanning from the starting point of the 1st piece based on the window indicated by the sliding window information to obtain the detection value of each window scanning area in the 1st piece, and determine whether a human face exists in each window scanning area according to its detection value;
wherein the cutting data processing unit is further configured to acquire the other (N-1) pieces of cutting data, perform a window scanning detection operation on each of the other (N-1) pieces based on the window indicated by the sliding window information to obtain the detection value of each window scanning area included in each of those pieces, and determine whether a human face exists in each window scanning area according to its detection value, wherein the window scanning end position of any piece of cutting data among the N pieces is the window scanning start position of the piece following it;
the face detection apparatus further includes:
a cutting parameter determining unit, configured to obtain a maximum size window of multiple size windows corresponding to the multiple sizes, and use the maximum size window as a first size window, where the first size window corresponds to a first window step length;
determining a 1 st group of cutting parameters according to the first size window and the image column width of the image to be detected;
and determining other N-1 groups of cutting parameters except the 1 st group of cutting parameters according to the first window step length and the image column width of the image to be detected to obtain the N groups of cutting parameters.
8. A terminal device, comprising a processor and a memory, the processor and the memory being interconnected;
wherein the memory is configured to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the method of any one of claims 1-6.
9. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method of any one of claims 1-6.
CN201811541438.5A 2018-12-17 2018-12-17 Face detection method and device Active CN109657603B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811541438.5A CN109657603B (en) 2018-12-17 2018-12-17 Face detection method and device

Publications (2)

Publication Number Publication Date
CN109657603A CN109657603A (en) 2019-04-19
CN109657603B (en) 2021-05-11

Family

ID=66113795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811541438.5A Active CN109657603B (en) 2018-12-17 2018-12-17 Face detection method and device

Country Status (1)

Country Link
CN (1) CN109657603B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1889667A (en) * 2006-07-26 2007-01-03 浙江大学 Video frequency signal multi-processor parallel processing method
CN103049733A (en) * 2011-10-11 2013-04-17 株式会社理光 Human face detection method and human face detection equipment
CN105338236A (en) * 2014-07-25 2016-02-17 诺基亚技术有限公司 Method and apparatus for detecting object in image and electronic device
CN106991363A (en) * 2016-01-21 2017-07-28 北京三星通信技术研究有限公司 A kind of method and apparatus of Face datection

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271515B (en) * 2007-03-21 2014-03-19 株式会社理光 Image detection device capable of recognizing multi-angle objective
CN105095866B (en) * 2015-07-17 2018-12-21 重庆邮电大学 A kind of quick Activity recognition method and system
CN108090908B (en) * 2017-12-07 2020-02-04 深圳云天励飞技术有限公司 Image segmentation method, device, terminal and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant