CN116309488A - Image definition detection method, device, electronic equipment and readable storage medium


Info

Publication number
CN116309488A
CN116309488A (application CN202310295000.8A)
Authority
CN
China
Prior art keywords
face
area
image
detection
key points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310295000.8A
Other languages
Chinese (zh)
Inventor
周凡 (Zhou Fan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huya Technology Co Ltd
Original Assignee
Guangzhou Huya Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huya Technology Co Ltd filed Critical Guangzhou Huya Technology Co Ltd
Priority to CN202310295000.8A priority Critical patent/CN116309488A/en
Publication of CN116309488A publication Critical patent/CN116309488A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/98 Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V 10/993 Evaluation of the quality of the acquired pattern
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image sharpness detection method and apparatus, an electronic device, and a readable storage medium. A face area containing a plurality of face key points is determined in a face image to be detected. Detection areas are then divided based on the face key points contained in the face area, the direction information and form information of a Gaussian kernel are set according to the information of the detection area, and Gaussian smoothing filtering is performed on the detection area based on the set Gaussian kernel. Finally, the gradient value of the processed detection area is calculated and used to judge whether the detection area is clear, and in turn whether the face image is clear. In this scheme, setting the direction information and form information of the Gaussian kernel used in the Gaussian smoothing filtering based on the information of the detection area improves the robustness of the subsequent gradient calculation result, and thus the robustness of the image sharpness detection result.

Description

Image definition detection method, device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to an image sharpness detection method, an image sharpness detection device, an electronic apparatus, and a readable storage medium.
Background
In application fields involving face images, there is a need to determine the sharpness of face images: for example, judging the sharpness of face images during live streaming in live broadcast scenarios, or the sharpness of face images during playback in video playing scenarios.
In the prior art, image sharpness is commonly judged using gradient or energy-function algorithms, or using no-reference image quality assessment models. These are, however, general-purpose algorithms that perform no tuning specific to the face image to be detected, and they lack robustness when confronted with the many types of face images to be processed.
Disclosure of Invention
The purpose of the application includes, for example, providing an image sharpness detection method, an image sharpness detection apparatus, an electronic device, and a readable storage medium, which can improve the robustness of the image gradient calculation result and, in turn, the robustness of the image sharpness detection result.
Embodiments of the present application may be implemented as follows:
in a first aspect, the present application provides an image sharpness detection method, the method comprising:
determining, for a face image to be detected, a face area in the face image, wherein the face area comprises a plurality of face key points;
dividing a detection area based on a plurality of face key points contained in the face area;
setting direction information and form information of a Gaussian kernel according to the information of the detection area, and carrying out Gaussian smoothing filtering processing on the detection area based on the set Gaussian kernel;
and calculating the gradient value of the processed detection area, judging whether the detection area is clear or not based on the gradient value, and further judging whether the face image is clear or not.
In an optional embodiment, the step of setting direction information and morphology information of the gaussian kernel according to the information of the detection area includes:
obtaining the rotation angle and the aspect ratio of the detection area;
setting a rotation angle in the direction information of the Gaussian kernel according to the rotation angle of the detection area;
and setting the aspect ratio in the morphology information of the Gaussian kernel according to the aspect ratio of the detection area.
In an optional embodiment, the step of dividing the detection area based on the plurality of face keypoints contained in the face area includes:
Obtaining position information of each face key point contained in the face area;
and determining a detection frame according to the obtained position information of the plurality of face key points, so as to mark out a detection area surrounded by the detection frame, wherein the plurality of face key points are positioned in the detection frame, and the direction information of the detection frame is consistent with the overall direction information of the plurality of face key points.
In an optional embodiment, the step of determining the detection frame according to the obtained position information of the plurality of face keypoints includes:
determining the face key points positioned at the edge positions according to the obtained position information of each of the face key points;
performing connection processing on the key points of the human face positioned at the edge position;
and determining the direction of the frame based on the direction of the formed connecting line, and scribing a detection frame surrounding the connecting line based on the direction of the frame.
In an alternative embodiment, there are a plurality of face areas;
before the step of dividing the detection area based on the plurality of face key points contained in the face area, the method further comprises:
and screening effective face areas from the face areas based on a plurality of face key points contained in each face area.
In an alternative embodiment, each of the face keypoints has a corresponding keypoint confidence;
the step of screening the effective face area from the face areas based on the face key points contained in the face areas comprises the following steps:
judging, for each face key point in the face area, whether the face key point is occluded based on the keypoint confidence corresponding to the face key point;
and judging whether the face area is occluded according to the proportion of face key points judged to be occluded in the face area, so as to determine whether the face area is an effective face area.
In an optional implementation manner, the step of screening the valid face area from the face areas based on the face key points included in each face area includes:
for each face area, obtaining the key point numbers of the face key points in the face area;
and detecting whether the key point number is a set number so as to judge whether the face area is a valid face area.
In an alternative embodiment, the step of calculating the gradient value of the processed detection region includes:
Obtaining pixel intensity of each pixel point contained in the processed detection area;
and carrying out gradient calculation based on the pixel intensity of each pixel point, and combining gradient calculation results of a plurality of pixel points to obtain a gradient value of the detection area.
In an alternative embodiment, the method further comprises:
carrying out blurring processing on the face image judged to be clear;
taking the face image after the blurring process as a sample image, and taking the face image before the blurring process as a real label of the sample image;
training the constructed processing model based on the sample image and the corresponding real label to obtain the processing model meeting the preset requirement.
In a second aspect, the present application provides an image sharpness detection apparatus, the apparatus comprising:
the determining module is used for determining, for a face image to be detected, a face area in the face image, wherein the face area comprises a plurality of face key points;
the dividing module is used for dividing a detection area based on a plurality of face key points contained in the face area;
the processing module is used for setting the direction information and the form information of the Gaussian kernel according to the information of the detection area and carrying out Gaussian smoothing filtering processing on the detection area based on the set Gaussian kernel;
The detection judging module is used for calculating the gradient value of the processed detection area, judging whether the detection area is clear or not based on the gradient value, and further judging whether the face image is clear or not.
In a third aspect, the present application provides an electronic device comprising one or more storage media and one or more processors in communication with the storage media, the one or more storage media storing machine-executable instructions that are executable by the processor to perform the method steps recited in any one of the preceding embodiments when the electronic device is operated.
In a fourth aspect, the present application provides a computer-readable storage medium storing machine-executable instructions which, when executed by a processor, implement the method steps of any of the preceding embodiments.
The beneficial effects of the embodiment of the application include, for example:
the application provides an image definition detection method, an image definition detection device, electronic equipment and a readable storage medium, wherein a face area in a face image to be detected is determined, and the face area contains a plurality of face key points. And then, dividing a detection area based on a plurality of face key points contained in the face area, setting the direction information and the form information of the Gaussian kernel according to the information of the detection area, and carrying out Gaussian smoothing filtering processing based on the set Gaussian kernel detection area. And finally, calculating the gradient value of the processed detection area, judging whether the detection area is clear or not based on the gradient value, and further judging whether the face image is clear or not. In the scheme, the robustness of the subsequent gradient calculation result can be improved by setting the direction information and the form information of the Gaussian kernel used in Gaussian smoothing filtering processing based on the information of the detection area, and the robustness of the image definition detection judgment result is further improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting the scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an application scenario of an image sharpness detection method provided in an embodiment of the present application;
fig. 2 is a flowchart of an image sharpness detection method according to an embodiment of the present application;
FIG. 3 is another flowchart of an image sharpness detection method according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of sub-steps included in step S11 of FIG. 3;
fig. 5 is a schematic diagram of a face key point provided in an embodiment of the present application;
FIG. 6 is another flowchart of sub-steps included in step S11 of FIG. 3;
FIG. 7 is a flow chart of sub-steps included in step S12 of FIG. 2;
FIG. 8 is a flowchart of sub-steps included in step S122 of FIG. 7;
fig. 9 is a schematic diagram of a detection frame based on face key point scribing according to an embodiment of the present application;
FIG. 10 is a flow chart of sub-steps included in step S14 of FIG. 2;
FIG. 11 (a) is a schematic diagram of an isotropic Gaussian blur kernel;
FIG. 11 (b) is a schematic diagram of an anisotropic Gaussian blur kernel;
FIG. 12 is a flow chart of sub-steps included in step S16 of FIG. 2;
FIG. 13 is a flow chart of process model training provided in an embodiment of the present application;
fig. 14 is a block diagram of an electronic device according to an embodiment of the present application;
fig. 15 is a functional block diagram of an image sharpness detection apparatus according to an embodiment of the present application.
Icon: 100 - live broadcast providing terminal; 200 - live broadcast server; 300 - live broadcast receiving terminal; 110 - storage medium; 120 - processor; 130 - image sharpness detection apparatus; 131 - determination module; 132 - dividing module; 133 - processing module; 134 - detection judgment module; 140 - communication interface.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
It should be noted that, without conflict, features in embodiments of the present application may be combined with each other.
The image sharpness detection method provided by the application can be applied in scenarios where a single device performs the processing on its own; the device may be a mobile phone, a tablet computer, a personal computer, a server, or the like. For example, sharpness detection and judgment may be performed on individual images stored on the device, on image frames contained in a stored video, or on image frames of a video played in real time, so as to evaluate image sharpness. In such a scenario, for example, the sharpness of pictures in a movie played on the device can be evaluated.
In addition, the image sharpness detection method provided by the application can also be applied in scenarios where a plurality of terminal devices communicate, to detect and judge the sharpness of images acquired by the terminal devices. For example, the application scenario may be a live webcast, in which the sharpness of face images in the live video collected in the live broadcast scenario is evaluated.
For example, referring to fig. 1, a possible application scenario of the image sharpness detection method provided in the present embodiment is shown. The application scenario includes a live broadcast providing terminal 100, a live broadcast server 200, and a live broadcast receiving terminal 300. The live broadcast server 200 is respectively connected to the live broadcast providing terminal 100 and the live broadcast receiving terminal 300 in a communication manner, and is configured to provide live broadcast services for the live broadcast providing terminal 100 and the live broadcast receiving terminal 300.
For example, the live providing terminal 100 may transmit a live video stream to the live server 200, and a viewer may access the live server 200 through the live receiving terminal 300 to watch the live video.
It will be appreciated that the scenario shown in fig. 1 is only one possible example, and that the method provided herein may also be applied in other scenarios, and that in other possible embodiments, the scenario may also include only a portion of the components shown in fig. 1 or may also include other components.
In this embodiment, the live broadcast providing terminal 100 and the live broadcast receiving terminal 300 may be, but are not limited to, a smart phone, a personal digital assistant, a tablet computer, a personal computer, a notebook computer, a virtual reality terminal device, an augmented reality terminal device, and the like. Among them, the live broadcast providing terminal 100 and the live broadcast receiving terminal 300 may have installed therein an internet product for providing an internet live broadcast service, for example, the internet product may be an application APP, a Web page, an applet, etc. related to the internet live broadcast service used in a computer or a smart phone.
In this embodiment, the scenario may further include an image capturing device for capturing a face image of an anchor or of a viewer; the image capturing device may be, but is not limited to, a video camera, a digital camera, a depth camera, or the like. The image capturing device may be directly installed on or integrated with the live broadcast providing terminal 100 or the live broadcast receiving terminal 300. For example, the image capturing device may be a camera configured on the live broadcast providing terminal 100, and other modules or components in the live broadcast providing terminal 100 may receive the video and images transmitted from the image capturing device via an internal bus.
Alternatively, the image capturing device may be independent of the live broadcast providing terminal 100 or the live broadcast receiving terminal 300 and communicate with it by wired or wireless means. It should be noted that the foregoing is only one possible implementation scenario of the image sharpness detection method provided in the present application; the method may also be used to process individually acquired pictures.
Fig. 2 is a schematic flow chart of an image sharpness detection method according to an embodiment of the present application, where the image sharpness detection method may be performed by an image sharpness detection apparatus, and the image sharpness detection apparatus may be implemented by software and/or hardware, and may be configured in an electronic device, where the electronic device may be any one of the live broadcast server 200, the live broadcast providing terminal 100, and the live broadcast receiving terminal 300. The detailed steps of the image sharpness detection method are described below.
S10, determining, for a face image to be detected, a face area in the face image, wherein the face area comprises a plurality of face key points.
S12, dividing a detection area based on a plurality of face key points contained in the face area.
S14, setting the direction information and the form information of the Gaussian kernel according to the information of the detection area, and carrying out Gaussian smoothing filtering processing on the detection area based on the set Gaussian kernel.
S16, calculating the gradient value of the processed detection area, judging whether the detection area is clear or not based on the gradient value, and further judging whether the face image is clear or not.
In this embodiment, the face image to be detected may be an image acquired in real time, or an image acquired and stored earlier, i.e., in a historical period. For example, it may be an image stored on the device, or an image collected by the device in real time, such as an anchor face image or an audience face image collected in real time in a live broadcast scenario.
The face image to be detected may be a single image, or may be successive image frames obtained by splitting a video into frames; this embodiment imposes no particular limitation in this regard.
For the face image to be detected, the face area in the face image can be determined in an existing, general-purpose way; for example, the face area can be detected using an existing, trained face detection model. Once the face area is detected, a plurality of face key points on the face in the face area can be further detected, where the face key points are points that locate the facial features and contours, such as the eyes, nose, mouth, and eyebrows.
In this embodiment, the portion containing the face area may be cropped out and scaled to a uniform size, for example 512×512 pixels, to facilitate normalization.
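As an illustration only, a minimal Python sketch of this crop-and-normalize step is given below, assuming OpenCV-style image arrays and an (x, y, w, h) box format; the function and parameter names are illustrative and not taken from the patent:

```python
import cv2
import numpy as np

def normalize_face_crop(image: np.ndarray, box: tuple, size: int = 512) -> np.ndarray:
    """Crop the detected face area and rescale it to a fixed size.

    Resizing every face to the same resolution keeps the later gradient
    statistics comparable across images.
    """
    x, y, w, h = box
    face = image[y:y + h, x:x + w]
    return cv2.resize(face, (size, size), interpolation=cv2.INTER_LINEAR)
```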
In order to achieve finer detection and judgment, in this embodiment a detection area may be divided out based on the plurality of face key points contained in the face area. The division may follow the different parts of the face: for example, the face key points in the nose area are divided into one detection area, and the face key points in the mouth area into another. In this embodiment, the direction and form of each detection area obtained by the division are kept consistent with the corresponding facial feature.
On the basis of the above, this embodiment calculates the gradient value of each detection area using a local Laplacian gradient algorithm to evaluate image sharpness. However, since the Laplacian operator is an approximate estimate of the second derivative of the image, it is sensitive to noise in the image. For this reason, in this embodiment the image is first subjected to Gaussian smoothing filtering before the Laplacian gradient is calculated.
This embodiment also takes into account that, when evaluating image sharpness, the images to be processed differ not only in sharpness but also in shape, direction, and so on. Because of these other differences, if a generic processing mode is applied, the image cannot be tuned in a targeted way, which in turn affects the gradient calculation result.
In view of this, in this embodiment, when Gaussian smoothing filtering is performed on the image, the direction information and form information of the Gaussian kernel are first set based on the information of the detection area, and the Gaussian smoothing filtering is then performed on the detection area based on the set Gaussian kernel. The Gaussian kernel can thus be set in a targeted way for different detection areas, ensuring that the kernel used in the Gaussian smoothing filtering corresponds to the detection area and achieving a better tuning effect on the detection area image.
On this basis, a local Laplacian gradient operator is applied to each detection area to calculate its gradient value. Because a blurred image has insufficiently clear edges and therefore a lower overall gradient value, the clarity of each detection area can be judged based on the obtained gradient value.
Since the face image contains a plurality of detection areas, once the gradient value of each detection area has been obtained and each detection area judged clear or not, the per-area evaluation results can be combined into an overall sharpness evaluation result for the face image.
According to the image definition detection scheme provided by the embodiment, the direction information and the form information of the Gaussian kernel used in Gaussian smoothing filtering processing are set based on the information of the detection area, so that targeted image tuning can be realized, the robustness of a subsequent gradient calculation result can be improved, and the robustness of an image definition detection judgment result is further improved.
In this embodiment, it is considered that certain parts of the face are special in practical application scenarios, and evaluating image sharpness based on these parts may affect the accuracy of the evaluation result. For example, due to background-blurring effects, the gradient values at the contour edges of a face may not be high even though the face itself is clear. Conversely, even in a blurred image, the eyebrow area may have a high gradient value because of dense hair texture. The information from these parts is therefore detrimental to the accuracy of the overall sharpness judgment of the image.
In this embodiment, in the step of determining the face area in the face image, there may be a plurality of face areas, for example face areas corresponding to the eyes, the eyebrows, the face contour, the nose, the mouth, and so on, each containing a plurality of face key points.
Referring to fig. 3, the image sharpness detection method provided in the present embodiment may further include the following steps:
S11, screening effective face areas from the plurality of face areas based on the face key points contained in each face area.
Thus, the sharpness evaluation is then performed based on the effective face area that is screened out.
In one possible implementation manner, the step of selecting a valid face region from the plurality of face regions may be implemented as follows, please refer to fig. 4 in combination:
S110A, for each face area, obtaining the key point numbers of the face key points in the face area.
S112A, detecting whether the key point number is a set number so as to judge whether the face area is a valid face area.
In this embodiment, the same key point numbering mode is used to number the detected key points of the face for different face images. For example, as shown in fig. 5, the numbering may be performed from the contour on the left side of the face, and after the numbering of the face contour is finished, the face key points in the eyebrows, the eyes, the nose, and the mouth are sequentially numbered.
Thus, the part of the face to which a given face key point belongs can be determined from its key point number.
As described above, given the adverse effect of the face contour, eyebrows, and similar parts on the face-sharpness evaluation result, the numbers of the face key points of the face contour, eyebrows, and so on can be designated in advance as the set numbers. When a specific face image is processed, comparing the key point numbers against the set numbers reveals whether the corresponding face area is the face contour, an eyebrow, etc., and hence whether it is an effective face area. For example, if the area is determined to be the face contour or an eyebrow, it is not an effective face area, and its information is not used in the sharpness judgment.
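A minimal sketch of this number-based screening is given below, assuming a 68-point-style numbering in which the face contour and eyebrows occupy fixed index ranges; the concrete index values are an assumption for illustration and are not specified by the patent:

```python
# Assumed index ranges (68-point layout): 0-16 face contour, 17-26 eyebrows.
SET_NUMBERS = set(range(0, 27))

def is_informative_area(keypoint_numbers: list) -> bool:
    """A face area is treated as invalid when its key points carry numbers
    that fall in the contour/eyebrow ranges excluded by prior knowledge."""
    return not any(n in SET_NUMBERS for n in keypoint_numbers)
```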
In addition, in this embodiment, the keypoint confidence of each face key point may be obtained in advance using related techniques; the keypoint confidence represents the likelihood that the detected face key point is an actual face key point. The keypoint confidence can thus indirectly indicate whether the face key point is occluded. If a face area is occluded over a large portion, it has little reference value for image sharpness evaluation.
Based on this, in this embodiment, in one possible implementation manner, the step of selecting the effective face area from the plurality of face areas may be implemented as follows, please refer to fig. 6 in combination:
S110B, for each face key point in the face area, judging whether the face key point is occluded based on the keypoint confidence corresponding to the face key point.
S112B, judging whether the face area is occluded according to the proportion of face key points judged to be occluded in the face area, so as to determine whether the face area is an effective face area.
In this embodiment, a confidence threshold H_conf may be set. If the keypoint confidence of a face key point is greater than or equal to the confidence threshold, the face key point is considered not occluded; if the keypoint confidence is below the confidence threshold, the face key point is considered occluded.
For a given face area, the total number of face key points in the area can be counted, along with the number of face key points judged to be occluded. Dividing the number of occluded face key points by the total number gives the proportion of face key points judged to be occluded.
In addition, a proportion threshold may be set. When the proportion of face key points judged to be occluded exceeds this threshold, the face area is occluded over a large portion, for example when the eyes are covered by hair, or when one side of the face is hidden by the face itself in a profile view. In this case, the face area does not satisfy the reference condition and may be regarded as an invalid face area. The remaining face areas, whose proportion of occluded face key points is less than or equal to the proportion threshold, are taken as effective face areas.
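The occlusion screening just described can be sketched as follows; the threshold values are illustrative assumptions, since the patent fixes neither H_conf nor the proportion threshold:

```python
def is_effective_area(confidences: list, h_conf: float = 0.5,
                      ratio_threshold: float = 0.5) -> bool:
    """Judge whether a face area is an effective (usable) area.

    A key point is treated as occluded when its confidence is below h_conf;
    the area is discarded when the proportion of occluded key points
    exceeds ratio_threshold.
    """
    occluded = sum(1 for c in confidences if c < h_conf)
    return occluded / len(confidences) <= ratio_threshold
```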
In this embodiment, when effective face region screening is performed, screening may be performed by any one of the above-mentioned manners based on the key point numbers and the key point confidence degrees, or by a combination of two manners, which is not specifically limited in this embodiment.
In this embodiment, after the effective face areas are screened out in the above manner, if few effective face areas remain, for example because the face is occluded over a large area, the face image to be detected contains little usable information; it may be considered to have no evaluation value, and subsequent processing may be skipped.
On the basis of the above, the face area needs to be processed to determine the subsequent gradient detection range. Referring to fig. 7, in the present embodiment, the gradient detection range can be defined by the following method:
S120, obtaining position information of each face key point contained in the face area.
S122, determining a detection frame according to the obtained position information of the plurality of face key points so as to mark a detection area surrounded by the detection frame.
In this embodiment, the gradient detection range that is defined needs to include each face key point, so that position information of each face key point in the face area may be determined first, where the position information may be coordinate information on a coordinate system that is constructed based on the face image.
After the position information of each face key point is obtained, a detection frame can be marked out such that the plurality of face key points contained in the face area lie inside it. So that the information of the detection frame conveniently reflects the information of the plurality of face key points as a whole, the direction information of the marked detection frame is kept consistent with the overall direction information of the plurality of face key points.
Referring to fig. 8, in this embodiment, as a possible implementation manner, the detection frame may be specifically determined by:
S1220, determining the face key points at the edge positions according to the obtained position information of each of the face key points.
S1222, executing connection processing on the key points of the face located at the edge position.
S1224, determining a frame direction based on the direction of the formed connecting line, and scribing a detection frame surrounding the connecting line based on the frame direction.
In this embodiment, the face key points located at the edge positions may be the face key points located at the uppermost side, the lowermost side, the leftmost side, the rightmost side, and the like. As shown in fig. 9, for the right eye in the face image, the face key points located at the edge positions are the key points in the right eye area of the figure, i.e. the upper, lower, left and right key points.
When the connection processing is performed on the face key points located at the edge positions, the face key points located at the upper and lower positions can be connected, and the face key points located at the left and right positions can be connected. For example, as shown in fig. 9, a connection line constituting a cross is obtained.
The direction of the formed connecting lines reflects the orientation of the facial feature in the corresponding face area as a whole. In FIG. 9, the right eye is rotated by a certain angle relative to the vertical direction, and the connecting line constructed for the right eye is correspondingly rotated relative to the vertical direction. The nose, by contrast, is upright, so the nose as a whole has no rotation angle relative to the vertical direction, and accordingly the connecting line constructed for the nose has none either.
Therefore, the frame direction can be determined based on the direction of the connecting line, so that the direction of the marked detection frame is consistent with the direction of the whole of the plurality of face key points.
And (3) carrying out detection frame scribing based on the obtained frame direction, and enabling the detection frame to surround the connecting line.
The shape of the detection frame may be rectangular, square, elliptical, or the like, without limitation.
In order to ensure that all the face key points can be located inside the detection frame, in this embodiment, a certain scaling factor may be further used to perform scaling processing on the detection frame, for example, the scaling factor may be 1.25 to expand the detection frame.
In this embodiment, marking out the detection frame in the above manner to determine the detection area makes the direction of the detection area change along with the direction of the corresponding facial feature in the image. The detection area therefore has rotation invariance, which helps improve the robustness of the gradient detection.
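The following sketch derives such an oriented detection frame from a feature's key points; taking the frame direction from the line joining the topmost and bottommost key points is one concrete reading of the cross-shaped connecting lines above, and the 1.25 expansion factor follows the example given earlier:

```python
import numpy as np

def oriented_detection_box(points: np.ndarray, scale: float = 1.25):
    """Derive an oriented detection frame from an (N, 2) array of key points.

    The frame direction follows the vertical connecting line (topmost to
    bottommost key point); the frame is expanded by the scaling factor so
    that all key points fall inside it.
    """
    top = points[points[:, 1].argmin()]
    bottom = points[points[:, 1].argmax()]
    # Rotation of the feature's vertical connecting line from the y-axis.
    angle = np.arctan2(bottom[0] - top[0], bottom[1] - top[1])

    # Rotate the points into frame-aligned coordinates and take the extents.
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s], [s, c]])
    center = points.mean(axis=0)
    aligned = (points - center) @ rot.T
    width = np.ptp(aligned[:, 0]) * scale
    height = np.ptp(aligned[:, 1]) * scale
    return center, width, height, angle  # angle in radians
```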
On the basis of the detection area determined above, Gaussian smoothing filtering is performed on the detection area. In this embodiment, a Gaussian kernel tailored to the detection area is used in the processing, and the kernel may be set as follows; please refer to FIG. 10 in combination:
S140, obtaining the rotation angle and the aspect ratio of the detection area.
S142, setting the rotation angle in the direction information of the Gaussian kernel according to the rotation angle of the detection area.
S144, setting the aspect ratio in the morphology information of the Gaussian kernel according to the aspect ratio of the detection area.
In this embodiment, a Gaussian kernel k of size (2t+1)×(2t+1) centered on (0, 0) is used, with elements indexed by (i, j) ∈ [−t, t], satisfying the following Gaussian distribution:
$$k(i,j) = \frac{1}{N}\exp\left(-\frac{1}{2}\,c^{T}\Sigma^{-1}c\right)$$
where $c=[i,j]^{T}$ is the spatial coordinate, $\Sigma$ is the covariance matrix, and $N$ is a normalization constant. The covariance matrix is further expressed as:
$$\Sigma = R\begin{bmatrix}\sigma_{1}^{2} & 0\\ 0 & \sigma_{2}^{2}\end{bmatrix}R^{T},\qquad R=\begin{bmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix}$$
where $R$ is the rotation matrix, $\sigma_{1},\sigma_{2}$ are the standard deviations along the two principal axes, and $\theta$ is the rotation angle. In this formula, $\theta$ determines the direction of the Gaussian kernel, while $\sigma_{1},\sigma_{2}$ determine its form. When $\sigma_{1}=\sigma_{2}$, the Gaussian kernel is isotropic, as shown in FIG. 11(a). When $\sigma_{1}\neq\sigma_{2}$, the Gaussian kernel is anisotropic, as shown in FIG. 11(b).
In this embodiment, the rotation angle and the aspect ratio of the Gaussian kernel are set according to the rotation angle and the aspect ratio of the detection area, respectively. When the detection frame forming the detection area is rectangular, the aspect ratio may be the ratio of the long side to the short side of the detection frame. If the detection frame is elliptical, the aspect ratio may be the ratio of the major axis to the minor axis of the ellipse.
In this embodiment, the rotation angle of the gaussian kernel may be set to coincide with the rotation angle of the detection region, and the aspect ratio of the gaussian kernel may be set to be positively correlated with the aspect ratio of the detection region. Therefore, the direction of the Gaussian kernel is consistent with the direction of the detection area, the shape of the Gaussian kernel is consistent with the shape of the detection area, and the robustness of the gradient calculation result is improved.
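Following the covariance form above, an anisotropic Gaussian kernel oriented and shaped to match the detection area can be built as sketched below; how sigma1/sigma2 is derived from the frame's aspect ratio is an assumed choice, since the patent only requires the kernel's aspect ratio to be positively correlated with the area's:

```python
import numpy as np

def anisotropic_gaussian_kernel(t: int, sigma1: float, sigma2: float,
                                theta: float) -> np.ndarray:
    """Build a (2t+1)x(2t+1) kernel with orientation theta (radians) and
    principal-axis standard deviations sigma1, sigma2, following
    Sigma = R diag(sigma1^2, sigma2^2) R^T."""
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    cov_inv = np.linalg.inv(rot @ np.diag([sigma1**2, sigma2**2]) @ rot.T)
    rng = np.arange(-t, t + 1)
    jj, ii = np.meshgrid(rng, rng)
    c = np.stack([ii, jj], axis=-1)               # coordinates c = [i, j]^T
    k = np.exp(-0.5 * np.einsum('...k,kl,...l->...', c, cov_inv, c))
    return k / k.sum()                            # 1/N normalization
```

For example, theta can be set equal to the detection frame's rotation angle and sigma1/sigma2 chosen so that their ratio tracks the frame's aspect ratio; the area is then filtered with cv2.filter2D(gray_area, -1, k).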
On the basis of the above, the detection area can be converted to grayscale and Gaussian smoothing filtering performed on it based on the set Gaussian kernel to achieve image tuning, after which the gradient value of the processed detection area is calculated. Referring to FIG. 12, in this embodiment the gradient value can be calculated as follows:
S160, obtaining the pixel intensity of each pixel point contained in the processed detection area.
S162, performing gradient calculation based on the pixel intensity of each pixel point, and combining the gradient calculation results of the pixel points to obtain the gradient value of the detection area.
The pixel intensity of any pixel in the detection area can be denoted as I(x, y), and the Laplacian gradient value of the pixel is calculated with the following formula:
$$\nabla^{2}I(x,y) = \frac{\partial^{2}I}{\partial x^{2}} + \frac{\partial^{2}I}{\partial y^{2}}$$
Combining the gradient values of all pixel points in the detection area gives the gradient value $F_{ROI}$ of the whole detection area, where $F_{ROI}$ may be the standard deviation of the gradient values of all pixels.
For a given detection area, a larger gradient value means more pronounced edges in the area, i.e., a clearer picture. Since the content of each detection area is relatively fixed (for example, the eye area contains only the eyes) and has rotation invariance, and since the form and direction of the Gaussian kernel are kept consistent with those of the detection area, the gradient of the detection area shows clear regularity.
In practical application, the gradient values of the corresponding detection areas in a collection of face images can first be gathered statistically, and a gradient threshold set based on the statistics to distinguish blur from clarity. If the gradient value of a detection area of the face image to be detected is greater than or equal to the gradient threshold, the detection area is judged to be clear; otherwise it is blurred.
If the face image has M detection areas and every detection area is judged to be clear, the overall sharpness of the face image may be taken as the average of the gradient values of the M detection areas, expressed as follows:
$$F = \frac{1}{M}\sum_{i=1}^{M} F_{ROI}^{(i)}$$
if the detected region determined to be blurred exists in the face image, the face image is determined to be blurred.
Based on the above, this embodiment achieves efficient and accurate image sharpness evaluation, so that the machine evaluation stays essentially consistent with subjective human judgment in manual screening. After face images of high sharpness are detected in the above manner, they can be put to other uses: for example, a high-sharpness face image can serve as a template for optimizing the image quality of an old movie, or provide reference information for live broadcast quality evaluation.
As a possible implementation manner, referring to fig. 13, on the basis of the foregoing, the image sharpness detection method provided in this embodiment may further include the following steps:
S20, carrying out blurring processing on the face image judged to be clear.
S22, taking the face image after blurring processing as a sample image, and taking the face image before blurring processing as a real label of the sample image.
S24, training the constructed processing model based on the sample image and the corresponding real label to obtain the processing model meeting the preset requirement.
In this embodiment, the face image judged to be clear may be subjected to blurring of the face contour, blurring of the other facial features, and so on, to obtain a corresponding blurred image.
The processing model may be built in advance based on a neural network model, which may be, but is not limited to, a convolutional neural network, a recurrent neural network, a deep auto-encoder, or the like. The blurred face image is fed into the processing model, while the clear face image before blurring serves as the real label. After the processing model processes the blurred face image, it produces a corresponding output image. Based on the output image and its real label, the processing model can be optimized through training so that its output matches the real label as closely as possible, i.e., so that the model can process a blurred face image into a clear one.
When the training of the processing model meets the preset requirement, for example, the training iteration number reaches a certain number of times, or the output result of the processing model meets a certain requirement, a trained processing model can be obtained.
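As an illustration of steps S20 to S24, the following PyTorch-style sketch trains a processing model on (blurred, sharp) pairs; the model architecture, loss function, and hyperparameters are all assumptions, since the patent does not specify them:

```python
import torch
import torch.nn as nn

def train_processing_model(model: nn.Module, pairs, epochs: int = 10,
                           lr: float = 1e-4) -> nn.Module:
    """pairs yields (blurred, sharp) image tensors: the blurred face image
    is the sample, and the sharp original is its real label."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for blurred, sharp in pairs:
            optimizer.zero_grad()
            loss = loss_fn(model(blurred), sharp)  # compare output to label
            loss.backward()
            optimizer.step()
    return model
```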
For example, when the image quality of a video such as an old movie is optimized, the blurred pictures can be processed with the trained processing model to output clear pictures. In actual implementation, the clear face images in the video can be screened out using the sharpness evaluation described above, the screened face images are then blurred, the processing model is obtained through training, and the other blurred face images in the video are then tuned using the model.
Because the processing model is trained with the clear face images of the video to be processed as real labels, its training data shares the same basic information as the blurred face images to be optimized. When the processing model is used to tune the other blurred face images, it is therefore well targeted, and the tuning effect can be greatly improved.
According to the image sharpness detection scheme provided by this embodiment, after the face areas are determined in the face image to be detected, the face areas with reference value are screened out according to prior knowledge, which safeguards the accuracy of the subsequent evaluation result.
On this basis, after a detection area, i.e., a region of interest, is marked out, the direction information and form information of the Gaussian kernel are set according to the information of the detection area, Gaussian smoothing is performed on the detection area based on the set Gaussian kernel, and whether the detection area is clear is judged based on the gradient value so as to evaluate the sharpness of the face image. In this way, setting the direction information and form information of the Gaussian kernel used in the Gaussian smoothing filtering based on the information of the detection area improves the robustness of the subsequent gradient calculation result, and in turn the robustness of the image sharpness detection result.
Further, the face images judged to be clear are blurred and then used as samples to train and optimize a processing model; the trained processing model can then be used to deblur blurred images.
Referring to FIG. 14, which shows an exemplary component diagram of an electronic device provided in an embodiment of the present application, the electronic device may be the live broadcast server 200, the live broadcast providing terminal 100, or the live broadcast receiving terminal 300. The electronic device may include a storage medium 110, a processor 120, an image sharpness detection apparatus 130, and a communication interface 140. In this embodiment, the storage medium 110 and the processor 120 are both located in the electronic device and are separately disposed. However, it should be understood that the storage medium 110 may also be separate from the electronic device and accessible to the processor 120 through a bus interface. Alternatively, the storage medium 110 may be integrated into the processor 120, for example as a cache and/or general purpose registers.
The image sharpness detection apparatus 130 may be understood as the above-mentioned electronic device, or the processor 120 of the electronic device, or may be understood as a software functional module that implements the above-mentioned image sharpness detection method under the control of the electronic device, independently of the above-mentioned electronic device or the processor 120.
As shown in FIG. 15, the image sharpness detection apparatus 130 may include a determination module 131, a dividing module 132, a processing module 133, and a detection judgment module 134. The functions of the respective functional modules of the image sharpness detection apparatus 130 are described in detail below.
A determining module 131, configured to determine, for a face image to be detected, a face area in the face image, where the face area includes a plurality of face key points;
it will be appreciated that the determination module 131 may be used to perform the above step S10, and reference may be made to the details of the implementation of the determination module 131 regarding the above step S10.
A dividing module 132, configured to divide a detection area based on a plurality of face key points included in the face area;
it is understood that the scribing module 132 may be used to perform the step S12, and reference may be made to the details of the implementation of the scribing module 132 in the foregoing description regarding the step S12.
A processing module 133, configured to set direction information and shape information of a gaussian kernel according to information of the detection area, and perform gaussian smoothing filtering processing on the detection area based on the set gaussian kernel;
it will be appreciated that the processing module 133 may be used to perform step S14 described above, and reference may be made to the details of step S14 regarding the implementation of the processing module 133.
The detection judging module 134 is configured to calculate a gradient value of the processed detection area, and judge whether the detection area is clear based on the gradient value, thereby judging whether the face image is clear.
It is understood that the detection and determination module 134 may be used to perform the step S16, and reference may be made to the details of the implementation of the detection and determination module 134 in the foregoing description regarding the step S16.
In one possible implementation, the processing module 133 may be configured to:
obtaining the rotation angle and the aspect ratio included in the information of the detection area;
setting a rotation angle in the direction information of the Gaussian kernel according to the rotation angle of the detection area;
and setting the aspect ratio in the morphology information of the Gaussian kernel according to the aspect ratio of the detection area.
In one possible implementation, the dividing module 132 may be used to:
obtaining position information of each face key point contained in the face area;
and determining a detection frame according to the obtained position information of the plurality of face key points, so as to mark out a detection area surrounded by the detection frame, wherein the plurality of face key points are positioned in the detection frame, and the direction information of the detection frame is consistent with the overall direction information of the plurality of face key points.
In one possible implementation, the dividing module 132 may be specifically configured to:
determining the face key points positioned at the edge positions according to the obtained position information of each of the face key points;
performing connection processing on the key points of the human face positioned at the edge position;
and determining the direction of the frame based on the direction of the formed connecting line, and scribing a detection frame surrounding the connecting line based on the direction of the frame.
In one possible implementation manner, there are a plurality of face areas, and the image sharpness detection apparatus 130 may further include a screening module, where the screening module may be configured to:
and screening effective face areas from the face areas based on a plurality of face key points contained in each face area.
In one possible implementation manner, each of the face keypoints has a corresponding keypoint confidence, and the screening module may specifically be configured to:
judging, for each face key point in the face area, whether the face key point is occluded based on the keypoint confidence corresponding to the face key point;
and judging whether the face area is occluded according to the proportion of face key points judged to be occluded in the face area, so as to determine whether the face area is an effective face area.
In one possible implementation, the screening module may be specifically configured to:
for each face area, obtaining the key point numbers of the face key points in the face area;
and detecting whether the key point number is a set number so as to judge whether the face area is a valid face area.
In one possible implementation, the detection and judgment module 134 may be configured to:
obtaining pixel intensity of each pixel point contained in the processed detection area;
and carrying out gradient calculation based on the pixel intensity of each pixel point, and combining gradient calculation results of a plurality of pixel points to obtain a gradient value of the detection area.
In one possible implementation, the image sharpness detection arrangement 130 may further include a training module, which may be configured to:
carrying out blurring processing on the face image judged to be clear;
taking the face image after the blurring process as a sample image, and taking the face image before the blurring process as a real label of the sample image;
training the constructed processing model based on the sample image and the corresponding real label to obtain the processing model meeting the preset requirement.
The processing flow of each module in the apparatus and the interaction flow between the modules may refer to the related descriptions in the above method embodiments, and are not described in detail here.
Further, an embodiment of the present application also provides a computer-readable storage medium storing machine-executable instructions which, when executed, implement the image sharpness detection method provided in the foregoing embodiments.
Specifically, the computer-readable storage medium may be a general-purpose storage medium, such as a removable disk or a hard disk, and the computer program on the computer-readable storage medium, when executed, can perform the above image sharpness detection method. For the processes involved when the executable instructions on the computer-readable storage medium are executed, reference may be made to the related descriptions of the method embodiments above, which are not repeated here.
In summary, the image sharpness detection method, apparatus, electronic device and readable storage medium provided by the embodiments of the present application first determine a face area in a face image to be detected, where the face area contains a plurality of face key points. A detection area is then divided based on the plurality of face key points contained in the face area, the direction information and the morphology information of a Gaussian kernel are set according to the information of the detection area, and Gaussian smoothing filtering is performed on the detection area based on the set Gaussian kernel. Finally, the gradient value of the processed detection area is calculated, whether the detection area is sharp is judged based on the gradient value, and whether the face image is sharp is further judged. In this scheme, setting the direction information and morphology information of the Gaussian kernel used in the Gaussian smoothing filtering according to the information of the detection area improves the robustness of the subsequent gradient calculation result, and thus the robustness of the image sharpness judgment.
The foregoing is merely a description of specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any changes or substitutions readily conceivable by those skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A method for detecting image sharpness, the method comprising:
determining, for a face image to be detected, a face area in the face image, wherein the face area contains a plurality of face key points;
dividing a detection area based on the plurality of face key points contained in the face area;
setting direction information and morphology information of a Gaussian kernel according to information of the detection area, and performing Gaussian smoothing filtering on the detection area based on the set Gaussian kernel;
and calculating the gradient value of the processed detection area, judging whether the detection area is sharp based on the gradient value, and further judging whether the face image is sharp.
2. The image sharpness detection method according to claim 1, wherein the step of setting the direction information and the morphology information of the Gaussian kernel according to the information of the detection area comprises:
obtaining the rotation angle and the aspect ratio of the detection area;
setting the rotation angle in the direction information of the Gaussian kernel according to the rotation angle of the detection area;
and setting the aspect ratio in the morphology information of the Gaussian kernel according to the aspect ratio of the detection area.
3. The image sharpness detection method according to claim 1, wherein the step of dividing the detection area based on the plurality of face key points contained in the face area comprises:
obtaining the position information of each face key point contained in the face area;
and determining a detection frame according to the obtained position information of the plurality of face key points, so as to mark out the detection area enclosed by the detection frame, wherein the plurality of face key points are located inside the detection frame, and the direction information of the detection frame is consistent with the overall direction information of the plurality of face key points.
4. The image sharpness detection method according to claim 3, wherein the step of determining the detection frame according to the obtained position information of the plurality of face key points comprises:
determining the face key points located at edge positions according to the obtained position information of each face key point;
connecting the face key points located at the edge positions;
and determining the direction of the frame based on the direction of the resulting connecting lines, and drawing a detection frame enclosing the connecting lines based on the direction of the frame.
5. The image sharpness detection method according to claim 1, wherein there are a plurality of face areas;
before the step of dividing the detection area based on the plurality of face key points contained in the face area, the method further comprises:
screening valid face areas from the plurality of face areas based on the plurality of face key points contained in each face area.
6. The method according to claim 5, wherein each face key point has a corresponding key point confidence;
the step of screening valid face areas from the plurality of face areas based on the plurality of face key points contained in each face area comprises:
for the face key points in each face area, judging whether each face key point is occluded based on its corresponding key point confidence;
and judging whether the face area is occluded according to the proportion of the face key points judged to be occluded in the face area, so as to determine whether the face area is a valid face area.
7. The method according to claim 5, wherein the step of screening valid face areas from the plurality of face areas based on the plurality of face key points contained in each face area comprises:
for each face area, obtaining the key point numbers of the face key points in the face area;
and detecting whether the key point numbers are the set numbers, so as to judge whether the face area is a valid face area.
8. The image sharpness detection method according to claim 1, wherein the step of calculating the gradient value of the processed detection area comprises:
obtaining the pixel intensity of each pixel point contained in the processed detection area;
and performing gradient calculation based on the pixel intensity of each pixel point, and combining the gradient calculation results of the plurality of pixel points to obtain the gradient value of the detection area.
9. The image sharpness detection method according to any one of claims 1 to 8, wherein the method further comprises:
performing blurring processing on a face image judged to be sharp;
taking the blurred face image as a sample image, and taking the face image before blurring as the real label of the sample image;
and training the constructed processing model based on the sample image and the corresponding real label to obtain a processing model meeting the preset requirement.
10. An image sharpness detection apparatus, characterized in that the apparatus comprises:
a determining module, configured to determine, for a face image to be detected, a face area in the face image, wherein the face area contains a plurality of face key points;
a dividing module, configured to divide a detection area based on the plurality of face key points contained in the face area;
a processing module, configured to set direction information and morphology information of a Gaussian kernel according to information of the detection area, and to perform Gaussian smoothing filtering on the detection area based on the set Gaussian kernel;
and a detection and judgment module, configured to calculate the gradient value of the processed detection area, judge whether the detection area is sharp based on the gradient value, and further judge whether the face image is sharp.
11. An electronic device, comprising one or more storage media and one or more processors in communication with the storage media, wherein the one or more storage media store machine-executable instructions executable by the processor, and when the electronic device runs, the processor executes the machine-executable instructions to perform the method steps of any one of claims 1 to 9.
12. A computer-readable storage medium storing machine-executable instructions which, when executed by a processor, implement the method steps of any one of claims 1 to 9.
CN202310295000.8A 2023-03-22 2023-03-22 Image definition detection method, device, electronic equipment and readable storage medium Pending CN116309488A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310295000.8A CN116309488A (en) 2023-03-22 2023-03-22 Image definition detection method, device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310295000.8A CN116309488A (en) 2023-03-22 2023-03-22 Image definition detection method, device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN116309488A true CN116309488A (en) 2023-06-23

Family

ID=86786830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310295000.8A Pending CN116309488A (en) 2023-03-22 2023-03-22 Image definition detection method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116309488A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117556404A (en) * 2023-12-11 2024-02-13 广西远方创客数据咨询有限公司 Business management system based on SaaS

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination