CN113792662B - Image detection method, device, electronic equipment and storage medium

Image detection method, device, electronic equipment and storage medium

Info

Publication number
CN113792662B
CN113792662B (application CN202111080907.XA; publication CN113792662A)
Authority
CN
China
Prior art keywords
detection
image
detected
preset condition
type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111080907.XA
Other languages
Chinese (zh)
Other versions
CN113792662A (en)
Inventor
曹金荣
周全
张弼坤
赖利锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202111080907.XA priority Critical patent/CN113792662B/en
Publication of CN113792662A publication Critical patent/CN113792662A/en
Priority to PCT/CN2022/108681 priority patent/WO2023040480A1/en
Application granted granted Critical
Publication of CN113792662B publication Critical patent/CN113792662B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30168 - Image quality inspection
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to the technical field of computer vision, and in particular to an image detection method and apparatus, an electronic device, and a storage medium. An image detection method includes: determining a target preset condition corresponding to a current detection level based on a pre-established correspondence between detection levels and preset conditions; and performing image detection on an image to be detected, and performing warehousing processing on the image to be detected in response to determining that the image to be detected meets the target preset condition. In the embodiments of the disclosure, different detection levels are established for different scene requirements, which improves the accuracy of the recognition service. The quality detection model can adapt to various scene requirements, has stronger robustness, and improves model deployment efficiency.

Description

Image detection method, device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of computer vision, and in particular to an image detection method and apparatus, an electronic device, and a storage medium.
Background
Image quality detection is one of the important research directions in the face recognition field: it standardizes the warehousing criteria for face images so that more accurate reference images are provided for subsequent face recognition services. However, quality detection models in the related art have poor deployment efficiency and robustness across scenes with different warehousing criteria.
Disclosure of Invention
To improve the deployment efficiency and robustness of detection models, embodiments of the present disclosure provide an image detection method and apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides an image detection method, including:
Determining a target preset condition corresponding to a current detection level based on a pre-established correspondence between detection levels and preset conditions;
And performing image detection on an image to be detected, and performing warehousing processing on the image to be detected in response to determining that the image to be detected meets the target preset condition.
In some embodiments, the process of pre-establishing the correspondence between the detection level and the preset condition includes:
Acquiring a plurality of detection types for detecting the image to be detected and detection conditions corresponding to each detection type;
And determining the detection type and the detection condition corresponding to each detection grade from the detection types to obtain the corresponding relation between the detection grade and the preset condition.
In some embodiments, the acquiring a plurality of detection types of the image to be detected and detection conditions corresponding to each detection type includes:
Acquiring at least two preset image detection standards, wherein each preset image detection standard comprises at least one detection type and detection conditions corresponding to the detection type;
And determining a plurality of detection types and detection conditions for detecting the image to be detected according to the detection types and the detection conditions included in the at least two preset image detection standards.
In some embodiments, the determining, according to the detection types and detection conditions included in the at least two preset image detection criteria, a plurality of detection types and detection conditions for detecting the image to be detected includes:
acquiring the detection type and the detection condition included in each preset image detection standard;
Performing fusion processing on the detection types of the at least two preset image detection standards to obtain the detection types for detecting the image to be detected;
And for any detection type for detecting the image to be detected, taking an intersection of the corresponding detection conditions in the at least two preset image detection standards to obtain the detection condition corresponding to that detection type.
In some embodiments, the preset image detection criteria include photo warehousing criteria for different geographic areas.
In some embodiments, the performing image detection on the image to be detected, and determining that the image to be detected meets the target preset condition, includes:
extracting features of the image to be detected to obtain image features corresponding to each detection type included in the target preset condition;
and determining that the image to be detected meets the target preset condition in response to determining, according to the image features, that detection of each detection type passes.
In some embodiments, the image to be detected is a face image; the detection type comprises at least one of the following:
image size; face ratio; face position; face angle; face brightness; face sharpness; mouth opening and closing degree; facial feature contours.
In some embodiments, the process of acquiring the image to be detected includes:
Receiving the image to be detected uploaded through a display interface;
The method further comprises the steps of:
And generating corresponding prompt information in response to the image to be detected not meeting the target preset condition, and outputting the prompt information through the display interface.
In some embodiments, the generating the corresponding prompt information in response to the image to be detected not meeting the target preset condition includes:
In response to the image to be detected not meeting the target preset condition, acquiring the detection types in the target preset condition for which detection fails;
And determining the prompt information according to the detection types for which detection fails.
In some embodiments, the performing binning processing on the image to be detected includes:
Determining user data according to the image to be detected and user information corresponding to the image to be detected;
And storing the user data in a database.
In a second aspect, embodiments of the present disclosure provide an image detection apparatus, including:
A determining module configured to determine a target preset condition corresponding to a current detection level based on a pre-established correspondence between detection levels and preset conditions;
A detection module configured to perform image detection on an image to be detected, and to perform warehousing processing on the image to be detected in response to determining that the image to be detected meets the target preset condition.
In some embodiments, the disclosed apparatus further comprises a relationship establishment module configured to:
Acquiring a plurality of detection types for detecting the image to be detected and detection conditions corresponding to each detection type;
And determining the detection type and the detection condition corresponding to each detection grade from the detection types to obtain the corresponding relation between the detection grade and the preset condition.
In some embodiments, the relationship establishment module is configured to:
Acquiring at least two preset image detection standards, wherein each preset image detection standard comprises at least one detection type and detection conditions corresponding to the detection type;
And determining a plurality of detection types and detection conditions for detecting the image to be detected according to the detection types and the detection conditions included in the at least two preset image detection standards.
In some embodiments, the relationship establishment module is configured to:
acquiring the detection type and the detection condition included in each preset image detection standard;
Performing fusion processing on the detection types of the at least two preset image detection standards to obtain the detection types for detecting the image to be detected;
And for any detection type for detecting the image to be detected, taking an intersection of the corresponding detection conditions in the at least two preset image detection standards to obtain the detection condition corresponding to that detection type.
In some embodiments, the preset image detection criteria include photo warehousing criteria for different geographic areas.
In some embodiments, the detection module is specifically configured to:
extracting features of the image to be detected to obtain image features corresponding to each detection type included in the target preset condition;
and determining that the image to be detected meets the target preset condition in response to determining, according to the image features, that detection of each detection type passes.
In some embodiments, the image to be detected is a face image; the detection type comprises at least one of the following:
image size; face ratio; face position; face angle; face brightness; face sharpness; mouth opening and closing degree; facial feature contours.
In some embodiments, the apparatus of embodiments of the present disclosure further comprises:
And the acquisition module is configured to receive the image to be detected, which is uploaded through the display interface.
In some embodiments, the detection module is further configured to:
And generating corresponding prompt information in response to the image to be detected not meeting the target preset condition, and outputting the prompt information through the display interface.
In some embodiments, the detection module is specifically configured to:
In response to the image to be detected not meeting the target preset condition, acquiring the detection types in the target preset condition for which detection fails;
And determining the prompt information according to the detection types for which detection fails.
In some embodiments, the detection module is specifically configured to:
Determining user data according to the image to be detected and user information corresponding to the image to be detected;
And storing the user data in a database.
In a third aspect, embodiments of the present disclosure provide an electronic device, including:
a processor; and
A memory storing computer instructions readable by the processor, the processor performing the method of any of the embodiments of the first aspect when the computer instructions are read.
In a fourth aspect, embodiments of the present disclosure provide a storage medium storing computer readable instructions for causing a computer to perform the method according to any one of the embodiments of the first aspect.
The image detection method of the embodiments of the disclosure includes determining a target preset condition corresponding to a current detection level based on a correspondence between detection levels and preset conditions, performing image detection on an image to be detected, and performing warehousing processing on the image to be detected in response to determining that the image to be detected meets the target preset condition. In the embodiments of the disclosure, different detection levels are established for different scene requirements, which improves the accuracy of the recognition service. During image quality detection, the quality detection model can adapt to various scene requirements and has stronger robustness. Moreover, the quality detection model can be applied to various scenes after a single training: when migrating between scenes, only the corresponding preset condition needs to be switched, no retraining is required, and model deployment efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the prior art, the drawings required for the detailed description or the prior art are briefly described below. It is apparent that the drawings in the following description show some embodiments of the present disclosure, and that a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a flow chart of an image detection method according to some embodiments of the present disclosure.
Fig. 2 is a flow chart of an image detection method in accordance with some embodiments of the present disclosure.
Fig. 3 is a flow chart of an image detection method in accordance with some embodiments of the present disclosure.
Fig. 4 is a flow chart of an image detection method in accordance with some embodiments of the present disclosure.
Fig. 5 is a flow chart of an image detection method in accordance with some embodiments of the present disclosure.
Fig. 6 is a schematic diagram of an image detection system according to some embodiments of the present disclosure.
Fig. 7 is a schematic diagram of a client display interface in accordance with some embodiments of the present disclosure.
Fig. 8 is a flow chart of an image detection method in accordance with some embodiments of the present disclosure.
Fig. 9 is a schematic diagram of a client display interface in accordance with some embodiments of the present disclosure.
Fig. 10 is a flow chart of an image detection method in accordance with some embodiments of the present disclosure.
Fig. 11 is a flow chart of an image detection method in accordance with some embodiments of the present disclosure.
Fig. 12 is a block diagram of an image detection apparatus according to some embodiments of the present disclosure.
Fig. 13 is a block diagram of an image detection apparatus according to some embodiments of the present disclosure.
Fig. 14 is a block diagram of an electronic device suitable for implementing the image detection method of the present disclosure.
Detailed Description
The following describes the embodiments of the present disclosure clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure, fall within the scope of this disclosure. In addition, the technical features in the different embodiments of the present disclosure described below may be combined with each other as long as they do not conflict with each other.
Image quality detection is one of the important research directions in the field of Computer Vision (CV). Taking face recognition as an example, the standard applied to users' warehoused images is strongly related to the accuracy of the subsequent recognition service, so performing face quality detection on warehoused images to standardize the warehousing criteria improves the accuracy of the face recognition service.
In actual face recognition deployments, different scenarios have different detection accuracy requirements. Taking a conventional security scenario as an example, data in the form of video streams is generally used for detection and recognition, with relatively low accuracy requirements but a need for high-speed processing. The quality detection requirements on warehoused images are therefore relatively low: a warehoused image only needs to meet the size requirement and contain a face.
Taking access control scenarios such as office buildings and residential buildings as examples, to keep strangers from entering and leaving, the accuracy requirements on the face recognition system are relatively high, and so are the quality detection requirements on warehoused images. For example, in addition to the basic size and face requirements above, a warehoused image needs to further constrain factors such as face angle, face sharpness, face ratio, face occlusion, and face integrity.
Taking a high-level membership scenario as an example, personnel access needs to be strictly controlled, so the accuracy requirements on the face recognition system are higher still, as are the quality detection requirements on warehoused images. For example, a warehoused image must meet all of the above requirements and further constrain comprehensive factors such as the opening and closing of the mouth and the contours of the facial features; in addition, the detection criteria for each detection type can be tightened.
In the related art, if the same detection standard is used to train a quality detection model for scene requirements at different detection levels, the model can hardly meet the different actual scene requirements. If, instead, quality detection models of different detection levels are trained separately for different scene requirements, the model must be retrained whenever it is migrated to a scene with a different detection level, which undoubtedly increases the training workload and reduces model deployment efficiency.
In view of the above drawbacks, embodiments of the present disclosure provide an image detection method and apparatus, an electronic device, and a storage medium, which are applicable to image quality detection under different scene requirements; they not only meet each scene requirement but also improve the efficiency and robustness of the quality detection model without retraining the detection model.
In a first aspect, embodiments of the present disclosure provide an image detection method, which is applicable to an electronic device. In the embodiment of the present disclosure, the type of the electronic device is not limited, and may be any device type suitable for implementation, such as a computer, a server, a mobile terminal, a wearable device, and the like.
As shown in fig. 1, in some embodiments, an image detection method of an example of the present disclosure includes:
S110, determining a target preset condition corresponding to the current detection level based on the corresponding relation between the pre-established detection level and the preset condition.
Specifically, in the embodiment of the present disclosure, it is necessary to establish in advance a correspondence relationship between the detection level and the preset condition.
A detection level is a grade of image quality detection defined according to different scene requirements. In one example, different detection levels may be set for the aforementioned conventional security scenario, access control scenario, and membership scenario, respectively.
A preset condition is the detection logic for detecting image quality at a given detection level; that is, each detection level corresponds to a preset condition, and the preset condition represents the conditions that image quality must satisfy at that detection level.
In some embodiments, the pre-established correspondence between detection levels and preset conditions may be as shown in Table 1 below:
Table 1
Detection level     Preset condition
Level I             Condition 1
Level II            Condition 2
Level III           Condition 3
The current detection level represents the detection level corresponding to the current implementation scene. In some embodiments, the current detection level may be preset by a background staff member, e.g., at the time of deployment of the image quality detection model, the staff member may set the current detection level based on the current scene demand.
Based on the correspondence shown in, for example, Table 1, after the current detection level is obtained, the preset condition corresponding to the current detection level, that is, the target preset condition, can be obtained according to the correspondence. For example, in one case the current detection level is Level I, so the target preset condition is determined to be Condition 1 according to Table 1.
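For readers who prefer a concrete illustration, the level-to-condition lookup of step S110 can be sketched as a simple mapping. In the following Python sketch, the level names, detection types, and thresholds are assumptions introduced for illustration only; the disclosure does not prescribe any particular data layout.

# Illustrative sketch of step S110: look up the target preset condition for the
# current detection level. Level names, detection types, and thresholds below
# are assumed for illustration.
PRESET_CONDITIONS = {
    "level_I": {"image_size": (2.0, 3.5), "face_ratio": 0.35},
    "level_II": {"image_size": (2.0, 3.5), "face_ratio": 0.50, "face_angle": 15.0},
    "level_III": {"image_size": (2.0, 3.5), "face_ratio": 0.50, "face_angle": 15.0,
                  "facial_feature_contours": True},
}

def get_target_preset_condition(current_level: str) -> dict:
    """Return the per-type detection conditions for the given detection level."""
    return PRESET_CONDITIONS[current_level]

# Example: at detection level "level_I" the target preset condition only
# constrains the image size (in cm) and the minimum face ratio.
target = get_target_preset_condition("level_I")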
The specific process of establishing the correspondence between the detection level and the preset condition is described in the following embodiments of the present disclosure, which will not be described in detail herein.
S120, performing image detection on the image to be detected, and performing warehousing processing on the image to be detected in response to determining that the image to be detected meets the target preset condition.
Specifically, after the target preset condition is determined, the image to be detected can be detected according to the target preset condition, so as to determine whether the image to be detected meets the target preset condition.
The image to be detected represents an image for which quality detection is required. In some embodiments, the image to be detected may be uploaded actively by the user, or may be acquired passively by the image acquisition device, or may be acquired through a network, which is not limited in this disclosure.
It can be understood that the target preset condition represents the judgment condition of whether an image meets the warehousing requirements at the current detection level. Therefore, image detection techniques can be used to detect the image to be detected and determine whether it meets the target preset condition. If it does not, the image to be detected does not meet the warehousing requirements at the current detection level. If it does, the image to be detected meets the warehousing requirements at the current detection level, and warehousing processing can be performed on it.
After the warehousing processing, the image to be detected can be stored in a database and used as a reference image in subsequent services. Taking face recognition as an example, after an image to be detected of user A is warehoused, it can be used as the reference image of user A, so that in a subsequent face recognition scenario, an acquired face image is compared with the reference image to perform face recognition for user A. This can be understood by those skilled in the art and is not repeated in this disclosure.
As can be seen from the foregoing, in the embodiments of the present disclosure, different detection levels are established for different scene requirements, which improves the accuracy of the recognition service. During image quality detection, the quality detection model can adapt to various scene requirements and has stronger robustness. Moreover, the quality detection model can be applied to various scenes after a single training: when migrating between scenes, only the corresponding preset condition needs to be switched, no retraining is required, and model deployment efficiency is improved.
As shown in fig. 2, in some embodiments, in the image detection method of the examples of the present disclosure, a process of establishing a correspondence between a detection level and a preset condition includes:
s210, acquiring a plurality of detection types for detecting the image to be detected and detection conditions corresponding to each detection type.
S220, determining the detection type and the detection condition corresponding to each detection grade from the detection types, and obtaining the corresponding relation between the detection grade and the preset condition.
Specifically, the detection types represent the detection dimensions along which an image to be detected is detected. For example, in some embodiments, taking the face recognition scenario as an example, the detection dimensions may include: whether the image contains a face, the face ratio, face occlusion, the face rotation angle, the facial features, and so on; a plurality of corresponding detection types can be constructed by combining these dimensions.
Each detection type has a corresponding detection condition, and the detection condition represents the judgment condition of whether that detection type passes detection. In one example, taking the detection type "face ratio" as an example, the corresponding detection condition may be that the face ratio is not lower than 35%: when the face ratio in the image to be detected is lower than 35%, face ratio detection fails; otherwise it passes. In another example, taking the detection type "face angle" as an example, the corresponding detection condition may be that the face deflection angle does not exceed 15°: when the face deflection angle in the image to be detected exceeds 15°, face angle detection fails; otherwise it passes.
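As a hedged illustration of how such per-type judgments could be expressed, the following Python sketch turns the two example conditions above into predicate functions; the function names and default thresholds are assumptions.

# Illustrative predicates for the two example detection conditions above; the
# function names and default thresholds are assumptions.
def check_face_ratio(face_ratio: float, min_ratio: float = 0.35) -> bool:
    # Face ratio detection fails when the face occupies less than min_ratio
    # of the whole image.
    return face_ratio >= min_ratio

def check_face_angle(deflection_deg: float, max_angle: float = 15.0) -> bool:
    # Face angle detection fails when the face deflection angle exceeds
    # max_angle degrees.
    return abs(deflection_deg) <= max_angle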
In some embodiments, the process of obtaining the detection type and detection conditions may be described with reference to the embodiment of fig. 3, and is described below in conjunction with fig. 3.
As shown in fig. 3, in some embodiments, in the image detection method of the example of the present disclosure, a process of acquiring a plurality of detection types and detection conditions includes:
s221, acquiring at least two preset image detection standards, wherein each preset image detection standard comprises at least one detection type and detection conditions corresponding to the detection type.
S222, determining a plurality of detection types and detection conditions for detecting the image to be detected according to the detection types and the detection conditions included in at least two preset image detection standards.
Specifically, a preset image detection standard indicates a pre-set standard for image quality detection. It can be understood that a preset image detection standard may be a detection standard for image quality detection in a particular field, obtained from prior knowledge or through channels such as the network. In some embodiments, the preset image detection standards may be photo warehousing standards of different geographic areas.
Taking the face recognition scenario as an example, different countries have multi-dimensional requirements for passport photo warehousing, so the detection standard for passport photos can be used as one of the preset image detection standards. In yet another example, a social security bureau has multi-dimensional requirements for warehousing citizens' social security photos, so the detection standard for social security photos can be used as one of the preset image detection standards.
Of course, the preset image detection standard in the present disclosure may be any other standard for image normalization suitable for implementation, and is not limited to the passport photo and social security photo warehousing standard. Those skilled in the art will appreciate that this disclosure is not repeated.
After obtaining a plurality of preset image detection standards, determining a plurality of detection types and detection conditions for detecting the image to be detected according to the detection types and the detection conditions included in the preset image detection standards.
As shown in fig. 4, in some embodiments, the image detection method of the present disclosure includes a process of determining a plurality of detection types and detection conditions for detecting an image to be detected, including:
s410, acquiring detection types and detection conditions included in each preset image detection standard.
S420, fusing the detection types of at least two preset image detection standards to obtain the detection type for detecting the image to be detected.
In some embodiments, in order to give the quality detection model good accuracy for faces from different countries, the passport photo warehousing standards of different geographic areas can be used as the preset image detection standards, and a plurality of detection types for the image to be detected can be established based on the commonalities and differences of the different detection conditions.
For example, the warehousing standard for country A's passport photos requires: the photo size is 5 cm x 5 cm, the face occupies no less than 50% of the whole photo, the face is positioned in the middle of the photo, and the face angle must not be tilted. The warehousing standard for country B's passport photos requires: the photo size is 3 cm x 4 cm, the facial features must be exposed and unoccluded, the eyes must not squint, the face occupies no less than 35% of the whole photo, and the face angle must not be tilted.
That is, the detection dimensions of country A's passport photos include: photo size, face ratio, face position, and face angle. The detection dimensions of country B's passport photos include: photo size, facial feature contours, gaze direction, face ratio, and face angle.
In some embodiments, the detection dimensions of country A and country B can be combined by taking the union over the detection dimensions, and the constructed plurality of detection types may include: photo size, face ratio, face position, face angle, facial feature contours, and gaze direction.
It will be appreciated that the foregoing is merely exemplary of embodiments of the present disclosure, and in other embodiments, the detection types may also include other types, such as face brightness, face sharpness, mouth opening and closing degree, etc., and those skilled in the art may select the detection types according to specific scene requirements, which need not be exhaustive.
S430, for any detection type for detecting the image to be detected, taking intersection of corresponding detection conditions in at least two preset image detection standards according to the detection type, and obtaining the detection conditions corresponding to the detection type.
It can be understood that each detection type in a preset image detection standard has a corresponding detection condition. In one example, taking the detection type "image size" as an example, the corresponding detection condition may be "the image size does not exceed 5 cm x 5 cm". In another example, taking the detection type "face ratio" as an example, the corresponding detection condition may be "face range / image size is not less than 35%". In yet another example, taking the detection type "face angle" as an example, the corresponding detection condition may be "the yaw angle / heading angle / pitch angle of the face does not exceed 15°".
That is, each detection type has a corresponding detection condition, and the detection condition represents the judgment logic of whether that detection type passes. For example, for the detection type "image size" with the detection condition "the image size does not exceed 5 cm x 5 cm", if the size of the image to be detected exceeds 5 cm x 5 cm, the image to be detected fails image size detection.
In some embodiments, multiple preset image detection standards may share the same detection type, but the detection conditions for that type may differ between standards. For example, for the detection type "image size", the corresponding detection condition in detection standard 1 is "the image size does not exceed 3.5 cm x 3.5 cm", while the corresponding detection condition in detection standard 2 is "the image size is not less than 2 cm x 2 cm and not more than 5 cm x 5 cm". The detection conditions of the standards can therefore be combined, for example by taking their intersection, so that the detection condition corresponding to image size is determined as "the image size is not less than 2 cm x 2 cm and not more than 3.5 cm x 3.5 cm" and satisfies the different detection standards simultaneously. This can be understood and implemented by those skilled in the art and is not described in detail in this disclosure.
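A minimal Python sketch of this fusion step, assuming each standard is represented as a mapping from detection type to a (minimum, maximum) range, might look as follows; the representation and function name are assumptions for illustration.

# Illustrative sketch of steps S420 and S430: take the union of detection types
# over all preset image detection standards, then intersect the value ranges of
# the detection conditions for each shared type.
def merge_standards(standards):
    """standards: list of {detection_type: (min_value, max_value)} mappings."""
    merged = {}
    for standard in standards:
        for det_type, (lo, hi) in standard.items():
            if det_type not in merged:
                merged[det_type] = (lo, hi)  # union of detection types
            else:
                cur_lo, cur_hi = merged[det_type]
                # intersection of the detection conditions for the same type
                merged[det_type] = (max(cur_lo, lo), min(cur_hi, hi))
    return merged

# Example matching the image-size discussion above (values in cm):
standard_1 = {"image_size": (0.0, 3.5)}
standard_2 = {"image_size": (2.0, 5.0)}
print(merge_standards([standard_1, standard_2]))  # {'image_size': (2.0, 3.5)}

Taking the union of detection types broadens coverage, while intersecting the per-type ranges keeps an image that passes the merged condition acceptable under every original standard.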
After constructing the plurality of detection types and detection conditions for the image to be detected, corresponding detection types and preset conditions can be set for the different detection levels. It can be appreciated that different detection levels correspond to different scene requirements, so the detection types included at different detection levels also differ. Since a preset condition consists of the detection conditions of the detection types included at a detection level, the preset conditions corresponding to the respective detection levels differ from one another.
For example, in some embodiments, the correspondence between detection levels and preset conditions may be as shown in Table 2.
As can be seen from Table 2, the detection types included at different detection levels differ, so the preset conditions used in the subsequent quality detection of the image to be detected also differ.
In some embodiments, taking a quality detection scene of a face image as an example, the detection type may include at least one of:
1) Image size. Indicates the size of the image, for example 2.5 cm x 3.5 cm.
2) Face ratio. Indicates the ratio of the face range in the image to the whole image, such as 35%, 50%, or 80%.
3) Face position. Indicates where the face region is located in the image, e.g., centered, toward the left, or toward the right.
4) Face angle. Indicates the deflection of the face in the image, including the pitch angle, the yaw angle, and the heading angle.
5) Face brightness. Indicates the luminance of the face in the image.
6) Face sharpness. Indicates the sharpness of the face in the image.
7) Mouth opening and closing degree. Indicates how open or closed the mouth of the face in the image is.
8) Facial feature contours. Indicates whether the facial features in the image are occluded and whether their key points can be captured.
Based on the above detection types, the established correspondence between detection levels and preset conditions may be as shown in Table 3:
Table 3
It should be noted that the above detection types and correspondences are merely exemplary embodiments of the present disclosure. In other embodiments, the detection types may include other types, and the correspondence between detection levels and detection types is not limited to the above examples; those skilled in the art may make selections according to specific scene requirements, which need not be exhaustively listed in this disclosure.
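Since Table 3 is not reproduced in this text, the following Python sketch only illustrates the general shape such a level-to-detection-type correspondence could take; the assignment of detection types to levels is an assumption for illustration.

# Hypothetical shape of the Table 3 correspondence; which detection types
# belong to which level is assumed here, since the table itself is not
# reproduced in this text.
LEVEL_DETECTION_TYPES = {
    "level_I": ["image_size", "face_ratio"],
    "level_II": ["image_size", "face_ratio", "face_position", "face_angle",
                 "face_sharpness"],
    "level_III": ["image_size", "face_ratio", "face_position", "face_angle",
                  "face_brightness", "face_sharpness", "mouth_openness",
                  "facial_feature_contours"],
}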
As can be seen from the foregoing, in the embodiments of the present disclosure, different detection levels are established for different scene requirements, which improves the accuracy of the recognition service. During image quality detection, the quality detection model can adapt to various scene requirements and has stronger robustness. Moreover, the quality detection model can be applied to various scenes after a single training: when migrating between scenes, only the corresponding detection level needs to be switched, no retraining is required, and model deployment efficiency is improved. In addition, since the correspondence is established based on the detection conditions of passport photos from different geographic areas, the detection model has better robustness when used across different areas.
Quality detection can then be performed on the image to be detected at the current detection level through the correspondence shown in Table 3.
As shown in fig. 5, in some embodiments, in the image detection method of the examples of the present disclosure, the process of determining whether the image to be detected meets the target preset condition includes:
s510, extracting features of the image to be detected to obtain image features corresponding to each detection type included in the target preset condition.
S520, determining that the image to be detected meets the target preset condition in response to determining that each detection type passes detection according to the image characteristics.
Specifically, the detection types and the preset condition corresponding to the current detection level may be determined based on the foregoing correspondence. For example, referring to Table 3, feature extraction is performed on the image to be detected according to each detection type corresponding to the current detection level, so as to obtain the image feature corresponding to each detection type.
In one example, the detection type takes an image size as an example, and the size of the image to be detected is obtained by processing the image to be detected, where the size is an image feature corresponding to the image size.
In one example, the detection type takes a face angle as an example, and the face range in the image to be detected is subjected to feature extraction based on a face recognition technology, so that a face deflection angle is determined, and the deflection angle is the image feature corresponding to the face angle.
In one example, the detection type takes a face duty ratio as an example, and feature extraction is performed on a face range in an image to be detected based on a face recognition technology, so as to determine the size of the face duty ratio, wherein the size of the face duty ratio is the image feature corresponding to the face duty ratio.
In other words, the image features refer to relevant parameters corresponding to the detection type, which can be understood and fully implemented by those skilled in the art, and are not enumerated in detail in this disclosure.
For each detection type, after obtaining the corresponding image feature, whether the detection type passes detection can be determined according to the image feature and the detection condition.
In one example, taking the detection type "image size" as an example, the detection condition is "the image size does not exceed 2.5 cm x 3.5 cm". Feature extraction is performed on the image to be detected; if the resulting image feature is "the image size is 3.5 cm x 3.5 cm", it is determined that the image to be detected fails image size detection. If the image feature is "the image size is 2 cm x 2 cm", it is determined that the image to be detected passes image size detection.
In another example, taking the detection type "face ratio" as an example, the detection condition is "the face ratio is not less than 50%". Feature extraction is performed on the image to be detected; if the resulting image feature is "the face ratio is 38%", it is determined that the image to be detected fails face ratio detection. If the image feature is "the face ratio is 65%", it is determined that the image to be detected passes face ratio detection.
Through the process of the above examples, each detection type corresponding to the current detection level can be checked in turn. If every detection type corresponding to the current detection level passes detection, it is determined that the image to be detected meets the target preset condition, and the image to be detected can be warehoused. Otherwise, if any detection type fails, the image to be detected does not meet the target preset condition and cannot be warehoused.
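A minimal Python sketch of this per-type checking loop is given below, assuming per-type feature extractors and checkers (such as the predicate sketches earlier) are supplied as mappings; these interfaces are assumptions, not APIs defined by the disclosure.

# Illustrative sketch of steps S510 and S520: extract one image feature per
# detection type in the target preset condition and require every per-type
# check to pass. extractors and checkers are assumed mappings from detection
# type to callables, e.g. checkers = {"face_ratio": check_face_ratio, ...}.
def meets_target_condition(image, target_condition, extractors, checkers):
    failed_types = []
    for det_type, condition in target_condition.items():
        feature = extractors[det_type](image)  # image feature for this type
        if not checkers[det_type](feature, condition):
            failed_types.append(det_type)  # this detection type failed
    return len(failed_types) == 0, failed_types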
As can be seen from the foregoing, in the embodiments of the present disclosure, different detection levels are established for different scene requirements, which improves the accuracy of the recognition service. During image quality detection, the quality detection model can adapt to various scene requirements and has stronger robustness. Moreover, the quality detection model can be applied to various scenes after a single training: when migrating between scenes, only the corresponding detection level needs to be switched, no retraining is required, and model deployment efficiency is improved.
Fig. 6 illustrates a schematic diagram of an image detection system in some embodiments of the present disclosure. As shown in fig. 6, the system includes a server 100 and clients 200, the quality detection model of the disclosure may be deployed in the server 100, and the server 100 may establish communication connection with each client 200 by way of wireless or wired communication.
In some implementations, the image to be detected in examples of the present disclosure may be an image uploaded by the client 200.
Specifically, the client 200 may be a computer or a mobile terminal with a display screen on which a display interface can be output. As shown in fig. 7, a user of the client 200 may add an image to be detected through an "upload image" button on the display interface; after the user has finished adding it, the client 200 may transmit the added image to be detected to the server 100.
In some embodiments, after the image to be detected is acquired, the server 100 may perform image detection on the image to be detected by using a face detection model, and extract a valid face area in the image to be detected as a base image for subsequent feature extraction.
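As a hedged sketch of this pre-processing step, assuming the image is a NumPy-style array and face_detector is some callable returning face bounding boxes (neither is an API named in the disclosure):

# Hedged sketch of the pre-processing step: detect the face and crop the valid
# face area as the base image for later feature extraction. The image is
# assumed to be a NumPy-style H x W x C array; face_detector is an assumed
# callable returning bounding boxes, not an API named in the disclosure.
def extract_base_image(image, face_detector):
    boxes = face_detector(image)  # e.g. [(x, y, w, h), ...]
    if not boxes:
        return None  # no valid face area found
    x, y, w, h = boxes[0]  # take the primary face region
    return image[y:y + h, x:x + w]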
In some embodiments, the image detection method of the embodiments of the present disclosure further includes:
and generating corresponding prompt information in response to the image to be detected not meeting the target preset condition, and outputting the prompt information through a display interface.
Specifically, in combination with the foregoing, when a detection type of the image to be detected fails detection, corresponding prompt information may be generated and output through the display interface. In this way, the user can be informed of which detection type failed and can adjust the image to be detected accordingly, so as to upload an image to be detected that meets the target preset condition. This is described in detail below with reference to fig. 8.
As shown in fig. 8, in some embodiments, in the image detection method of the examples of the present disclosure, a process of generating the hint information includes:
S810, in response to the image to be detected not meeting the target preset condition, acquiring the detection types in the target preset condition for which detection fails.
S820, determining the prompt information according to the detection types for which detection fails.
Specifically, in combination with the foregoing embodiment of fig. 5, when the image to be detected does not satisfy the target preset condition, at least one detection type has failed detection. All failed detection types can therefore be obtained, and the prompt information is then determined according to those failed detection types.
In one example, suppose the image size of the image to be detected is too large, so that image size detection fails, and the face ratio is relatively small, so that face ratio detection fails. According to these two failed detection types, the prompt information is determined as: "The image is oversized; please upload an image no larger than 2.5 cm x 3.5 cm. The face ratio is too small; please upload an image with a face ratio of not less than 65%.", as shown in fig. 9.
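A minimal Python sketch of this prompt construction, with the message wording and template names assumed for illustration:

# Illustrative sketch of steps S810 and S820: map each failed detection type
# to a human-readable hint; the message wording is assumed.
PROMPT_TEMPLATES = {
    "image_size": "The image is oversized; please upload an image no larger than 2.5 cm x 3.5 cm.",
    "face_ratio": "The face ratio is too small; please upload an image with a face ratio of at least 65%.",
}

def build_prompt(failed_types):
    return " ".join(PROMPT_TEMPLATES.get(t, "Detection of %s failed." % t)
                    for t in failed_types)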
As can be seen from the foregoing, in the embodiments of the present disclosure, when the image to be detected does not meet the target preset condition, the display interface outputs prompt information, so that the user is informed of which detection type failed and can adjust the image to be detected accordingly.
In some embodiments, after determining that the image to be detected meets the target preset condition, the image to be detected may be subjected to warehouse entry processing.
As shown in fig. 10, in some embodiments, in the image detection method of the examples of the present disclosure, a process of warehousing an image to be detected includes:
s1010, determining user data according to the image to be detected and user information corresponding to the image to be detected.
S1020, storing the user data in a database.
Specifically, when the image to be detected meets the target preset condition, it meets the warehousing requirements, and the user information corresponding to the image to be detected can be acquired. The user information indicates the identity of the user who uploaded the image to be detected, such as a user ID (identity document number).
After the user information is obtained, the user information and the image to be detected can be packaged as the user data of that user, and the user data is stored in a database to serve as reference image data for downstream recognition tasks.
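A minimal Python sketch of this warehousing step, using an in-memory dictionary as a stand-in for the database; the record layout is an assumption for illustration.

# Illustrative sketch of steps S1010 and S1020: bundle the user identifier
# with the accepted image and store the record; an in-memory dict stands in
# for the database here.
DATABASE = {}

def warehouse_image(user_id, image):
    user_data = {"user_id": user_id, "reference_image": image}
    DATABASE[user_id] = user_data  # later used as the reference image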
As shown in fig. 11, in some embodiments, an image detection method of an example of the present disclosure includes:
S1110, determining a plurality of detection types and detection conditions for the image to be detected according to the passport photo detection standards of at least two different geographic areas.
For example, the plurality of detection types of the image to be detected and the corresponding detection conditions may be determined based on the commonalities and differences of the detection conditions in the passport photo detection standards of multiple countries. This can be understood and implemented by those skilled in the art and is not further described here.
S1120, according to at least one detection type and detection condition corresponding to each detection level, establishing a corresponding relation between the detection level and a preset condition.
Specifically, those skilled in the art may refer to the embodiment shown in fig. 2, and this will not be described in detail.
S1130, determining a target preset condition corresponding to the current detection level based on the corresponding relation between the detection level and the preset condition.
Specifically, the current detection level may be set by a staff on the server side based on specific scene requirements. After the detection level is set, the corresponding preset condition can be determined as the target preset condition according to the set detection level.
S1140, receiving an image to be detected sent by the client, and extracting features of the image to be detected to obtain image features corresponding to each detection type included in the target preset condition.
Specifically, as shown in fig. 6, the user may upload the image to be detected through the client 200, so that the server 100 may receive the image to be detected transmitted by the client. The server performs feature extraction on the image to be detected based on the embodiment of fig. 5, so as to obtain the image feature corresponding to each detection type in the target preset condition.
S1150, determining that the image to be detected meets the target preset condition in response to determining, according to the image features, that detection of each detection type passes.
Specifically, whether each detection type passes or not may be determined based on the foregoing embodiment of fig. 5, and if each detection type passes, it is determined that the image to be detected meets the target preset condition, step S1160 is performed.
S1160, carrying out warehousing processing on the image to be detected.
Specifically, the image to be detected may be subjected to warehouse entry processing as shown in the embodiment of fig. 10, which will not be described again.
S1170, determining that the image to be detected does not meet the target preset condition in response to determining that at least one detection type is not passed according to the image characteristics.
Specifically, whether each detection type passes or not may be determined based on the foregoing embodiment of fig. 5, and if there is a detection failure of a certain detection type, it is determined that the image to be detected does not satisfy the target preset condition, step S1180 is performed.
S1180, generating and outputting prompt information.
Specifically, the prompt information may be generated based on the foregoing embodiment of fig. 8, and output on the display interface of the client 200, so as to prompt the user to modify the image to be detected and upload again.
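Combining the sketches above, the server-side handling of steps S1130 to S1180 could be outlined as follows; all names and interfaces remain illustrative assumptions rather than the actual implementation.

# Illustrative end-to-end sketch of steps S1130 to S1180 on the server side,
# reusing the helper sketches above; all names are assumptions.
def handle_upload(user_id, image, current_level, extractors, checkers):
    target_condition = get_target_preset_condition(current_level)      # S1130
    passed, failed_types = meets_target_condition(
        image, target_condition, extractors, checkers)                 # S1140/S1150
    if passed:
        warehouse_image(user_id, image)                                # S1160
        return {"status": "accepted"}
    return {"status": "rejected",
            "prompt": build_prompt(failed_types)}                      # S1170/S1180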
As can be seen from the foregoing, in the embodiments of the present disclosure, different detection levels are established for different scene requirements, which improves the accuracy of the recognition service. During image quality detection, the quality detection model can adapt to various scene requirements and has stronger robustness. Moreover, the quality detection model can be applied to various scenes after a single training: when migrating between scenes, only the corresponding detection level needs to be switched, no retraining is required, and model deployment efficiency is improved. In addition, since the correspondence is established based on the detection conditions of passport photos from different geographic areas, the detection model has better robustness when used across different areas. Furthermore, when the image to be detected does not meet the target preset condition, the display interface outputs prompt information, so that the user is informed of which detection type failed and can adjust the image to be detected accordingly.
In a second aspect, embodiments of the present disclosure provide an image detection apparatus, which is applicable to an electronic device. In the embodiment of the present disclosure, the type of the electronic device is not limited, and may be any device type suitable for implementation, such as a computer, a server, a mobile terminal, a wearable device, and the like.
As shown in fig. 12, in some embodiments, an image detection apparatus of an example of the present disclosure includes:
a determining module 10 configured to determine a target preset condition corresponding to a current detection level based on a pre-established correspondence between detection levels and preset conditions;
a detection module 20 configured to perform image detection on an image to be detected, and to perform warehousing processing on the image to be detected in response to determining that the image to be detected meets the target preset condition.
As can be seen from the foregoing, in the embodiments of the present disclosure, different detection levels are established for different scene requirements, which improves the accuracy of the recognition service. During image quality detection, the quality detection model can adapt to various scene requirements and has stronger robustness. Moreover, the quality detection model can be applied to various scenes after a single training: when migrating between scenes, only the corresponding preset condition needs to be switched, no retraining is required, and model deployment efficiency is improved.
As shown in fig. 13, in some embodiments, the disclosed apparatus further includes a relationship establishment module 40, the relationship establishment module 40 configured to:
Acquiring a plurality of detection types for detecting the image to be detected and detection conditions corresponding to each detection type;
And determining the detection type and the detection condition corresponding to each detection grade from the detection types to obtain the corresponding relation between the detection grade and the preset condition.
In some embodiments, the relationship establishment module 40 is configured to:
Acquiring at least two preset image detection standards, wherein each preset image detection standard comprises at least one detection type and detection conditions corresponding to the detection type;
And determining a plurality of detection types and detection conditions for detecting the image to be detected according to the detection types and the detection conditions included in the at least two preset image detection standards.
In some embodiments, the relationship establishment module 40 is configured to:
acquire the detection types and detection conditions included in each preset image detection standard;
perform fusion processing on the detection types of the at least two preset image detection standards to obtain the detection types for detecting the image to be detected; and
for any detection type used for detecting the image to be detected, take the intersection of the corresponding detection conditions in the at least two preset image detection standards to obtain the detection condition corresponding to that detection type.
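As an illustration of this fusion step, the sketch below assumes each preset image detection standard is represented as a mapping from detection type to an allowed numeric range; the representation and the example region standards are assumptions made for illustration, not content of the disclosure.

```python
# Sketch of fusing two preset image detection standards: the detection types
# are the union of the types in both standards, and for a type present in
# both standards the detection condition is the intersection of its ranges.
# The example standards below are hypothetical, not values from the patent.

def fuse_standards(standard_a: dict, standard_b: dict) -> dict:
    fused = {}
    for detection_type in standard_a.keys() | standard_b.keys():   # fusion of detection types
        ranges = [s[detection_type] for s in (standard_a, standard_b) if detection_type in s]
        low = max(r[0] for r in ranges)    # intersection: tightest lower bound
        high = min(r[1] for r in ranges)   # intersection: tightest upper bound
        fused[detection_type] = (low, high)
    return fused

# e.g. photo warehousing standards of two different geographic areas (made up)
region_a = {"face_ratio": (0.50, 0.75), "face_brightness": (80, 200)}
region_b = {"face_ratio": (0.55, 0.80), "face_sharpness": (0.4, 1.0)}
print(fuse_standards(region_a, region_b))
# -> {'face_ratio': (0.55, 0.75), 'face_brightness': (80, 200),
#     'face_sharpness': (0.4, 1.0)}  (key order may vary)
```

An image satisfying the fused condition thus satisfies every contributing standard, which is what allows a single model deployment to serve multiple regions.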
In some embodiments, the preset image detection standards include photo warehousing standards for different geographic areas.
In some embodiments, the detection module 20 is specifically configured to:
extract features of the image to be detected to obtain the image features corresponding to each detection type included in the target preset condition; and
determine that the image to be detected meets the target preset condition in response to determining, according to the image features, that every detection type passes detection.
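To illustrate this per-type check, here is a toy sketch that computes two of the image features with plain NumPy statistics. In the disclosure the features would come from the trained quality detection model; the brightness and sharpness proxies below, and the threshold values, are stand-ins chosen only for illustration.

```python
import numpy as np

# Toy feature extraction for two detection types, using simple statistics
# as stand-ins for the outputs of a trained quality detection model.

def extract_features(gray_image: np.ndarray, detection_types) -> dict:
    features = {}
    if "face_brightness" in detection_types:
        features["face_brightness"] = float(gray_image.mean())          # average intensity
    if "face_sharpness" in detection_types:
        gy, gx = np.gradient(gray_image.astype(float))
        features["face_sharpness"] = float((gx ** 2 + gy ** 2).mean())  # gradient energy
    return features

def meets_condition(gray_image: np.ndarray, target_condition: dict) -> bool:
    features = extract_features(gray_image, target_condition.keys())
    # The image meets the target preset condition only if every detection type passes.
    return all(lo <= features[t] <= hi for t, (lo, hi) in target_condition.items())

# Usage with a random stand-in "image" and hypothetical thresholds.
img = np.random.randint(0, 256, size=(112, 112)).astype(np.uint8)
condition = {"face_brightness": (60.0, 200.0), "face_sharpness": (10.0, 1e6)}
print(meets_condition(img, condition))
```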
In some embodiments, the image to be detected is a face image, and the detection type includes at least one of the following:
image size; face proportion; face position; face angle; face brightness; face sharpness; degree of mouth opening; facial feature contours.
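As an illustration of how a target preset condition covering these detection types might be represented, the following configuration is a purely hypothetical sketch; the field names, units and threshold values are not specified by the disclosure.

```python
# One possible preset condition covering the detection types listed above.
# Every threshold below is a made-up placeholder, not a value from the patent.
FACE_PHOTO_CONDITION = {
    "image_size":      {"min_width": 354, "min_height": 472},   # pixels
    "face_ratio":      (0.50, 0.75),    # face area / image area
    "face_position":   {"max_center_offset": 0.10},             # relative to image center
    "face_angle":      {"yaw": (-10, 10), "pitch": (-10, 10), "roll": (-5, 5)},  # degrees
    "face_brightness": (80, 200),       # mean gray level
    "face_sharpness":  (0.35, 1.00),    # normalized sharpness score
    "mouth_openness":  (0.00, 0.20),    # normalized degree of mouth opening
    "facial_contours": {"min_visibility": 0.95},                # visible landmark fraction
}
```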
As shown in fig. 13, in some embodiments, the apparatus of an embodiment of the present disclosure further includes:
an acquisition module 30 configured to receive the image to be detected uploaded through a display interface.
In some embodiments, the detection module 20 is further configured to:
generate corresponding prompt information in response to the image to be detected not meeting the target preset condition, and output the prompt information through the display interface.
In some embodiments, the detection module 20 is specifically configured to:
in response to the image to be detected not meeting the target preset condition, acquire the detection types in the target preset condition that failed detection; and
determine the prompt information according to the detection types that failed detection.
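A minimal sketch of turning the failed detection types into prompt information, assuming a simple mapping from detection type to message text; the template wording and data structures are illustrative only.

```python
# Sketch of generating prompt information from the detection types that failed.
# The message texts and the check_results structure are illustrative assumptions.
PROMPT_TEMPLATES = {
    "face_brightness": "The face is too dark or too bright; please retake the photo with even lighting.",
    "face_angle":      "Please face the camera directly and keep your head level.",
    "mouth_openness":  "Please keep your mouth closed.",
}

def build_prompt(check_results: dict) -> str:
    """check_results maps each detection type to True (passed) or False (failed)."""
    failed = [t for t, passed in check_results.items() if not passed]
    if not failed:
        return ""
    lines = [PROMPT_TEMPLATES.get(t, f"Check of '{t}' failed; please adjust the photo.") for t in failed]
    return "\n".join(lines)   # shown to the user on the display interface

print(build_prompt({"face_brightness": False, "face_angle": True, "mouth_openness": False}))
```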
In some embodiments, the detection module 20 is specifically configured to:
determine user data according to the image to be detected and the user information corresponding to the image to be detected; and
store the user data in a database.
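A minimal sketch of the warehousing step, using Python's standard sqlite3 module as a stand-in database; the table layout, column names and the idea of keying records by a user ID are assumptions, since the disclosure does not prescribe a storage schema.

```python
import sqlite3

# Warehouse the image together with its user information as one user-data record.
# Table name and columns are hypothetical; any database could play this role.
def warehouse(image_bytes: bytes, user_info: dict, db_path: str = "photos.db") -> None:
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS user_data ("
        "user_id TEXT PRIMARY KEY, name TEXT, photo BLOB)"
    )
    conn.execute(
        "INSERT OR REPLACE INTO user_data (user_id, name, photo) VALUES (?, ?, ?)",
        (user_info["user_id"], user_info["name"], image_bytes),
    )
    conn.commit()
    conn.close()

# Usage: called only after the image has passed the target preset condition.
warehouse(b"\x89PNG...", {"user_id": "u001", "name": "Alice"})
```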
As can be seen from the foregoing, in the embodiments of the present disclosure, different detection levels are established based on different scene requirements, thereby improving the accuracy of the recognition service. During image quality detection, the quality detection model is applicable to a variety of scene requirements and has stronger robustness. Moreover, the quality detection model can be applied to various scenes after only one training: when migrating to a new scene, only the corresponding detection level needs to be switched, without retraining the model, which improves model deployment efficiency. In addition, because the correspondence is established based on the detection conditions for passport photos in different geographic regions, the detection model is more robust when used across regions. When the image to be detected does not meet the target preset condition, prompt information is output on the display interface, so that the user is informed of which detection type failed and can adjust the image to be detected in a targeted manner.
In a third aspect, embodiments of the present disclosure provide an electronic device, including:
a processor; and
a memory storing computer instructions readable by the processor, wherein the processor performs the method of any embodiment of the first aspect when the computer instructions are read.
In a fourth aspect, embodiments of the present disclosure provide a storage medium storing computer readable instructions for causing a computer to perform the method according to any one of the embodiments of the first aspect.
Specifically, fig. 14 shows a schematic structural diagram of an electronic device 600 suitable for implementing the method of the present disclosure, and by using the electronic device shown in fig. 14, the corresponding functions of the processor and the storage medium described above may be implemented.
As shown in fig. 14, the electronic device 600 includes a processor 601 that can perform various appropriate actions and processes according to a program stored in a memory 602 or loaded into the memory 602 from a storage portion 608. The memory 602 also stores various programs and data required for the operation of the electronic device 600. The processor 601 and the memory 602 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage portion 608 including a hard disk and the like; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom can be installed into the storage portion 608 as needed.
In particular, according to embodiments of the present disclosure, the above method processes may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method described above. In such an embodiment, the computer program can be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be apparent that the above embodiments are merely examples given for clarity of illustration and are not intended to limit the implementations. Other variations or modifications based on the above description will be apparent to those of ordinary skill in the art; it is neither necessary nor possible to enumerate all embodiments here. Obvious variations or modifications derived from the above remain within the scope of the present disclosure.

Claims (10)

1. An image detection method, comprising:
determining a target preset condition corresponding to the current detection level based on a pre-established correspondence between detection levels and preset conditions; and
performing image detection on an image to be detected, and performing warehousing processing on the image to be detected in response to the image to be detected meeting the target preset condition;
wherein the process of pre-establishing the correspondence between detection levels and preset conditions comprises:
acquiring at least two preset image detection standards, wherein each preset image detection standard comprises at least one detection type and a detection condition corresponding to the detection type, and the at least two preset image detection standards comprise photo warehousing standards of different geographic areas;
determining, according to the detection types and detection conditions comprised in the at least two preset image detection standards, a plurality of detection types and detection conditions for detecting the image to be detected; and
determining, from the plurality of detection types, the detection type and the detection condition corresponding to each detection level, to obtain the correspondence between detection levels and preset conditions.
2. The method according to claim 1, wherein determining a plurality of detection types and detection conditions for detecting the image to be detected according to the detection types and detection conditions comprised in the at least two preset image detection standards comprises:
acquiring the detection types and detection conditions comprised in each preset image detection standard;
performing fusion processing on the detection types of the at least two preset image detection standards to obtain the detection types for detecting the image to be detected; and
for any detection type used for detecting the image to be detected, taking an intersection of the corresponding detection conditions in the at least two preset image detection standards to obtain the detection condition corresponding to that detection type.
3. The method according to claim 1, wherein performing image detection on the image to be detected and determining that the image to be detected meets the target preset condition comprises:
extracting features of the image to be detected to obtain image features corresponding to each detection type comprised in the target preset condition; and
determining that the image to be detected meets the target preset condition in response to determining, according to the image features, that every detection type passes detection.
4. The method according to any one of claims 1 to 3, wherein the image to be detected is a face image, and the detection type comprises at least one of the following:
image size; face proportion; face position; face angle; face brightness; face sharpness; degree of mouth opening; facial feature contours.
5. The method according to any one of claims 1 to 3, wherein the process of acquiring the image to be detected comprises:
receiving the image to be detected uploaded through a display interface;
and wherein the method further comprises:
generating corresponding prompt information in response to the image to be detected not meeting the target preset condition, and outputting the prompt information through the display interface.
6. The method according to claim 5, wherein generating the corresponding prompt information in response to the image to be detected not meeting the target preset condition comprises:
in response to the image to be detected not meeting the target preset condition, acquiring the detection types in the target preset condition that failed detection; and
determining the prompt information according to the detection types that failed detection.
7. The method according to any one of claims 1 to 3, wherein performing warehousing processing on the image to be detected comprises:
determining user data according to the image to be detected and user information corresponding to the image to be detected; and
storing the user data in a database.
8. An image detection apparatus, comprising:
a determining module configured to determine a target preset condition corresponding to the current detection level based on a pre-established correspondence between detection levels and preset conditions;
a detection module configured to perform image detection on an image to be detected, and to perform warehousing processing on the image to be detected in response to the image to be detected meeting the target preset condition; and
a relationship establishment module configured to: acquire at least two preset image detection standards, wherein each preset image detection standard comprises at least one detection type and a detection condition corresponding to the detection type, and the at least two preset image detection standards comprise photo warehousing standards of different geographic areas; determine, according to the detection types and detection conditions comprised in the at least two preset image detection standards, a plurality of detection types and detection conditions for detecting the image to be detected; and determine, from the plurality of detection types, the detection type and the detection condition corresponding to each detection level, to obtain the correspondence between detection levels and preset conditions.
9. An electronic device, comprising:
a processor; and
a memory storing computer instructions readable by the processor, wherein the processor performs the method according to any one of claims 1 to 7 when the computer instructions are read.
10. A storage medium storing computer readable instructions for causing a computer to perform the method of any one of claims 1 to 7.

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111080907.XA CN113792662B (en) 2021-09-15 2021-09-15 Image detection method, device, electronic equipment and storage medium
PCT/CN2022/108681 WO2023040480A1 (en) 2021-09-15 2022-07-28 Image detection method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111080907.XA CN113792662B (en) 2021-09-15 2021-09-15 Image detection method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113792662A CN113792662A (en) 2021-12-14
CN113792662B 2024-05-21

Family

ID=78878394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111080907.XA Active CN113792662B (en) 2021-09-15 2021-09-15 Image detection method, device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113792662B (en)
WO (1) WO2023040480A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792662B (en) * 2021-09-15 2024-05-21 北京市商汤科技开发有限公司 Image detection method, device, electronic equipment and storage medium
CN115240265B (en) * 2022-09-23 2023-01-10 深圳市欧瑞博科技股份有限公司 User intelligent identification method, electronic equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2187350A1 (en) * 2008-11-05 2010-05-19 Thomson Licensing Method and device for assessing image quality degradation
JP5693162B2 (en) * 2010-11-09 2015-04-01 キヤノン株式会社 Image processing system, imaging apparatus, image processing apparatus, control method therefor, and program
CN107169458B (en) * 2017-05-18 2018-04-06 深圳云天励飞技术有限公司 Data processing method, device and storage medium
CN107590212A (en) * 2017-08-29 2018-01-16 深圳英飞拓科技股份有限公司 The Input System and method of a kind of face picture
CN108391059A (en) * 2018-03-23 2018-08-10 华为技术有限公司 A kind of method and apparatus of image procossing
CN112001280A (en) * 2020-08-13 2020-11-27 浩鲸云计算科技股份有限公司 Real-time online optimization face recognition system and method
CN113792662B (en) * 2021-09-15 2024-05-21 北京市商汤科技开发有限公司 Image detection method, device, electronic equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509958A (en) * 2018-03-30 2018-09-07 北京金山安全软件有限公司 Defect type detection method, defect type detection device, electronic equipment and medium
CN109858381A (en) * 2019-01-04 2019-06-07 深圳壹账通智能科技有限公司 Biopsy method, device, computer equipment and storage medium
CN110070001A (en) * 2019-03-28 2019-07-30 上海拍拍贷金融信息服务有限公司 Behavioral value method and device, computer readable storage medium
WO2021017561A1 (en) * 2019-07-30 2021-02-04 深圳市商汤科技有限公司 Face recognition method and apparatus, electronic device, and storage medium
CN111783663A (en) * 2020-06-30 2020-10-16 公安部第三研究所 Algorithm evaluation system and detection method for performance detection of human evidence verification equipment
CN112115886A (en) * 2020-09-22 2020-12-22 北京市商汤科技开发有限公司 Image detection method and related device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Face image quality inspection system according to ISO standard";PW Ong;《Human Face Recognition Image Processing》;全文 *
"基于多指标与支持向量回归的道路监控图像质量检测方法";郭兴隆;《公路交通技术》;第34卷(第06期);全文 *

Also Published As

Publication number Publication date
CN113792662A (en) 2021-12-14
WO2023040480A1 (en) 2023-03-23

Similar Documents

Publication Publication Date Title
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
EP4123503A1 (en) Image authenticity detection method and apparatus, computer device and storage medium
CN113792662B (en) Image detection method, device, electronic equipment and storage medium
TW201911130A (en) Method and device for remake image recognition
US20170262472A1 (en) Systems and methods for recognition of faces e.g. from mobile-device-generated images of faces
US9280804B2 (en) Rotation of an image based on image content to correct image orientation
CN112954450B (en) Video processing method and device, electronic equipment and storage medium
US9965882B2 (en) Generating image compositions
CN111985281B (en) Image generation model generation method and device and image generation method and device
EP3779775B1 (en) Media processing method and related apparatus
CN114463828B (en) Invigilation method and system based on testimony unification, electronic equipment and storage medium
CN113762107A (en) Object state evaluation method and device, electronic equipment and readable storage medium
KR101820456B1 (en) Method And Apparatus for Generating Depth MAP
CN113011254A (en) Video data processing method, computer equipment and readable storage medium
CN113792661B (en) Image detection method, device, electronic equipment and storage medium
CN116432152A (en) Cross-platform collaborative manufacturing system
CN116704579A (en) Student welcome new photo analysis system and method based on image processing
WO2021068485A1 (en) User identity verification method and apparatus for multi-party video, and computer device
CN113792661A (en) Image detection method, image detection device, electronic equipment and storage medium
CN114360015A (en) Living body detection method, living body detection device, living body detection equipment and storage medium
CN112560705A (en) Face detection method and device and electronic equipment
CN111325185A (en) Face fraud prevention method and system
CN113762156B (en) Video data processing method, device and storage medium
CN113014914B (en) Neural network-based single face-changing short video identification method and system
JP7110669B2 (en) Video conferencing system, video conferencing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40059695)
GR01 Patent grant