CN117877095A - AI-based personnel identity recognition method and system - Google Patents

Publication number: CN117877095A
Authority: CN (China)
Prior art keywords: photo, face, feature, detection, information
Legal status: Pending (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202410063328.1A
Other languages: Chinese (zh)
Inventors: 毛舒乐, 陶俊, 赵云龙, 余江斌, 张天奇, 宋杰, 杨彬彬, 黄杨翼, 王俊
Current Assignee: Anhui Jiyuan Software Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Anhui Jiyuan Software Co Ltd
Application filed by Anhui Jiyuan Software Co Ltd
Priority to CN202410063328.1A (the priority date is an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Abstract

An embodiment of the invention provides an AI-based personnel identity recognition method and system, belonging to the technical field of image recognition. The recognition method comprises the following steps: acquiring a photo to be identified; detecting photo meta information of the photo; detecting photo environment information of the photo; detecting face information of the photo; and performing meter specification detection on the photo. Through this technical scheme, the invention provides an AI-based method and system for recognizing personnel identity.

Description

AI-based personnel identity recognition method and system
Technical Field
The invention relates to the technical field of image recognition, and in particular to an AI-based personnel identity recognition method and system.
Background
Before production, construction, and similar work on the power system, the workers involved must upload access photos in advance for security access auditing. Because this audit involves defect checks along several different dimensions, traditional image recognition techniques cannot complete it directly, so in the prior art the photos are mainly audited manually, online, one by one.
Disclosure of Invention
The embodiment of the invention aims to provide an AI-based personnel identity recognition method and system, which can improve the efficiency of personnel identity auditing.
In order to achieve the above object, an embodiment of the present invention provides a method for identifying a person identity based on AI, including:
acquiring a photo to be identified;
detecting photo meta information of the photo;
detecting photo environment information of the photo;
detecting face information of the photo;
and performing meter specification detection on the photo.
Optionally, detecting photo meta information of the photo includes:
and judging whether the format, the size and/or the pixels of the photo meet preset conditions or not.
Optionally, detecting photo environmental information of the photo includes:
preprocessing the photo;
inputting the preprocessed photo into a convolutional neural network (CNN) to extract a feature map;
performing dimension reduction operation on the feature map by adopting a pooling layer;
performing feature expansion and integration on the feature map after the dimension reduction operation by adopting a full-connection layer to obtain a classifier classification basis;
and classifying by adopting a feature classifier according to the classifier classification basis and the feature map after the dimension reduction operation so as to determine whether the environmental information of the photo is qualified.
Optionally, the preprocessing operations include denoising, resizing, and/or color space conversion operations.
Optionally, performing face information detection on the photo includes:
determining detection frame coordinates in the photo;
performing face alignment operation according to the detection frame coordinates;
establishing a feature vector by adopting a residual error learning deep neural network according to the face alignment operation result;
and calculating according to the feature vector and the comparison graph to determine the detection result of the face information.
Optionally, determining the coordinates of the detection frame in the photograph includes:
detecting the face region using the get_frontal_face_detector method of the Dlib toolkit, which outputs the four coordinates of the corresponding detection frame, wherein the face detector comprises a linear support vector machine and is obtained by training with a pre-trained histogram of oriented gradients (HOG) method.
Optionally, performing face alignment operation according to the detection frame coordinates includes:
constructing a cascaded residual regression tree to represent the shape of a face in a photo, wherein each leaf node in the residual regression tree represents a residual regression quantity of the shape of the face;
and normalizing the residual regression quantity.
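The two steps above can be sketched as a toy cascade. This is a minimal sketch under stated assumptions: the trained regression trees are omitted, the residuals are supplied directly, and the normalization mirrors the step just described; `cascade_align` is our own illustrative name, not from the patent.

```python
import numpy as np

def cascade_align(init_shape, residuals):
    """Apply a cascade of residual updates to an initial face-shape
    estimate. In a real ERT-style aligner each residual would come from
    a trained regression tree; here they are given directly. Each
    residual is normalised before being added, echoing the
    normalisation step described above."""
    shape = np.asarray(init_shape, dtype=float)
    for r in residuals:
        r = np.asarray(r, dtype=float)
        norm = np.linalg.norm(r)
        if norm > 0:
            r = r / norm  # normalise the residual regression amount
        shape = shape + r
    return shape
```

In this toy form the cascade is just a sum of unit-length corrections; the point is the structure (initial shape, per-stage residual, normalisation), not the values.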
Optionally, calculating according to the feature vector and the comparison chart to determine a result of the face information detection, including:
determining the result of the face information detection according to formula (1),
wherein Dis is the calculated distance, x1, x2, …, xn are the components of the feature vector, and y1, y2, …, yn are the components of the corresponding vector in the comparison map.
Optionally, performing meter specification detection on the photo includes:
performing data augmentation on the photo by random scaling, random cropping, and/or random arrangement;
inputting the augmented photos into a Backbone network to obtain feature maps at different scales;
inputting the feature maps into a Neck middle layer to perform feature fusion;
and inputting the result after the feature fusion operation into a Head output layer to obtain a prediction result.
In another aspect, the present invention also provides an AI-based person identity recognition system including a processor configured to perform the recognition method of any one of the above.
Through the technical scheme, the invention provides the identification method and the identification system for the personnel identity based on the AI.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain, without limitation, the embodiments of the invention. In the drawings:
FIG. 1 is a flow chart of a method of AI-based person identity recognition in accordance with one embodiment of the invention;
FIG. 2 is a flow chart of a method of photo context information detection according to one embodiment of the invention;
FIG. 3 is a flow chart of a method of face information detection according to one embodiment of the present invention;
fig. 4 is a block diagram of a network structure of face information detection according to an embodiment of the present invention;
FIG. 5 is a flow chart of a method of meter specification detection according to one embodiment of the invention.
Detailed Description
The following describes the detailed implementation of the embodiments of the present invention with reference to the drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the invention, are not intended to limit the invention.
Fig. 1 is a flowchart of a method for AI-based person identity recognition, according to one embodiment of the invention. In this fig. 1, the identification method may include the steps of:
in step S10, a photo to be identified is acquired;
in step S11, photo meta information detection is performed on the photo;
in step S12, photo environment information detection is performed on the photo;
in step S13, face information detection is performed on the photo;
in step S14, meter specification detection is performed on the photo.
In the recognition method shown in fig. 1, step S11 may be used to perform photo meta information detection on the photo. Photo meta information detection mainly checks whether the basic information of the photo meets the input conditions of the subsequent neural network structure. Specifically, it may determine whether the format, size, and/or pixels of the photo satisfy preset conditions. More specifically, for the format, it may be determined whether the photo is a JPG or PNG; for the file size, whether it is between 20 KB and 1 MB; and for the pixels, whether the resolution is at least 320×240. These are simple checks, so they can be performed directly by a preset program without running a complex neural network. More specifically, in step S11, the photo may be read using the imread() method of the opencv-python open source toolkit, the photo format and pixel dimensions may then be read from image.dtype and image.shape, and finally the file size may be obtained using the imencode() and len() methods.
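As a rough sketch of the checks in step S11, the predicate below encodes the thresholds stated above (JPG/PNG format, 20 KB to 1 MB, at least 320 by 240 pixels). The function name and the tuple-style inputs are our own assumptions; a real implementation would obtain these values via OpenCV as described.

```python
# Thresholds taken from the description above.
ALLOWED_FORMATS = {"JPG", "JPEG", "PNG"}
MIN_BYTES, MAX_BYTES = 20 * 1024, 1024 * 1024
MIN_W, MIN_H = 320, 240

def check_photo_meta(fmt, size_bytes, width, height):
    """Return (ok, reason) for the photo meta information check.
    fmt/size/width/height would come from image.dtype, imencode()+len()
    and image.shape in an OpenCV-based implementation."""
    if fmt.upper() not in ALLOWED_FORMATS:
        return False, "unsupported format"
    if not (MIN_BYTES <= size_bytes <= MAX_BYTES):
        return False, "file size out of range"
    if width < MIN_W or height < MIN_H:
        return False, "resolution too low"
    return True, "ok"
```

Because the check is a plain predicate, it can run before any neural network is loaded, which matches the ordering rationale of the method.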
Step S12 may be used to detect the photo environment information of the photo. The photo environment information indicates whether the photo is blurred and whether its background is too complex, either of which would hinder subsequent face recognition. The specific method for detecting the photo environment information can take a variety of forms known to those skilled in the art. In one example of the present invention, it may include the steps shown in fig. 2. In fig. 2, the photo environment information detection method may include the steps of:
in step S20, a preprocessing operation is performed on the photo;
in step S21, the preprocessed photo is input into a convolutional neural network (CNN) to extract a feature map;
in step S22, the pooling layer is adopted to perform dimension reduction operation on the feature map;
in step S23, the feature map after the dimension reduction operation is subjected to feature expansion and integration by adopting a full connection layer, so as to obtain a classifier classification basis;
in step S24, a feature classifier is used to classify according to the classifier classification basis and the feature map after the dimension reduction operation, so as to determine whether the environmental information of the photo is qualified.
In the method shown in fig. 2, step S20 may be used to perform a preprocessing operation on the photo; the preprocessing may include denoising, resizing, and/or color space conversion. Step S21 inputs the preprocessed photo into the convolutional neural network to extract a feature map. Step S22 applies a pooling layer to reduce the dimensionality of the feature map, which lowers the complexity of the data and speeds up subsequent processing.
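Steps S22 and S23, pooling followed by flattening for the fully connected layer, can be sketched in NumPy as follows. The 2×2 window and the toy 4×4 feature map are illustrative choices, not values from the patent, and a real pipeline would of course pool the output of the trained CNN rather than a synthetic array.

```python
import numpy as np

def max_pool2d(fmap, k=2):
    """Max pooling with a k-by-k window: the dimension-reduction
    operation of step S22 applied to a single-channel feature map."""
    h, w = fmap.shape
    h2, w2 = h // k, w // k
    # Group pixels into k-by-k blocks, then take the max of each block.
    return fmap[:h2 * k, :w2 * k].reshape(h2, k, w2, k).max(axis=(1, 3))

# Toy 4x4 "feature map" standing in for a real CNN output.
fmap = np.arange(16, dtype=float).reshape(4, 4)
pooled = max_pool2d(fmap)        # shape (2, 2): reduced dimensionality
flattened = pooled.reshape(-1)   # "feature expansion" for the FC layer (S23)
```

The flattened vector is what the fully connected layer would consume to produce the classifier's classification basis.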
Step S13 may be used to perform face information detection on the photograph. In one example of the present invention, this step S13 may further include a method as shown in fig. 3, and the corresponding network configuration block diagram may be as shown in fig. 4. In fig. 3 and 4, the step S13 may further include the steps of:
in step S30, determining coordinates of a detection frame in the photograph;
in step S31, a face alignment operation is performed according to the detection frame coordinates;
in step S32, a feature vector is created by using a residual learning deep neural network according to the result of the face alignment operation;
in step S33, calculation is performed based on the feature vector and the comparison map to determine the result of face information detection.
In the method shown in fig. 3, step S30 may be used to determine the coordinates of the detection frame in the photograph. Specifically, step S30 may detect the face region using the get_frontal_face_detector method of the Dlib toolkit, which outputs the four coordinates of the corresponding detection frame; the face detector comprises a linear support vector machine and is obtained by training with a pre-trained histogram of oriented gradients (HOG) method. Step S31 performs the face alignment operation according to the detection frame coordinates. Specifically, step S31 may first construct a cascaded residual regression tree to represent the face shape in the photo, where each leaf node of the tree represents one residual regression amount of the face shape, and then normalize the residual regression amounts. Step S33 may be used to perform a calculation on the feature vector and the comparison map to determine the result of face information detection. Specifically, step S33 may determine the result of face information detection according to formula (1),
wherein Dis is the calculated distance, x1, x2, …, xn are the components of the feature vector, and y1, y2, …, yn are the components of the corresponding vector in the comparison map.
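Formula (1) itself is not reproduced in the text. Since Dis is described as a distance between the feature vector and the comparison-map vector, a Euclidean distance is the natural reading, and the sketch below makes that assumption; the 0.6 acceptance threshold is likewise an assumption (a common default for Dlib-style face embeddings), not a value from the patent.

```python
import math

def face_distance(x, y):
    """Euclidean distance between the probe feature vector x and the
    comparison-map vector y (assumed form of formula (1))."""
    if len(x) != len(y):
        raise ValueError("vectors must have the same dimension")
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def same_person(x, y, threshold=0.6):
    """Accept the match when the distance falls below the (assumed)
    threshold; smaller distances mean more similar faces."""
    return face_distance(x, y) < threshold
```

A deployment would compare the 128-dimensional (or similar) embedding from the residual network against each enrolled comparison vector in the same way.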
Step S14 may be used to perform meter specification detection on the photograph. Specifically, step S14 may include the steps shown in fig. 5. In fig. 5, step S14 may include the steps of:
in step S40, data augmentation is performed on the photo by random scaling, random cropping and/or random arrangement;
in step S41, the augmented photos are input into a Backbone network to obtain feature maps at different scales;
in step S42, the feature maps are input into the Neck middle layer to perform feature fusion;
in step S43, the result after the feature fusion operation is input into the Head output layer to obtain a prediction result.
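The augmentation of step S40 can be sketched as below. Nearest-neighbour rescaling, the fixed RNG seed, and the 0.8 to 1.2 scale range are all assumptions made for the sketch; a production pipeline would use a proper image library for interpolation.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the sketch is repeatable

def random_crop(img, out_h, out_w):
    """Random crop (step S40): pick a random top-left corner and take
    an out_h-by-out_w window."""
    h, w = img.shape[:2]
    top = int(rng.integers(0, h - out_h + 1))
    left = int(rng.integers(0, w - out_w + 1))
    return img[top:top + out_h, left:left + out_w]

def random_scale(img, lo=0.8, hi=1.2):
    """Random rescale via nearest-neighbour index maps (an assumed
    interpolation scheme, chosen to keep the sketch dependency-free)."""
    s = float(rng.uniform(lo, hi))
    h, w = img.shape[:2]
    ys = np.clip((np.arange(int(h * s)) / s).astype(int), 0, h - 1)
    xs = np.clip((np.arange(int(w * s)) / s).astype(int), 0, w - 1)
    return img[np.ix_(ys, xs)]
```

Composing several such transforms per training photo is what lets the Backbone/Neck/Head detector generalise from a limited set of audited photos.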
Through the technical solutions shown in figs. 1 to 5, the recognition method provided by the embodiment of the invention tailors a separate detection method to each kind of defect encountered during photo auditing, and orders those methods according to the dependencies between the defect types, thereby improving detection efficiency while meeting the photo detection requirements. Specifically, the detected defect types and the corresponding processing methods are shown in table 1 below,
TABLE 1
In table 1, photo meta information detection checks the basic information of the photo and at the same time serves as a preprocessing step that facilitates photo environment information detection. Face information detection presupposes that the background is not too complex and that the image is sharp, so photo environment information detection is placed before it and also acts as a filter on the photos. Meter specification detection, in turn, presupposes successful face information detection, since it can only be completed when a face has been detected, so in this embodiment it is placed after face information detection.
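The ordering rationale of table 1 amounts to a short-circuiting pipeline: cheap structural checks run first, and the expensive detectors only ever see photos that passed every earlier stage. A minimal sketch, in which the stage names, dictionary keys, and stand-in predicates are all illustrative rather than from the patent:

```python
def audit_photo(photo, checks):
    """Run the detections in the table-1 order and stop at the first
    failure, so later, more expensive detectors never see a photo that
    already failed a cheaper precondition."""
    for name, check in checks:
        if not check(photo):
            return False, name
    return True, "pass"

# Illustrative stand-ins for the four detectors described above.
CHECKS = [
    ("meta",        lambda p: p["meta_ok"]),
    ("environment", lambda p: p["env_ok"]),
    ("face",        lambda p: p["face_ok"]),
    ("meter_spec",  lambda p: p["meter_ok"]),
]
```

The returned stage name doubles as audit feedback: the uploader learns which requirement the photo failed first.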
In another aspect, the present invention also provides an AI-based person identity recognition system comprising a processor configured to perform any of the recognition methods described above, that is, the method of steps S10 to S14 illustrated in figs. 1 to 5 and detailed in the preceding paragraphs.
Through the above technical solutions, the invention provides an AI-based personnel identity recognition method and system.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, etc., such as Read Only Memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer readable media, including both non-transitory and non-transitory, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises an element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (10)

1. An AI-based person identity recognition method, the recognition method comprising:
acquiring a photo to be identified;
detecting photo meta information of the photo;
detecting photo environment information of the photo;
detecting face information of the photo;
and carrying out instrument specification detection on the photo.
2. The method of claim 1, wherein detecting photo meta-information for the photo comprises:
and judging whether the format, the size and/or the pixels of the photo meet preset conditions or not.
3. The method of claim 1, wherein detecting photo context information for the photo comprises:
preprocessing the photo;
inputting the preprocessed photo into a convolutional neural network (CNN) to extract a feature map;
performing dimension reduction operation on the feature map by adopting a pooling layer;
performing feature expansion and integration on the feature map after the dimension reduction operation by adopting a full-connection layer to obtain a classifier classification basis;
and classifying by adopting a feature classifier according to the classifier classification basis and the feature map after the dimension reduction operation so as to determine whether the environmental information of the photo is qualified.
4. A method of identification as claimed in claim 3 wherein the pre-processing operations include denoising, resizing and/or color space conversion operations.
5. The method of claim 1, wherein the detecting of face information for the photograph comprises:
determining detection frame coordinates in the photo;
performing face alignment operation according to the detection frame coordinates;
establishing a feature vector by adopting a residual error learning deep neural network according to the face alignment operation result;
and calculating according to the feature vector and the comparison graph to determine the detection result of the face information.
6. The method of claim 5, wherein determining the coordinates of the detection frame in the photograph comprises:
detecting the face region using the get_frontal_face_detector method of the Dlib toolkit, which outputs the four coordinates of the corresponding detection frame, wherein the face detector comprises a linear support vector machine and is obtained by training with a pre-trained histogram of oriented gradients (HOG) method.
7. The method of claim 5, wherein performing the face alignment operation according to the detection frame coordinates comprises:
constructing a cascaded residual regression tree to represent the shape of a face in a photo, wherein each leaf node in the residual regression tree represents a residual regression quantity of the shape of the face;
and normalizing the residual regression quantity.
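A toy sketch of this cascaded residual regression, in the spirit of the ensemble-of-regression-trees alignment popularized by Dlib's shape predictor. The trained tree ensemble is replaced here by a fixed-fraction residual update toward a known target, so this illustrates only the accumulate-normalized-residuals structure, not a real regressor:

```python
import numpy as np

def cascade_align(initial_shape, target_shape, stages=10, step=0.5):
    """Refine a face-shape estimate by accumulating residual updates.

    Each stage plays the role of one regression level: it computes the
    residual regression amount (here simply a fraction of the remaining
    error) and adds it to the current shape estimate.
    """
    shape = initial_shape.astype(float).copy()
    for _ in range(stages):
        residual = target_shape - shape       # residual regression amount
        if np.linalg.norm(residual) < 1e-9:   # already aligned
            break
        shape += step * residual              # leaf-node residual update
    return shape
```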
8. The method of claim 5, wherein computing with the feature vector and the comparison image to determine the face information detection result comprises:
determining the face information detection result according to formula (1),
wherein Dis is the calculated distance, x1, x2, …, xn are the components of the feature vector, and y1, y2, …, yn are the components of the vector from the comparison image.
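Formula (1) itself is not reproduced in this text; given the variables (Dis, x1…xn, y1…yn), a Euclidean distance between the face feature vector and the comparison-image vector is a plausible reading, sketched here under that assumption. The match threshold is likewise an assumed convention, not a value from the patent:

```python
import math

def feature_distance(x, y):
    """Assumed formula (1): Dis = sqrt(sum_i (x_i - y_i)^2)."""
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def faces_match(x, y, threshold=0.6):
    """Common convention: a distance below the threshold counts as a match."""
    return feature_distance(x, y) < threshold
```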
9. The method of claim 1, wherein performing instrument specification detection on the photo comprises:
performing data augmentation on the photo by random scaling, random cropping and/or random arrangement;
inputting the augmented photo into a Backbone network to obtain the required feature maps at different scales;
inputting the feature maps into a Neck intermediate layer for a feature fusion operation;
and inputting the result of the feature fusion operation into a Head output layer to obtain a prediction result.
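The data augmentation step above can be sketched as follows; the Backbone/Neck/Head detector itself (a YOLO-style architecture) is not reproduced. The scale range and crop size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_scale(img, lo=0.8, hi=1.2):
    """Nearest-neighbour rescale by a random factor in [lo, hi]."""
    factor = rng.uniform(lo, hi)
    h, w = img.shape[:2]
    nh, nw = max(1, int(h * factor)), max(1, int(w * factor))
    rows = np.arange(nh) * h // nh
    cols = np.arange(nw) * w // nw
    return img[rows][:, cols]

def random_crop(img, ch, cw):
    """Random crop of size (ch, cw); assumes img is at least that large."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    return img[top:top + ch, left:left + cw]

# Augment a stand-in photo, then it would be fed to the Backbone network.
augmented = random_crop(random_scale(np.zeros((64, 64))), 32, 32)
```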
10. An AI-based person identity recognition system, characterized in that the recognition system comprises a processor configured to perform the recognition method of any one of claims 1 to 9.
CN202410063328.1A 2024-01-16 2024-01-16 AI-based personnel identity recognition method and system Pending CN117877095A (en)

Publications (1)

Publication Number Publication Date
CN117877095A true CN117877095A (en) 2024-04-12

Family

ID=90591693



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination