CN111325069A - Production line data processing method and device, computer equipment and storage medium

Production line data processing method and device, computer equipment and storage medium

Info

Publication number
CN111325069A
CN111325069A
Authority
CN
China
Prior art keywords
detected
target
image
face
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811536905.5A
Other languages
Chinese (zh)
Other versions
CN111325069B (en)
Inventor
赵尹发
谭龙田
陈彦宇
马雅奇
谭泽汉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN201811536905.5A priority Critical patent/CN111325069B/en
Publication of CN111325069A publication Critical patent/CN111325069A/en
Application granted granted Critical
Publication of CN111325069B publication Critical patent/CN111325069B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06398Performance of employee with respect to a job function
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Abstract

The application relates to a production line management method, a production line data processing method and device, computer equipment and a storage medium. The method comprises the following steps: detecting, through a trained target detection model, an image to be detected that contains targets to be detected to obtain the corresponding targets to be detected; segmenting the image to be detected to obtain a face image of each target to be detected; extracting face features from the face image through a trained face recognition model and determining the recognition result corresponding to each target to be detected according to the face features; and determining the work evaluation result corresponding to each target to be detected according to each target and its recognition result. Workers on a production line are detected through the target detection model, their faces are recognized to determine their identities, and their work is evaluated from the detection and recognition results to obtain an evaluation result for each worker; the work evaluation results are managed automatically, improving the management efficiency of the production line.

Description

Production line data processing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer vision, and in particular to a production line management method, a production line data processing method and device, a computer device, and a storage medium.
Background
With the development of computer technology, computer vision, machine learning, deep learning and hardware computing speed have improved continuously, so that computer vision technology can be applied in many fields. A large number of workers are needed to produce equipment in a production workshop, and managing these workers effectively can improve production efficiency and product quality. At present, management of a production workshop basically relies on managers evaluating the work of each employee, so management efficiency is low.
Disclosure of Invention
In order to solve the above technical problems, the application provides a production line management method, a production line data processing method and device, a computer device and a storage medium.
In a first aspect, the present application provides a production line data processing method, including:
detecting an image to be detected containing a target to be detected through a trained target detection model to obtain a corresponding target to be detected, wherein the target to be detected is a worker on a production line;
segmenting an image to be detected to obtain a face image of a target to be detected;
extracting the face characteristics of the face image through the trained face recognition model, and determining a recognition result corresponding to the target to be detected according to the face characteristics;
and determining the work evaluation result corresponding to each target to be detected according to each target to be detected and the corresponding recognition result.
A production line management method, comprising:
detecting an image to be detected containing a target to be detected through a trained target detection model to obtain a corresponding target to be detected, wherein the target to be detected is a worker on a production line;
segmenting an image to be detected to obtain a face image of a target to be detected;
extracting the face characteristics of the face image through the trained face recognition model, and determining a recognition result corresponding to the target to be detected according to the face characteristics;
and determining the work evaluation result corresponding to each target to be detected according to each target to be detected and the corresponding recognition result.
In a second aspect, the present application provides a production line data processing apparatus, including:
the target detection module is used for detecting the image to be detected containing the target to be detected through the trained target detection model to obtain the corresponding target to be detected, wherein the target to be detected is a worker on a production line;
the image segmentation module is used for obtaining a face image of the target to be detected by segmenting the image to be detected;
the target recognition module is used for extracting the face characteristics of the face image through the trained face recognition model and determining a recognition result corresponding to the target to be detected according to the face characteristics;
and the work evaluation result module is used for determining the work evaluation result corresponding to each target to be detected according to each target to be detected and the corresponding identification result.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
detecting an image to be detected containing a target to be detected through a trained target detection model to obtain a corresponding target to be detected, wherein the target to be detected is a worker of a production line;
segmenting an image to be detected to obtain a face image of a target to be detected;
extracting the face characteristics of the face image through the trained face recognition model, and determining a recognition result corresponding to the target to be detected according to the face characteristics;
and determining the work evaluation result corresponding to each target to be detected according to each target to be detected and the corresponding recognition result.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
detecting an image to be detected containing a target to be detected through a trained target detection model to obtain a corresponding target to be detected, wherein the target to be detected is a worker of a production line;
segmenting an image to be detected to obtain a face image of a target to be detected;
extracting the face characteristics of the face image through the trained face recognition model, and determining a recognition result corresponding to the target to be detected according to the face characteristics;
and determining the work evaluation result corresponding to each target to be detected according to each target to be detected and the corresponding recognition result.
According to the production line management method, the production line data processing method and device, the computer equipment and the storage medium above, an image to be detected containing targets to be detected is detected through a trained target detection model to obtain the corresponding targets to be detected, where each target to be detected is a worker on a production line; the image to be detected is segmented to obtain a face image of each target; face features are extracted from the face image through a trained face recognition model, and the recognition result corresponding to each target is determined according to the face features; the work evaluation result corresponding to each target to be detected is then determined according to each target and its recognition result. Workers on the production line are detected through the target detection model, their faces are recognized to determine their identities, and their work is evaluated from the detection and recognition results to obtain an evaluation result for each worker; the evaluation results are managed automatically, so the management efficiency of the production line is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; it is obvious that those skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a diagram of an application scenario of a production line data processing method in one embodiment;
FIG. 2 is a schematic flow chart of a production line data processing method in one embodiment;
FIG. 3 is a schematic flow chart of a production line data processing method in another embodiment;
FIG. 4 is a schematic flow chart of a production line data processing method in yet another embodiment;
FIG. 5 is a schematic flow chart of a production line data processing method in still another embodiment;
FIG. 6 is a block diagram of a line data processing apparatus in one embodiment;
FIG. 7 is a block diagram showing an internal configuration of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
FIG. 1 is a diagram of an application environment of a production line data processing method in one embodiment. Referring to FIG. 1, the production line data processing method is applied to a production line data processing system. The production line data processing system includes a terminal 110 and a server 120, connected through a network. The terminal or the server detects an image to be detected containing targets to be detected through a trained target detection model to obtain the corresponding targets to be detected, where each target is a worker on a production line; a face image of each target is obtained by segmenting the image to be detected; face features of the face image are extracted through a trained face recognition model, and the recognition result corresponding to each target is determined according to the face features; and the work evaluation result corresponding to each target to be detected is determined according to each target and its recognition result. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as a stand-alone server or a server cluster composed of a plurality of servers.
As shown in FIG. 2, in one embodiment, a production line data processing method is provided. This embodiment is mainly illustrated by applying the method to the terminal 110 (or the server 120) in FIG. 1. Referring to FIG. 2, the production line data processing method specifically includes the following steps:
step S201, detecting the image to be detected containing the target to be detected through the trained target detection model to obtain the corresponding target to be detected.
In this embodiment, the object to be detected is a worker on the production line.
Specifically, the trained target detection model is obtained by training a large number of images carrying target labels. The target detection model can be a deep learning neural network model, and the training method of the model can adopt an end-to-end training mode, least square fitting, a gradient descent method and the like. The image to be detected is an image obtained by shooting through shooting equipment, and the image to be detected comprises at least one object to be detected. And extracting image characteristics of an image to be detected of the target to be detected through the trained target detection model, and determining which image areas in the image to be detected are the target to be detected according to the extracted image characteristics.
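As an illustrative sketch only (the patent does not fix a particular network or library), the detection step might look like the following Python code; the model files, input size, score threshold and YOLO-style output layout are all assumptions:

```python
import cv2

# Hypothetical model files and parameters for illustration only.
NET_CONFIG = "detector.cfg"        # assumed network configuration file
NET_WEIGHTS = "detector.weights"   # assumed trained weights
SCORE_THRESHOLD = 0.5              # assumed confidence threshold

def detect_workers(image_bgr):
    """Run a trained detector on an image and return person bounding boxes (x, y, w, h)."""
    net = cv2.dnn.readNet(NET_WEIGHTS, NET_CONFIG)
    blob = cv2.dnn.blobFromImage(image_bgr, 1 / 255.0, (416, 416), swapRB=True)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    h, w = image_bgr.shape[:2]
    boxes = []
    for out in outputs:
        for det in out:  # assumes a YOLO-style row: cx, cy, bw, bh, obj, class scores...
            scores = det[5:]
            if scores.max() > SCORE_THRESHOLD and scores.argmax() == 0:  # class 0 = person
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append((int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)))
    return boxes
```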
In one embodiment, the number of targets detected in the image to be detected is counted; when the image to be detected covers the whole production line, the counted number is taken as the number of people working on the production line. Monitoring personnel through the trained target detection model is more convenient and faster and avoids the inconvenience brought by human intervention.
In one embodiment, before detecting the image to be detected containing the target to be detected by the trained target detection model, the method further includes: and acquiring a storage address corresponding to each image to be detected, and acquiring the image to be detected according to the storage address corresponding to each image to be detected.
Specifically, the storage address is the address at which an image to be detected is stored; images acquired by different shooting devices can be stored in a distributed storage system, and the corresponding image to be detected is acquired according to its storage address. Distributed storage offers high performance, multi-copy consistency, good disaster tolerance and backup, easy elastic expansion and a high degree of standardization; it improves data acquisition efficiency and supports efficient read-write caching, automatic tiered storage and high-speed data mapping, while a multi-backup mechanism ensures the storage safety of the data.
In one embodiment, before detecting the image to be detected containing the target to be detected through the trained target detection model, the method further includes: preprocessing the image to be detected, including but not limited to image denoising, stitching, correction, affine transformation, and the like.
In one embodiment, before preprocessing the image to be detected, the method further comprises: and screening the image to be detected according to a preset screening rule. The preset screening rule can be customized, such as image quality, image contrast and the like.
Step S202, a human face image of the target to be detected is obtained by segmenting the image to be detected.
Specifically, segmentation divides the image to be detected into a plurality of regions with face features. Image segmentation methods include threshold-based, edge-based, region-based, graph-theory-based and energy-functional-based methods. According to the business requirements, any one or more of these methods can be selected to segment the image to be detected and obtain a face image containing the face of the target to be detected. Segmenting out the face image facilitates subsequent face recognition, reduces the interference caused by background information in the image, and improves the accuracy of subsequent face recognition.
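Purely to illustrate the segmentation step, a minimal Python sketch using OpenCV's bundled Haar cascade (an assumed choice; the patent leaves the segmentation method open) could crop the face regions like this:

```python
import cv2

def crop_faces(image_bgr):
    """Crop candidate face regions from an image of a detected worker."""
    # Haar cascade shipped with OpenCV; any of the segmentation methods above could be substituted.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [image_bgr[y:y + h, x:x + w] for (x, y, w, h) in faces]
```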
Step S203, extracting the face characteristics of the face image through the trained face recognition model, and determining the recognition result corresponding to the target to be detected according to the face characteristics.
Specifically, the trained face recognition model is obtained by training on a large amount of image data carrying face labels, where a face label is label data identifying a face; if image A contains the face of user a, the face label identifies the face in image A as the face of user a. A face feature is feature data describing a face, such as the facial features themselves and the positional relationships among them; the face label of each target to be detected is determined from the learned face features and taken as the recognition result.
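One common way to turn extracted face features into a recognition result is nearest-neighbour matching against enrolled embeddings. The sketch below is an illustration under assumptions (the embedding source, similarity measure and threshold are not specified by the patent):

```python
import numpy as np

MATCH_THRESHOLD = 0.6  # assumed cosine-distance threshold for accepting a match

def identify(face_embedding, known_embeddings, known_ids):
    """Return the id whose enrolled embedding is most similar, or None if no match."""
    known = np.asarray(known_embeddings, dtype=float)
    query = np.asarray(face_embedding, dtype=float)
    # Cosine similarity between the query and every enrolled embedding.
    sims = known @ query / (np.linalg.norm(known, axis=1) * np.linalg.norm(query))
    best = int(np.argmax(sims))
    return known_ids[best] if (1.0 - sims[best]) < MATCH_THRESHOLD else None
```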
Step S204, determining a work evaluation result corresponding to each target to be detected according to each target to be detected and the corresponding recognition result.
Specifically, the work evaluation result is the result obtained by evaluating a worker's work. The working performance of each worker is evaluated from the target to be detected obtained through the trained target detection model and the corresponding recognition result, giving the corresponding work evaluation result. The content of the work evaluation includes, but is not limited to, working time, working state, working result, and the like. The working time can be determined from the time information carried by the image to be detected or obtained from the working data corresponding to the recognition result, and the working state can be determined from the recognition result.
In one embodiment, the working time information of each target to be detected is determined according to the time information carried by the image to be detected, and the working data of the target to be detected corresponding to the recognition result is acquired. The work evaluation result of the target to be detected is determined according to the working time information and the working data.
Specifically, the image to be detected carries time information; the working time information of each target to be detected is determined according to the recognition result corresponding to that target, and the working data of the target to be detected corresponding to each recognition result is acquired, where the working data includes working content, working state, working attitude, working result, and the like. Each target to be detected is evaluated according to the working data and working time information to obtain the corresponding work evaluation result, which may be set to a plurality of grades such as excellent, qualified, and unqualified.
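A toy Python sketch of this evaluation step is given below; the record fields and grading thresholds are illustrative assumptions rather than rules prescribed by the patent:

```python
from dataclasses import dataclass

@dataclass
class WorkRecord:
    worker_id: str        # from the recognition result
    hours_worked: float   # derived from the time information carried by the images
    tasks_completed: int  # from the work data linked to the recognition result
    defects: int

def evaluate(record: WorkRecord) -> str:
    """Map working time information and work data to a work evaluation grade."""
    # Assumed thresholds; a real system would use plant-specific rules.
    if record.hours_worked >= 8 and record.defects == 0:
        return "excellent"
    if record.tasks_completed > 0 and record.defects <= 2:
        return "qualified"
    return "unqualified"
```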
In one embodiment, after determining the work evaluation result corresponding to each target to be detected according to each target to be detected and the corresponding recognition result, the method further includes: and acquiring the reward and penalty rules corresponding to the work evaluation results corresponding to the targets to be detected.
Specifically, the reward and penalty rules are used to guide managers on the production line in managing workers; different work evaluation results correspond to different reward and penalty rules, such as issuing a bonus to employees with excellent work evaluation results, and giving public criticism or economic penalties to unqualified employees. Managing production line staff through an established reward and penalty system avoids the problems caused by an unclear system and makes management more convenient.
According to the production line data processing method, an image to be detected containing targets to be detected is detected through the trained target detection model to obtain the corresponding targets to be detected, each target being a worker on a production line; the image to be detected is segmented to obtain a face image of each target; the face features of the face image are extracted through the trained face recognition model, and the recognition result corresponding to each target is determined according to the face features; the work evaluation result corresponding to each target to be detected is then determined according to each target and its corresponding recognition result. Workers on the production line are detected through the detection model and identified through the recognition model, their work is evaluated from the detection and recognition results to obtain an evaluation result for each worker, and the workers are managed according to the evaluation results, making management more convenient.
In one embodiment, as shown in fig. 3, before step S201, the method further includes:
in step S301, initial images acquired by a plurality of photographing apparatuses are acquired.
And S302, splicing the initial images according to the position relation among the shooting devices to obtain the image to be detected.
Specifically, an initial image is an image shot by a shooting device. Because a production line in a workshop involves many people and occupies a large area, one shooting device cannot monitor the whole line; a plurality of shooting devices can therefore be arranged, each shooting the images within its monitoring range to obtain an initial image, and the initial image shot by each device is acquired. Image stitching refers to joining images over their overlapped areas so that the stitched image shows more information about the production line; for example, the image to be detected can show all workers on the line. Through image stitching, the image presents more production line information, so the production line can be managed better.
In one embodiment, step S302 includes:
step S3021, a thread for image stitching is created.
Step S3022, distributing a corresponding thread to each shooting device, and performing image stitching through the threads corresponding to the shooting devices according to the positional relationship among the devices to obtain the image to be detected.
Specifically, a thread is the smallest unit of a program execution flow and is used here to process image stitching. The number of threads is set according to specific production line requirements. Each thread stitches the images shot by its corresponding shooting devices, and stitching follows the positional relationship among the cameras: if shooting device No. 2 is to the left of shooting device No. 1, the image stitched to the left of device No. 1's image is the image shot by device No. 2. If every ten shooting devices correspond to one thread and a scene involves 20 cameras, two threads can be set for speed and synchronization: one thread processes the images from cameras No. 1-10 and the other processes the images from cameras No. 11-20. Multithreading improves the data processing rate and ensures synchronism.
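A minimal Python sketch of per-group stitching threads follows; the two-group camera layout matches the example above, while the frame source and the simple horizontal concatenation (which assumes equal frame heights) are simplifying assumptions:

```python
import threading
import numpy as np

def stitch_group(frames_by_camera, camera_order, results, key):
    """Concatenate one group's frames left-to-right by camera position."""
    ordered = [frames_by_camera[cam] for cam in camera_order]
    results[key] = np.hstack(ordered)  # assumes all frames share the same height

def stitch_all(frames_by_camera):
    # Assumed layout: cameras 1-10 on one thread, cameras 11-20 on another.
    groups = {"left": list(range(1, 11)), "right": list(range(11, 21))}
    results, threads = {}, []
    for key, cams in groups.items():
        t = threading.Thread(target=stitch_group,
                             args=(frames_by_camera, cams, results, key))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    return np.hstack([results["left"], results["right"]])
```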
In one embodiment, as shown in FIG. 4, the step of generating a trained target detection model comprises:
step S401, a plurality of first training images containing training targets are obtained, and the first training images carry target labels.
Specifically, a training target is a target present in a first training image; here the target is a human body contained in the image. A target label is the labeling information of each region in the first training image, including but not limited to the coordinate information of the region where the target is located, the target number, and the like. First training images each containing a training target are acquired.
In one embodiment, before acquiring the plurality of first training images including the training targets, the method further comprises: and labeling each first training image. The labeling can be manual and/or automatic.
Step S402, inputting each first training image and the corresponding target label into an initial target detection model, and detecting the first training image through the initial target detection model to obtain a corresponding detection result.
Step S403, adjusting model parameters of the initial target detection model according to the detection result of each first training image and the matching result of the corresponding target label until the initial target detection model meets a first preset convergence condition, to obtain a trained target detection model.
Specifically, each first training image and its corresponding target label are input into the initial target detection model. The model learns the human-body characteristics of each first training image and determines the position information of the human body in each image from the learned characteristics. The degree of difference between each predicted human-body position and the position in the corresponding target label is calculated and summed to obtain a total difference. When the total difference is smaller than a preset difference, the initial target detection model meets the first preset convergence condition, and the trained target detection model is obtained; the first preset convergence condition is a preset critical condition for judging whether the initial target detection model has converged. When the total difference is greater than or equal to the preset difference, the model does not meet the first preset convergence condition, so its parameters are updated and training continues until the total difference is smaller than the preset difference. Because the target detection model learns human-body characteristics from a large number of images carrying target labels, it can identify human bodies accurately.
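A schematic training loop under the convergence convention described above might look as follows; the framework (PyTorch), loss and optimizer are illustrative assumptions, since the patent only requires that some total difference measure fall below a preset value:

```python
import torch
import torch.nn.functional as F

def train_detector(model, loader, max_epochs=100, diff_threshold=0.05):
    """Adjust parameters until the total position difference meets the first convergence condition."""
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # assumed optimizer
    for _ in range(max_epochs):
        total_diff = 0.0
        for images, target_boxes in loader:
            pred_boxes = model(images)  # assumes the model outputs box coordinates directly
            diff = F.smooth_l1_loss(pred_boxes, target_boxes)  # degree of difference
            optimizer.zero_grad()
            diff.backward()
            optimizer.step()
            total_diff += diff.item()
        if total_diff < diff_threshold:  # first preset convergence condition
            break
    return model
```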
In one embodiment, as shown in FIG. 5, the step of generating a trained face recognition model comprises:
step S501, a plurality of second training images containing faces of training targets are obtained, and the second training images carry face labels.
Step S502, inputting each second training image and the corresponding face label into the initial face recognition model, and detecting the second training image through the initial face recognition model to obtain the corresponding recognition result.
Step S503, adjusting model parameters of the initial face recognition model according to the recognition result of each second training image and the matching result of the corresponding face label until the initial face recognition model meets a second preset convergence condition, and obtaining a trained face recognition model.
Specifically, the face of a training target is the face of a target present in a second training image; here the target is a human body contained in the image. A face label is the labeling information of each face in the second training image, including but not limited to the coordinate information of the region where the face is located, the face identification, and the like. Second training images each containing the face of a training target are acquired.
Each second training image and its corresponding face label are input into the initial face recognition model. The model learns the face features of each second training image and determines the user identification corresponding to the face in each image from the learned features. Whether each recognized user identification matches the user identification in the corresponding face label is counted to obtain matching results, from which the recognition accuracy is computed. When the recognition accuracy is greater than or equal to a preset recognition accuracy, the initial face recognition model meets the second preset convergence condition, and the trained face recognition model is obtained; the second preset convergence condition is a preset critical condition for judging whether the initial face recognition model has converged. When the recognition accuracy is smaller than the preset accuracy, the model does not meet the second preset convergence condition, so its parameters are updated and training continues until the accuracy reaches the preset value. Because the face recognition model learns face features from a large number of images carrying face labels, it can recognize faces accurately.
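For contrast with the detector's difference-based criterion, the recognizer's accuracy-based convergence check can be sketched framework-agnostically; the helper names train_step and eval_batches are hypothetical:

```python
def train_recognizer(model, train_step, eval_batches, target_accuracy=0.95, max_rounds=1000):
    """Update parameters until recognition accuracy meets the second preset convergence condition."""
    for _ in range(max_rounds):
        train_step(model)  # one round of parameter updates on labeled face images
        correct = total = 0
        for faces, labels in eval_batches:  # assumed to be a re-iterable list of batches
            predictions = model(faces)
            correct += sum(int(p == l) for p, l in zip(predictions, labels))
            total += len(labels)
        if total and correct / total >= target_accuracy:  # second preset convergence condition
            break
    return model
```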
In a specific embodiment, the production line data processing method includes:
step S601, concatenating data. And (3) connecting videos or pictures input by a plurality of cameras in series according to a preset series rule by using the Opencv visual processing library. Wherein, the serial connection refers to the sequential connection of the video images based on the installation space position. After images are connected in series, the full view of the target area can be seen on a limited display screen. For example, a production line may be shown from beginning to end. When the camera is installed, deployment and installation height and direction angle debugging are carried out strictly according to the actual depth of field requirements of image series connection.
Real-time concatenation and display of the videos or pictures is realized using the multithreading capability of the Python language, in preparation for the next step of target recognition. Multithreading can increase the processing rate of video or images. How the threads are divided, and the scale of a single thread, can be determined according to debugging and specific business requirements. For example, if a scene involves 20 cameras, then for speed and synchronization, cameras No. 1-10 can be processed by one thread and cameras No. 11-20 by another.
Step S602, training the target detection model. An image data set is acquired, and pictures of the production line are screened and labeled with an image labeling tool; the images are stored, screened and labeled according to the data labeling rule and the data storage rule. The labeled images are then used to train a deep learning network model, yielding the trained target detection model.
An end-to-end deep learning neural network model is constructed and used for real-time detection and recognition; detection is roughly ten times faster than with a traditional deep neural network pipeline based on a convolutional neural network. In terms of feature extraction, the difference between an end-to-end convolutional neural network (CNN) and a traditional CNN is that the input of the end-to-end CNN is the image to be detected itself rather than pre-extracted image features: feature extraction and selection are fused into the end-to-end network, with no human intervention needed. In terms of training and detection speed, end-to-end parameter adjustment and optimization need no manual intervention; the optimization process is realized automatically by the machine, raising iteration efficiency from one iteration in minutes to multiple iterations per second.
Step S603, training the face recognition model. A face data set is acquired and subjected to secondary image processing, including image affine transformation, image denoising, image scaling and the like, so that the processed images meet the input requirements of the face recognition model; the processed images are then used to train the model, yielding the trained face recognition model. The trained face recognition model is an end-to-end CNN, and the minimum face picture the model can identify is 50 x 50 pixels.
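An illustrative Python sketch of this secondary processing is shown below; the denoising method, interpolation and 112 x 112 target size are assumptions beyond the stated 50 x 50 minimum:

```python
import cv2

MIN_FACE_SIDE = 50  # minimum recognizable face size stated in the description

def preprocess_face(image_bgr, target_size=(112, 112)):
    """Denoise and scale a face crop so it meets the recognition model's input requirements."""
    if min(image_bgr.shape[:2]) < MIN_FACE_SIDE:
        return None  # below the model's minimum recognizable size
    denoised = cv2.fastNlMeansDenoisingColored(image_bgr)
    return cv2.resize(denoised, target_size, interpolation=cv2.INTER_LINEAR)
```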
Step S604, production line management. An image to be detected is acquired and input into the trained target detection model, which detects the image; the detected people are graded and counted. The image to be detected is segmented according to the detected people to obtain face images containing the faces, which are input into the trained face recognition model to obtain the corresponding recognition results. Corresponding working data is acquired according to each recognition result, and the work of each target to be detected is evaluated according to the working data to obtain the corresponding work evaluation result. Through image detection and face recognition, the production line and its personnel can be monitored and managed reasonably and effectively, improving production efficiency and product quality and realizing the intelligent manufacturing goal of the factory.
In one embodiment, a production line management method is provided, including, but not limited to, the method described in any of the above embodiments of the production line data processing method.
FIG. 2 is a flow diagram of a production line data processing method in one embodiment. It should be understood that, although the steps in the flowchart of FIG. 2 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in FIG. 2 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times; these sub-steps or stages are not necessarily performed sequentially, and may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 6, a production line data processing apparatus 200 is provided, comprising:
the target detection module 201 is configured to detect, through the trained target detection model, an image to be detected including a target to be detected, and obtain a corresponding target to be detected, where the target to be detected is a worker on a production line.
The image segmentation module 202 is configured to obtain a face image of the target to be detected by segmenting the image to be detected.
And the target recognition module 203 is configured to extract a face feature of the face image through the trained face recognition model, and determine a recognition result corresponding to the target to be detected according to the face feature.
And the work evaluation result module 204 is configured to determine a work evaluation result corresponding to each target to be detected according to each target to be detected and the corresponding identification result.
In an embodiment, the production line data processing apparatus 200 further includes:
the first data acquisition module is used for acquiring initial images acquired by a plurality of shooting devices.
And the image splicing module is used for splicing the initial images according to the position relation among the shooting devices to obtain the image to be detected.
In one embodiment, the image stitching module includes:
and the thread creating unit is used for creating a thread for image splicing.
And the image splicing unit is used for distributing corresponding threads for each shooting device, and performing image splicing according to the position relation among the shooting devices through the threads corresponding to the shooting devices to obtain the image to be detected.
In an embodiment, the production line data processing apparatus 200 further includes:
the first training image acquisition module is used for acquiring a plurality of first training images containing training targets, and the first training images carry target labels.
And the detection model training module is used for inputting each first training image and the corresponding target label into the initial target detection model, and detecting the first training image through the initial target detection model to obtain a corresponding detection result.
And the detection model updating module is used for adjusting model parameters of the initial target detection model according to the detection result of each first training image and the matching result of the corresponding target label until the initial target detection model meets a first preset convergence condition, so as to obtain the trained target detection model.
In an embodiment, the production line data processing apparatus 200 further includes:
and the second training data acquisition module is used for acquiring a plurality of second training images containing faces of the training targets, and the second training images carry face labels.
And the recognition model training module is used for inputting each second training image and the corresponding face label into the initial face recognition model, and detecting the second training image through the initial face recognition model to obtain a corresponding recognition result.
And the recognition model updating module is used for adjusting model parameters of the initial face recognition model according to the recognition result of each second training image and the matching result of the corresponding face label until the initial face recognition model meets a second preset convergence condition, so as to obtain the trained face recognition model.
In one embodiment, the job evaluation results module 204 includes:
and the working time calculation unit is used for determining the working time information of each target to be detected according to the time information carried by the image to be detected.
And the working time calculation unit is used for acquiring the working data of the target to be detected corresponding to the identification result.
And the working time calculation unit is used for determining a working evaluation result of the target to be detected according to the working time information and the working data.
In one embodiment, the production line data processing apparatus 200 includes:
and the address acquisition module is used for acquiring the storage address corresponding to each image to be detected.
And the second data acquisition module is used for acquiring the images to be detected according to the storage addresses corresponding to the images to be detected.
FIG. 7 is a diagram illustrating an internal structure of a computer device in one embodiment. The computer device may specifically be the terminal 110 (or the server 120) in FIG. 1. As shown in FIG. 7, the computer device includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the production line data processing method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the production line management method and/or the production line data processing method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a key, a trackball or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad or mouse.
Those skilled in the art will appreciate that the architecture shown in FIG. 7 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply; a particular computing device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, the production line data processing apparatus provided herein may be implemented in the form of a computer program that is executable on a computer device such as that shown in FIG. 7. The memory of the computer device may store the various program modules constituting the production line data processing apparatus, such as the target detection module 201, the image segmentation module 202, the target recognition module 203, and the work evaluation result module 204 shown in FIG. 6. The computer program constituted by these program modules causes the processor to execute the steps of the production line management method and/or the production line data processing method according to the embodiments of the present application described in this specification.
For example, the computer device shown in fig. 7 may perform, by using the target detection module 201 in the production line data processing apparatus shown in fig. 6, detection on the to-be-detected image including the to-be-detected target through the trained target detection model to obtain the corresponding to-be-detected target, where the to-be-detected target is a worker on the production line. The computer device can execute the process of obtaining the face image of the target to be detected by segmenting the image to be detected through the image segmentation module 202. The computer device can execute the extraction of the face features of the face image through the trained face recognition model through the target recognition module 203, and determine the recognition result corresponding to the target to be detected according to the face features. The computer device may determine the work evaluation result corresponding to each target to be detected according to each target to be detected and the corresponding recognition result through the work evaluation result module 204.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program: detecting an image to be detected containing a target to be detected through a trained target detection model to obtain a corresponding target to be detected, wherein the target to be detected is a worker on a production line, segmenting the image to be detected to obtain a face image of the target to be detected, extracting face features of the face image through the trained face recognition model, determining recognition results corresponding to the target to be detected according to the face features, and determining work evaluation results corresponding to the targets to be detected according to the targets to be detected and the corresponding recognition results.
In one embodiment, before detecting the image to be detected containing the object to be detected through the trained object detection model, the computer program when executed by the processor further implements the following steps: acquiring initial images acquired by a plurality of shooting devices, and splicing the initial images according to the position relation among the shooting devices to obtain an image to be detected.
In one embodiment, stitching the initial images according to a preset series rule to obtain the image to be detected includes: creating threads for image stitching, distributing a corresponding thread to each shooting device, and stitching the images through the threads corresponding to the shooting devices according to the positional relationship among the devices to obtain the image to be detected.
In one embodiment, the computer program when executed by the processor further performs the steps of: the method comprises the steps of obtaining a plurality of first training images containing training targets, enabling the first training images to carry target labels, inputting each first training image and the corresponding target label into an initial target detection model, detecting the first training images through the initial target detection model to obtain corresponding detection results, adjusting model parameters of the initial target detection model according to the detection results of each first training image and the matching results of the corresponding target labels until the initial target detection model meets a first preset convergence condition, and obtaining a trained target detection model.
In one embodiment, the computer program when executed by the processor further performs the steps of: the method comprises the steps of obtaining a plurality of second training images containing faces of training targets, enabling the second training images to carry face labels, inputting each second training image and the corresponding face label into an initial face recognition model, detecting the second training images through the initial face recognition model to obtain corresponding recognition results, adjusting model parameters of the initial face recognition model according to the recognition results of each second training image and the matching results of the corresponding face labels until the initial face recognition model meets a second preset convergence condition, and obtaining a trained face recognition model.
In one embodiment, the image to be detected carries time information, and determining the work evaluation result corresponding to each target to be detected according to each target to be detected and the corresponding recognition result includes: determining the working time information of each target to be detected according to the time information carried by the image to be detected, acquiring the working data of the target to be detected corresponding to the recognition result, and determining the work evaluation result of the target to be detected according to the working time information and the working data.
In one embodiment, the computer program when executed by the processor further performs the steps of: and formulating a reward and penalty rule corresponding to the work evaluation result, and acquiring the reward and penalty rule corresponding to the work evaluation result corresponding to each target to be detected, wherein the reward and penalty rule is used for guiding managers on a production line to manage the workers.
In one embodiment, before detecting the image to be detected containing the object to be detected through the trained object detection model, the computer program when executed by the processor further implements the following steps: and acquiring a storage address corresponding to each image to be detected, and acquiring the image to be detected according to the storage address corresponding to each image to be detected.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of: detecting an image to be detected containing a target to be detected through a trained target detection model to obtain a corresponding target to be detected, wherein the target to be detected is a worker on a production line, segmenting the image to be detected to obtain a face image of the target to be detected, extracting face features of the face image through the trained face recognition model, determining recognition results corresponding to the target to be detected according to the face features, and determining work evaluation results corresponding to the targets to be detected according to the targets to be detected and the corresponding recognition results.
In one embodiment, before detecting the image to be detected containing the object to be detected through the trained object detection model, the computer program when executed by the processor further implements the following steps: acquiring initial images acquired by a plurality of shooting devices, and splicing the initial images according to the position relation among the shooting devices to obtain an image to be detected.
In one embodiment, stitching the initial images according to a preset series rule to obtain the image to be detected includes: creating threads for image stitching, distributing a corresponding thread to each shooting device, and stitching the images through the threads corresponding to the shooting devices according to the positional relationship among the devices to obtain the image to be detected.
In one embodiment, the computer program when executed by the processor further performs the steps of: the method comprises the steps of obtaining a plurality of first training images containing training targets, enabling the first training images to carry target labels, inputting each first training image and the corresponding target label into an initial target detection model, detecting the first training images through the initial target detection model to obtain corresponding detection results, adjusting model parameters of the initial target detection model according to the detection results of each first training image and the matching results of the corresponding target labels until the initial target detection model meets a first preset convergence condition, and obtaining a trained target detection model.
In one embodiment, the computer program when executed by the processor further performs the steps of: the method comprises the steps of obtaining a plurality of second training images containing faces of training targets, enabling the second training images to carry face labels, inputting each second training image and the corresponding face label into an initial face recognition model, detecting the second training images through the initial face recognition model to obtain corresponding recognition results, adjusting model parameters of the initial face recognition model according to the recognition results of each second training image and the matching results of the corresponding face labels until the initial face recognition model meets a second preset convergence condition, and obtaining a trained face recognition model.
In one embodiment, the image to be detected carries time information, and determining the work evaluation result corresponding to each target to be detected according to each target to be detected and the corresponding recognition result includes: determining the working time information of each target to be detected according to the time information carried by the image to be detected, acquiring the working data of the target to be detected corresponding to the recognition result, and determining the work evaluation result of the target to be detected according to the working time information and the working data.
In one embodiment, the computer program when executed by the processor further performs the steps of: and formulating a reward and penalty rule corresponding to the work evaluation result, and acquiring the reward and penalty rule corresponding to the work evaluation result corresponding to each target to be detected, wherein the reward and penalty rule is used for guiding managers on a production line to manage the workers.
In one embodiment, before detecting the image to be detected containing the object to be detected through the trained object detection model, the computer program when executed by the processor further implements the following steps: and acquiring a storage address corresponding to each image to be detected, and acquiring the image to be detected according to the storage address corresponding to each image to be detected.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, described so as to enable those skilled in the art to understand or practice it. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. A production line data processing method, the method comprising:
detecting an image to be detected containing a target to be detected through a trained target detection model to obtain the corresponding target to be detected, wherein the target to be detected is a worker on the production line;
segmenting the image to be detected to obtain a face image of the target to be detected;
extracting the face features of the face image through a trained face recognition model, and determining a recognition result corresponding to the target to be detected according to the face features;
and determining a work evaluation result corresponding to each target to be detected according to each target to be detected and the corresponding identification result.
2. The method according to claim 1, wherein before the detecting the image to be detected containing the target to be detected by the trained target detection model, the method further comprises:
acquiring initial images acquired by a plurality of shooting devices;
and splicing the initial images according to the position relation among the shooting devices to obtain the image to be detected.
3. The method according to claim 2, wherein the splicing the initial images according to a preset splicing rule to obtain the image to be detected comprises:
creating a thread for image splicing;
and allocating a corresponding thread to each shooting device, and splicing images according to the position relation among the shooting devices through the thread corresponding to each shooting device to obtain the image to be detected.
4. The method of claim 1, wherein the step of generating the trained object detection model comprises:
acquiring a plurality of first training images containing training targets, wherein the first training images carry target labels;
inputting each first training image and the corresponding target label into an initial target detection model, and detecting the first training image through the initial target detection model to obtain a corresponding detection result;
and adjusting the model parameters of the initial target detection model according to the detection result of each first training image and the matching result of the corresponding target label until the initial target detection model meets a first preset convergence condition, so as to obtain the trained target detection model.
5. The method of claim 1, wherein the step of generating the trained face recognition model comprises:
acquiring a plurality of second training images containing faces of training targets, wherein the second training images carry face labels;
inputting each second training image and the corresponding face label into an initial face recognition model, and detecting the second training image through the initial face recognition model to obtain a corresponding recognition result;
and adjusting the model parameters of the initial face recognition model according to the recognition result of each second training image and the matching result of the corresponding face label until the initial face recognition model meets a second preset convergence condition, so as to obtain the trained face recognition model.
6. The method according to claim 1, wherein the image to be detected carries time information, and the determining the work evaluation result corresponding to each target to be detected according to each target to be detected and the corresponding recognition result comprises:
determining the working time information of each target to be detected according to the time information carried by the image to be detected;
acquiring working data of the target to be detected corresponding to the identification result;
and determining the work evaluation result of the target to be detected according to the working time information and the working data.
7. The method of claim 1, further comprising:
and acquiring a reward and penalty rule corresponding to the work evaluation result of each target to be detected, wherein the reward and penalty rule is used for guiding managers on the production line in managing the workers.
8. The method according to any one of claims 1 to 7, wherein before the detecting the image to be detected containing the target to be detected by the trained target detection model, the method further comprises:
acquiring a storage address corresponding to each image to be detected;
and acquiring the image to be detected according to the storage address corresponding to each image to be detected.
9. A production line management method, comprising the method of any one of claims 1 to 8.
10. A production line data processing apparatus, the apparatus comprising:
the target detection module is used for detecting an image to be detected containing a target to be detected through the trained target detection model to obtain a corresponding target to be detected, wherein the target to be detected is a worker on the production line;
the image segmentation module is used for obtaining a face image of the target to be detected by segmenting the image to be detected;
the target recognition module is used for extracting the face features of the face image through the trained face recognition model and determining a recognition result corresponding to the target to be detected according to the face features;
and the work evaluation result module is used for determining the work evaluation result corresponding to each target to be detected according to each target to be detected and the corresponding identification result.
11. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 9 are implemented when the computer program is executed by the processor.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 9.
CN201811536905.5A 2018-12-14 2018-12-14 Production line data processing method and device, computer equipment and storage medium Active CN111325069B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811536905.5A CN111325069B (en) 2018-12-14 2018-12-14 Production line data processing method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111325069A true CN111325069A (en) 2020-06-23
CN111325069B CN111325069B (en) 2022-06-10

Family

ID=71163405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811536905.5A Active CN111325069B (en) 2018-12-14 2018-12-14 Production line data processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111325069B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279768A (en) * 2013-05-31 2013-09-04 北京航空航天大学 Method for identifying faces in videos based on incremental learning of face partitioning visual representations
CN104376611A (en) * 2014-10-20 2015-02-25 胡昔兵 Method and device for attendance of persons descending well on basis of face recognition
WO2016088369A1 (en) * 2014-12-04 2016-06-09 日本電気株式会社 Information processing device, conduct evaluation method, and program storage medium
CN108229855A (en) * 2018-02-06 2018-06-29 上海小蚁科技有限公司 Method for monitoring service quality and device, computer readable storage medium, terminal

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111950621A (en) * 2020-08-10 2020-11-17 中国平安人寿保险股份有限公司 Target data detection method, device, equipment and medium based on artificial intelligence
CN112766059A (en) * 2020-12-30 2021-05-07 深圳市裕展精密科技有限公司 Method and device for detecting product processing quality
CN112766059B (en) * 2020-12-30 2024-05-03 富联裕展科技(深圳)有限公司 Method and device for detecting product processing quality
CN113030996A (en) * 2021-03-03 2021-06-25 首钢京唐钢铁联合有限责任公司 Production line equipment position offset detection method, system, equipment and medium
CN113030996B (en) * 2021-03-03 2022-12-13 首钢京唐钢铁联合有限责任公司 Production line equipment position offset detection method, system, equipment and medium
CN113128876A (en) * 2021-04-22 2021-07-16 北京房江湖科技有限公司 Image-based object management method, device and computer-readable storage medium
WO2023279785A1 (en) * 2021-07-06 2023-01-12 上海商汤智能科技有限公司 Method and apparatus for detecting whether staff member is compatible with post, computer device, and storage medium

Also Published As

Publication number Publication date
CN111325069B (en) 2022-06-10

Similar Documents

Publication Publication Date Title
CN111325069B (en) Production line data processing method and device, computer equipment and storage medium
CN109583489B (en) Defect classification identification method and device, computer equipment and storage medium
CN108764048B (en) Face key point detection method and device
WO2019232843A1 (en) Handwritten model training method and apparatus, handwritten image recognition method and apparatus, and device and medium
CN109543627A (en) A kind of method, apparatus and computer equipment judging driving behavior classification
US11004204B2 (en) Segmentation-based damage detection
WO2020238256A1 (en) Weak segmentation-based damage detection method and device
TWI716012B (en) Sample labeling method, device, storage medium and computing equipment, damage category identification method and device
CN109934196A (en) Human face posture parameter evaluation method, apparatus, electronic equipment and readable storage medium storing program for executing
WO2019232850A1 (en) Method and apparatus for recognizing handwritten chinese character image, computer device, and storage medium
CN106295598A (en) A kind of across photographic head method for tracking target and device
CN111126339A (en) Gesture recognition method and device, computer equipment and storage medium
US11640660B2 (en) Industrial internet of things, control methods and storage medium based on machine visual detection
CN112613569A (en) Image recognition method, and training method and device of image classification model
CN113344862A (en) Defect detection method, defect detection device, electronic equipment and storage medium
CN113706481A (en) Sperm quality detection method, sperm quality detection device, computer equipment and storage medium
CN110517221B (en) Gap positioning method and device based on real coordinates and storage medium
CN112989901A (en) Deep learning-based liquid level meter reading identification method
CN109919017B (en) Face recognition optimization method, device, computer equipment and storage medium
CN113780145A (en) Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium
CN116863288A (en) Target detection and alarm method, device and equipment based on deep learning
CN111126376A (en) Picture correction method and device based on facial feature point detection and computer equipment
CN115690514A (en) Image recognition method and related equipment
CN111191706A (en) Picture identification method, device, equipment and storage medium
Midwinter et al. Unsupervised defect segmentation with pose priors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant