CN110309768B - Method and equipment for detecting staff at vehicle inspection station

Method and equipment for detecting staff at vehicle inspection station

Info

Publication number
CN110309768B
CN110309768B (application CN201910576535.6A)
Authority
CN
China
Prior art keywords
image
vehicle inspection
inspection station
clothing
personnel
Prior art date
Legal status
Expired - Fee Related
Application number
CN201910576535.6A
Other languages
Chinese (zh)
Other versions
CN110309768A (en)
Inventor
周康明
Current Assignee
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd
Priority to CN201910576535.6A
Publication of CN110309768A
Application granted
Publication of CN110309768B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention aims to provide a method and equipment for detecting staff at a vehicle inspection station. A face image in an image of the vehicle inspection station is compared with a face template image of the staff to obtain a first comparison result; a clothing region image in the image of the vehicle inspection station is compared with a clothing template image of the staff to obtain a second comparison result; the mutual positional relationship between the human body key points in the image of the vehicle inspection station is compared with the mutual positional relationship between the template human body key points to obtain a third comparison result; and whether the staff detection at the vehicle inspection station is qualified is judged based on the first, second and third comparison results. This improves the accuracy of staff detection, realizes identity recognition, dress recognition and prescribed-action recognition of the staff, and fully automates the auditing process, thereby saving manpower and ensuring the openness and fairness of the detection work.

Description

Method and equipment for detecting staff at vehicle inspection station
Technical Field
The invention relates to the field of computers, in particular to a method and equipment for detecting workers at a vehicle inspection station.
Background
With the continuous development of the social economy and the continuous improvement of people's living standards, the number of motor vehicles in cities has increased rapidly, and the workload of annual motor-vehicle inspection has grown accordingly. In traditional annual inspection, the staff at the vehicle inspection station are checked mainly by manual inspection. This method has high labor cost and low efficiency, and long periods of repetitive inspection easily make the inspectors fatigued and inattentive, which affects detection accuracy.
How to detect the staff at vehicle inspection stations dynamically, accurately and quickly, while overcoming the drawbacks of manual inspection such as high labor cost and inspectors who are prone to fatigue and oversight, is an urgent technical problem to be solved.
Disclosure of Invention
The invention aims to provide a method and equipment for detecting workers at a vehicle inspection station.
According to one aspect of the invention, a staff detection method of a vehicle inspection station is provided, and the method comprises the following steps:
acquiring an image of a vehicle inspection station;
comparing the face image in the image of the vehicle inspection station with a face template image of a worker to obtain a first comparison result;
comparing the clothing region image in the image of the vehicle inspection station with the clothing template image of the worker to obtain a second comparison result;
comparing the mutual position relationship between the human body key point positions in the image of the vehicle inspection station with the mutual position relationship between the template human body key point positions to obtain a third comparison result;
and judging whether the detection of the staff at the vehicle inspection station is qualified or not based on the first comparison result, the second comparison result and the third comparison result.
Further, in the above method, comparing the face image in the image of the vehicle inspection station with the face template image of the worker to obtain a first comparison result, the method includes:
judging whether a personnel area image exists in the image of the vehicle inspection station by adopting a personnel detection model based on a deep learning network, and if the personnel area image does not exist, marking a personnel area flag bit corresponding to the image of the vehicle inspection station; if the personnel area image exists, judging whether a person exists in the personnel area image by adopting a personnel classification model based on a deep learning network, and if no person exists, marking a personnel flag bit corresponding to the image of the vehicle inspection station;
if a person exists, acquiring the personnel area image, detecting whether a face area image exists in the personnel area image by adopting a face detection model based on a deep learning network, and if the face area image does not exist, marking a face flag bit corresponding to the image of the vehicle inspection station; and if a face exists, acquiring the face area image and comparing it with the face template image of the worker to obtain a first comparison result.
Further, in the above method, before determining whether there is a person region image in the image of the vehicle inspection station by using a person detection model based on a deep learning network, the method further includes:
acquiring images of sample vehicle inspection stations under different illumination and different shooting angles;
marking the positions of the personnel in the image of the sample vehicle inspection station by using a rectangular frame, and marking the positions as the personnel to obtain a marked sample image;
and training a target detection deep neural network model by using the marked sample image to obtain a personnel detection model based on a deep learning network.
Further, in the above method, before judging whether a person exists in the personnel area image by adopting a personnel classification model based on a deep learning network, the method further includes:
obtaining sample personnel area images at different positions in the image of the sample vehicle inspection station by utilizing the personnel detection model based on the deep learning network;
classifying the sample personnel area images into manned images and unmanned images;
and training a target classification deep neural network model by using the manned image and the unmanned image to obtain a personnel classification model based on a deep learning network.
Further, in the above method, before detecting whether a face region image exists in the person region image by using a face detection model based on a deep learning network, the method further includes:
acquiring different sample personnel area images;
marking the position of a face region image in the sample personnel region image by using a rectangular frame, and marking the face region image as a face;
and training a target detection deep neural network model by using the marked sample personnel area image to obtain a human face detection model based on a deep learning network.
Further, in the above method, comparing the clothing region image in the image of the vehicle inspection station with the clothing template image of the worker to obtain a second comparison result, including:
judging whether a clothing region image exists in the image of the vehicle inspection station by adopting a clothing detection model based on a deep learning network, and if the clothing region image does not exist, marking a clothing flag bit corresponding to the image of the vehicle inspection station;
and if the clothing region image exists, acquiring the clothing region image, and comparing the clothing region image with the clothing template image of the worker to obtain a second comparison result.
Further, in the above method, before determining whether there is a clothing region image in the image of the vehicle inspection station by using a clothing detection model based on a deep learning network, the method further includes:
acquiring different sample personnel area images;
marking the position of the clothing region image in the sample personnel region image by using a rectangular frame, and marking the clothing region image as clothing;
and training a target detection deep neural network model by using the marked personnel area image to obtain a clothing detection model based on a deep learning network.
Further, in the above method, comparing the mutual position relationship between the positions of the human key points in the image of the vehicle inspection station with the mutual position relationship between the positions of the template human key points to obtain a third comparison result, including:
acquiring key point positions of the human body in the personnel area image by adopting a human body key point detection network model based on deep learning, and if the key point positions of the human body are not acquired, marking a key point flag bit corresponding to the image of the vehicle inspection station;
and if the human body key point positions are obtained, comparing the mutual position relationship of the obtained human body key point positions with the mutual position relationship of the template human body key point positions to obtain a third comparison result.
Further, in the above method, before the human key point position is obtained from the person region image by using a deep learning-based human key point detection network model, the method further includes:
acquiring different sample personnel area images;
marking the coordinates of key points including the top of the head, the neck, the left shoulder, the right shoulder, the left elbow, the right elbow, the left wrist, the right wrist, the chest cavity and the pelvis in each sample personnel area image;
and training a key point target detection deep neural network model by using the marked sample personnel area image to obtain a human body key point detection network model based on deep learning.
Further, in the above method, determining whether the detection of the staff at the vehicle inspection station is qualified based on the first, second and third comparison results includes:
and judging whether the detection of the staff at the vehicle inspection station is qualified or not based on the personnel area flag bit, the personnel flag bit, the face flag bit, the clothing flag bit and the first, second and third comparison results.
According to another aspect of the present invention, there is also provided a staff detecting apparatus of a car inspection station, the apparatus including:
the acquisition device is used for acquiring an image of the vehicle inspection station;
the first comparison device is used for comparing the face image in the image of the vehicle inspection station with the face template image of the worker to obtain a first comparison result;
the second comparison device is used for comparing the clothing region image in the image of the vehicle inspection station with the clothing template image of the worker to obtain a second comparison result;
the third comparison device is used for comparing the mutual position relationship between the human key point positions in the image of the vehicle inspection station with the mutual position relationship between the template human key point positions to obtain a third comparison result;
and the judging device is used for judging whether the detection of the staff at the vehicle inspection station is qualified or not based on the first comparison result, the second comparison result and the third comparison result.
According to another aspect of the present invention, there is also provided a computing-based device, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the operations of any of the methods described above.
Compared with the prior art, the method can be applied to chassis-station personnel detection in the annual inspection of motor vehicles. A face image in the image of the vehicle inspection station is compared with the face template image of a worker to obtain a first comparison result; the clothing region image in the image of the vehicle inspection station is compared with the clothing template image of the worker to obtain a second comparison result; the mutual positional relationship between the human body key points in the image of the vehicle inspection station is compared with the mutual positional relationship between the template human body key points to obtain a third comparison result; and whether the detection of the staff at the vehicle inspection station is qualified is judged based on the first, second and third comparison results. This improves the accuracy of staff detection, realizes identity recognition, dress recognition and prescribed-action recognition of the staff, and fully automates the auditing process, thereby saving manpower and ensuring the openness and fairness of the detection work.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings:
fig. 1 is a flowchart illustrating a method for detecting a worker at a vehicle inspection station according to an embodiment of the present invention.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present invention is described in further detail below with reference to the attached drawing figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media does not include transitory computer readable media (transitory media), such as modulated data signals and carrier waves.
The invention provides a method for detecting workers at a vehicle inspection station, which comprises the following steps:
step S1, acquiring an image of a vehicle inspection station;
the image of the vehicle inspection station can be an image of a chassis station of vehicle inspection, the image of the chassis station can be collected and uploaded to the server, and when the image needs to be inspected, the image of the chassis station is obtained from the server;
step S2, comparing the face image in the image of the vehicle inspection station with the face template image of the worker to obtain a first comparison result;
step S3, comparing the clothing region image in the image of the vehicle inspection station with the clothing template image of the worker to obtain a second comparison result;
step S4, comparing the mutual position relationship between the human body key point positions in the image of the vehicle inspection station with the mutual position relationship between the template human body key point positions to obtain a third comparison result;
and step S5, judging whether the detection of the staff at the vehicle inspection station is qualified or not based on the first comparison result, the second comparison result and the third comparison result.
The method can be applied to chassis-station personnel detection in the annual inspection of motor vehicles. A face image in the image of the vehicle inspection station is compared with the face template image of a worker to obtain a first comparison result; the clothing region image in the image of the vehicle inspection station is compared with the clothing template image of the worker to obtain a second comparison result; the mutual positional relationship between the human body key points in the image of the vehicle inspection station is compared with the mutual positional relationship between the template human body key points to obtain a third comparison result; and whether the detection of the staff at the vehicle inspection station is qualified is judged based on the first, second and third comparison results. This improves the accuracy of staff detection, realizes identity recognition, dress recognition and prescribed-action recognition of the staff, and fully automates the auditing process, thereby saving manpower and ensuring the openness and fairness of the detection work.
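As an illustration of this flow, the following Python sketch strings steps S2 to S5 together and records the flag bits described in the embodiments below. It is only a minimal outline under stated assumptions: the callables in the `models` dictionary and the flag names are illustrative placeholders for the deep-learning models the patent describes, not identifiers taken from the patent.

```python
def detect_worker(image, templates, models):
    """Sketch of steps S1-S5: run the detectors and record 0/1 flag bits.

    `models` is assumed to hold callables: person_detector, person_classifier,
    face_detector, face_matcher, clothing_detector, clothing_matcher,
    keypoint_detector, keypoint_matcher. All names are placeholders.
    """
    flags = {}
    region = models["person_detector"](image)              # person region detection
    flags["person_region"] = int(region is not None)
    if region is None:
        return flags                                        # process ends here
    flags["person"] = int(models["person_classifier"](region))
    if not flags["person"]:
        return flags                                        # process ends here
    face = models["face_detector"](region)                  # step S2
    flags["face"] = int(face is not None)
    if face is not None:
        flags["face_ok"] = int(models["face_matcher"](face, templates["face"]))
    clothing = models["clothing_detector"](region)          # step S3
    flags["clothing"] = int(clothing is not None)
    if clothing is not None:
        flags["clothing_ok"] = int(models["clothing_matcher"](clothing, templates["clothing"]))
    keypoints = models["keypoint_detector"](region)         # step S4
    flags["keypoints"] = int(keypoints is not None)
    if keypoints is not None:
        flags["keypoints_ok"] = int(models["keypoint_matcher"](keypoints, templates["keypoints"]))
    return flags                                            # step S5 judges these flags
```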
In an embodiment of the method for detecting a worker at a vehicle inspection station, in step S2, the method compares a face image in an image of the vehicle inspection station with a face template image of the worker to obtain a first comparison result, including:
step S21, judging whether a personnel area image exists in the image of the vehicle inspection station by adopting a personnel detection model based on a deep learning network, and if the personnel area image does not exist, marking a personnel area flag bit corresponding to the image of the vehicle inspection station; if the personnel area image exists, judging whether a person exists in the personnel area image by adopting a personnel classification model based on a deep learning network, and if no person exists, marking a personnel flag bit corresponding to the image of the vehicle inspection station;
step S22, if a person exists, acquiring the personnel area image, detecting whether a face area image exists in the personnel area image by adopting a face detection model based on a deep learning network, and if the face area image does not exist, marking a face flag bit corresponding to the image of the vehicle inspection station; and if a face exists, acquiring the face area image and comparing it with the face template image of the worker to obtain a first comparison result.
In this case, as shown in fig. 1, a person detection model based on a deep learning network is used to detect persons in the chassis workstation image, and whether a person region image is acquired is judged,
if the personnel area image cannot be acquired, marking the personnel area flag bit corresponding to the image of the vehicle inspection station as 0, and ending the process;
if the personnel area image can be acquired, judging whether the personnel in the personnel area image really exist or not by combining a deep learning network-based personnel classification model,
if not, marking the personnel flag bit corresponding to the image of the vehicle inspection station as 0, and ending the process;
if yes, entering the next process;
detecting human face in the personnel area image by adopting a human face detection model based on a deep learning network, judging whether the human face area image exists or not,
if the face region image does not exist, marking the face flag bit corresponding to the image of the vehicle inspection station as 0, and entering the next process;
if the face area image exists, the face area image is obtained and compared with the face template image of the worker in the database to judge whether the person is a registered worker, and the related result is recorded, for example,
if the face detection is qualified, marking a flag bit of whether the corresponding face is qualified as 1, and then entering the next process;
and if the face detection is unqualified, marking the flag bit of whether the corresponding face is qualified as 0, and then entering the next process.
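The patent does not fix how the face area image is compared with the face template image. One common realization is to embed both crops with a face-recognition network and threshold the cosine similarity, as in the sketch below; `embed` is assumed to be any callable returning a 1-D feature vector, and the 0.6 threshold is an illustrative value, not one taken from the patent.

```python
import numpy as np

def compare_face(face_img, template_img, embed, threshold=0.6):
    """Return the face-qualified flag (1/0) from cosine similarity of embeddings.

    This is a generic sketch of template comparison, not the patent's specific
    method; `embed` and `threshold` are assumed placeholders.
    """
    a = np.asarray(embed(face_img), dtype=float)
    b = np.asarray(embed(template_img), dtype=float)
    sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return 1 if sim >= threshold else 0
```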
In an embodiment of the method for detecting a worker at a vehicle inspection station, in step S21, before determining whether there is a person region image in an image of the vehicle inspection station by using a person detection model based on a deep learning network, the method further includes:
step S211, obtaining images of sample vehicle inspection stations under different illumination and different shooting angles;
step S212, marking the positions of the personnel in the image of the sample vehicle inspection station by adopting a rectangular frame, and marking the positions as the personnel to obtain a marked sample image;
step S213, training a target detection deep neural network model by using the marked sample image to obtain a personnel detection model based on a deep learning network.
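Steps S211 to S213 describe annotating persons with rectangular boxes and training a "target detection deep neural network model" without naming a specific architecture. The sketch below uses a torchvision Faster R-CNN purely as an example detector under that assumption; the dataset layout (a list of image tensors and per-image box tensors) is likewise assumed.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_person_detector(num_classes=2):
    # num_classes = background + "person"; Faster R-CNN stands in for the
    # unspecified target detection deep neural network model.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def train_step(model, optimizer, images, boxes_per_image):
    # torchvision detection models take a list of image tensors and a list of
    # target dicts with "boxes" ([N, 4] float) and "labels" ([N] int64).
    model.train()
    targets = [{"boxes": b, "labels": torch.ones((len(b),), dtype=torch.int64)}
               for b in boxes_per_image]
    loss = sum(model(images, targets).values())   # dict of losses in train mode
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```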
In an embodiment of the method for detecting a worker at a vehicle inspection station, in step S21, before judging whether a person exists in the personnel area image by adopting a personnel classification model based on a deep learning network, the method further includes:
s214, obtaining sample personnel area images at different positions in the image of the sample vehicle inspection station by utilizing the personnel detection model based on the deep learning network;
s215, classifying the sample personnel area images into manned images and unmanned images;
and S216, training a target classification deep neural network model by using the manned image and the unmanned image to obtain a personnel classification model based on a deep learning network.
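For the person/no-person classifier of steps S214 to S216 the patent again leaves the network unspecified; a small image classifier fine-tuned on the cropped region images is one plausible choice. The sketch below assumes the crops are resized to standard ImageNet-sized tensors, and ResNet-18 is an illustrative backbone, not the claimed model.

```python
import torch.nn as nn
import torchvision

def build_person_classifier():
    # Two output classes: 0 = unmanned region image, 1 = manned region image.
    model = torchvision.models.resnet18(weights="DEFAULT")
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model
```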
In an embodiment of the method for detecting a worker at a vehicle inspection station, in step S22, before detecting whether a face region image exists in the person region image by using a face detection model based on a deep learning network, the method further includes:
s221, acquiring different sample personnel area images;
s222, marking the position of a face region image in the sample person region image by using a rectangular frame, and marking the face region image as a face;
and S223, training a target detection deep neural network model by using the marked sample personnel area image to obtain a human face detection model based on a deep learning network.
In an embodiment of the method for detecting a worker at a vehicle inspection station, in step S3, the clothing region image in the image of the vehicle inspection station is compared with the clothing template image of the worker to obtain a second comparison result, which includes:
step S31, judging whether a clothing region image exists in the image of the vehicle inspection station by adopting a clothing detection model based on a deep learning network, and if not, marking a clothing flag bit corresponding to the image of the vehicle inspection station;
and step S32, if the clothing region image exists, acquiring the clothing region image, and comparing the clothing region image with the clothing template image of the worker to obtain a second comparison result.
Here, as shown in fig. 1, a clothing detection model based on a deep learning network is used to detect clothing of a person in a person region image, and whether a clothing region image exists or not is judged,
if the clothing region image does not exist, marking the clothing flag bit corresponding to the image of the vehicle inspection station as 0, and entering the next process;
if the clothing region image exists, the clothing region image is obtained and compared with the worker clothing template in the database to judge whether the worker's clothing conforms to the regulations, and the related result is recorded, for example,
if the clothing of the person is detected to be qualified, marking a flag bit of whether the corresponding clothing is qualified as 1, and then entering the next process;
and if the clothing detection of the personnel is unqualified, marking the flag bit of whether the corresponding clothing is qualified as 0, and then entering the next process.
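The clothing check only states that the clothing region image is compared with a clothing template. A deliberately simple stand-in, shown below, compares hue/saturation histograms with OpenCV; both the histogram measure and the 0.7 threshold are assumptions for illustration, not the patent's method.

```python
import cv2
import numpy as np

def compare_clothing(region_bgr, template_bgr, threshold=0.7):
    """Return the clothing-qualified flag (1/0) from histogram correlation.

    Illustrative only: the patent does not specify the comparison measure.
    """
    def hist(img):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
        return cv2.normalize(h, h).flatten().astype(np.float32)

    score = cv2.compareHist(hist(region_bgr), hist(template_bgr), cv2.HISTCMP_CORREL)
    return 1 if score >= threshold else 0
```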
In an embodiment of the method for detecting a worker at a vehicle inspection station, in step S31, before determining whether a clothing region image exists in an image of the vehicle inspection station by using a clothing detection model based on a deep learning network, the method further includes:
s311, obtaining different sample personnel area images;
s312, marking the position of the clothing region image in the sample personnel region image by using a rectangular frame, and marking the position as clothing;
s313, training a target detection deep neural network model by using the marked personnel area image to obtain a clothing detection model based on a deep learning network.
In an embodiment of the method for detecting a worker at a vehicle inspection station, in step S4, the step of comparing the mutual position relationship between the human key point positions in the image of the vehicle inspection station with the mutual position relationship between the template human key point positions to obtain a third comparison result includes:
step S41, acquiring the positions of the key points of the human body in the personnel area image by adopting a human body key point detection network model based on deep learning, and if the positions of the key points of the human body are not acquired, marking a key point flag bit corresponding to the image of the vehicle inspection station;
step S42, if the human body key point positions are obtained, comparing the mutual position relationship of the obtained human body key point positions with the mutual position relationship of the template human body key point positions to obtain a third comparison result.
Here, as shown in fig. 1, a human body key point detection network model based on deep learning is used to detect key points such as head, shoulder, elbow, wrist, chest and the like of a person in a person region image, and the positions of the key points are acquired,
if the positions of the key points cannot be obtained, marking the key point flag bit corresponding to the image of the vehicle inspection station as 0, and entering the next process;
if the positions of the key points can be obtained, the position relation of the key points is utilized to judge whether the action of the staff is in accordance with the regulation or not, and the relevant result is recorded, for example,
if the key point position is qualified, marking a flag bit of whether the corresponding key point position is qualified as 1, and then entering the next process;
and if the detection of the key point position is unqualified, marking the flag bit of whether the corresponding key point position is qualified as 0, and then entering the next process.
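A sketch of the "mutual position relationship" check follows: both the detected and the template key points (dicts mapping names such as "neck" and "pelvis" to (x, y) coordinates) are normalized by the neck-to-pelvis distance, and the pose is accepted when every normalized offset from the neck stays within a tolerance of the template. The normalization scheme and the 0.2 tolerance are assumptions for illustration; the patent does not fix a specific measure.

```python
import numpy as np

def compare_keypoints(points, template, tolerance=0.2):
    """Return the key-point-position-qualified flag (1/0).

    `points` and `template` are dicts of key-point name -> (x, y); "neck" and
    "pelvis" are assumed to be present. Illustrative measure only.
    """
    def normalise(kp):
        neck = np.asarray(kp["neck"], dtype=float)
        pelvis = np.asarray(kp["pelvis"], dtype=float)
        scale = np.linalg.norm(pelvis - neck) + 1e-6
        return {name: (np.asarray(p, dtype=float) - neck) / scale for name, p in kp.items()}

    a, b = normalise(points), normalise(template)
    deviations = [np.linalg.norm(a[name] - b[name]) for name in b if name in a]
    return 1 if deviations and max(deviations) <= tolerance else 0
```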
In an embodiment of the method for detecting a worker at a vehicle inspection station, in step S41, before obtaining the human body key point positions in the personnel area image by using a human body key point detection network model based on deep learning, the method further includes:
s411, acquiring different sample personnel area images;
S412, marking the coordinates of key points including the top of the head, the neck, the left shoulder, the right shoulder, the left elbow, the right elbow, the left wrist, the right wrist, the chest and the pelvis in each sample personnel area image;
and S413, training a key point target detection deep neural network model by using the marked sample personnel area image to obtain a human body key point detection network model based on deep learning.
In an embodiment of the method for detecting a worker at a vehicle inspection station, in step S5, the step of determining whether the worker at the vehicle inspection station is qualified based on the first, second, and third comparison results includes:
and judging whether the detection of the staff at the vehicle inspection station is qualified or not based on the personnel area flag bit, the personnel flag bit, the face flag bit, the clothing flag bit and the first, second and third comparison results.
Whether the detection of the staff at the vehicle inspection station is qualified can be judged according to the personnel area flag bit, the personnel flag bit, the face flag bit, the clothing flag bit and the first, second and third comparison results, where the first, second and third comparison results correspond respectively to the face-qualified flag bit, the clothing-qualified flag bit and the key-point-position-qualified flag bit. For example, the result of the whole process may be analyzed: if all the finally recorded flag bits are 1, the image of the vehicle inspection station passes the detection; if at least one of the finally recorded flag bits is 0, the image fails the detection, and the reason for the failure and the related image may be output according to which flag bit is marked as 0.
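Concretely, the final judgment reduces to checking that every recorded flag bit equals 1 and reporting the reason for any 0 bit, as in the sketch below. The flag names and reason strings are illustrative; the patent describes the flags only in prose.

```python
# Illustrative flag names mapped to rejection reasons; not identifiers from the patent.
FLAG_REASONS = {
    "person_region": "no person region detected",
    "person":        "person region contains no person",
    "face":          "no face detected",
    "face_ok":       "face does not match a registered worker",
    "clothing":      "no clothing region detected",
    "clothing_ok":   "clothing does not conform to regulations",
    "keypoints":     "human key points not detected",
    "keypoints_ok":  "worker action does not conform to regulations",
}

def judge(flags):
    """Return (qualified, reasons): qualified only if every recorded flag is 1."""
    failed = [name for name, value in flags.items() if value == 0]
    if not failed:
        return True, []
    return False, [FLAG_REASONS.get(name, name) for name in failed]
```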
As shown in fig. 1, in a specific embodiment of the staff detection method of the vehicle inspection station of the present invention, the method includes:
s101, acquiring an image of a chassis station from a server;
s102, detecting personnel in the chassis station image by adopting a target detection model based on deep learning, judging whether to acquire a personnel region image, if not, marking a personnel region zone bit corresponding to the image of the vehicle inspection station as 0, and ending the process; if the personnel area image can be acquired, judging whether personnel in the personnel area image really exist or not by combining a target classification model based on deep learning, if not, marking a personnel zone bit corresponding to the image of the vehicle inspection station as 0, and ending the process; if yes, entering the next process;
s103, detecting a face in the personnel area image by adopting a target detection network model based on deep learning, judging whether the face area image exists, if not, marking a face zone bit corresponding to the image of the vehicle inspection station as 0, and entering the next process; if the face area image exists, acquiring the face area image, comparing the face area image with a face template image of a worker in a database, judging whether the worker is the worker, and recording a related result, for example, if the face detection is qualified, marking a corresponding face qualified flag bit as 1, if the face detection is unqualified, marking a corresponding face qualified flag bit as 0, and then entering the next process;
s104, detecting the clothing of the person by adopting a target detection network model based on deep learning in the person region image, judging whether the clothing region image exists, if not, marking the clothing zone bit corresponding to the image of the vehicle detection station as 0, and entering the next process; if the clothing region image exists, acquiring the clothing region image, comparing the clothing region image with a worker clothing pattern in a database, judging whether the clothing of the worker meets the regulations, and recording related results, for example, if the clothing of the worker is detected to be qualified, marking a flag bit of whether the corresponding clothing is qualified as 1, and if the clothing of the worker is detected to be unqualified, marking a flag bit of whether the corresponding clothing is qualified as 0, and then entering the next process;
s105, detecting key points such as heads, shoulders, elbows, wrists and chests of the personnel in the personnel area image by adopting a human body key point detection network model based on deep learning, acquiring the positions of the key points, marking the key point mark positions corresponding to the image of the vehicle inspection station as 0 if the positions of the key points cannot be acquired, and entering the next process; if the positions of the key points can be obtained, judging whether the actions of the workers meet the regulations or not by utilizing the mutual position relation of the key points, recording the relevant results, marking the flag bit of whether the corresponding key point position is qualified or not as 1 if the key point position is tested to be qualified, marking the flag bit of whether the corresponding key point position is qualified or not as 0 if the key point position is not tested to be qualified, and then entering the next process;
and S106, analyzing the result of the whole process, if all the flag bits recorded finally are 1, indicating that the chassis station photos are qualified, if at least one of the finally recorded flag bits is 0, indicating that the chassis station photos are unqualified, and outputting the reason for unqualified detection and related images according to the position of the 0 flag bit.
According to another aspect of the present invention, there is also provided a staff detecting apparatus of a car inspection station, the apparatus including:
the acquisition device is used for acquiring an image of the vehicle inspection station;
the first comparison device is used for comparing the face image in the image of the vehicle inspection station with the face template image of the worker to obtain a first comparison result;
the second comparison device is used for comparing the clothing region image in the image of the vehicle inspection station with the clothing template image of the worker to obtain a second comparison result;
the third comparison device is used for comparing the mutual position relationship between the human key point positions in the image of the vehicle inspection station with the mutual position relationship between the template human key point positions to obtain a third comparison result;
and the judging device is used for judging whether the detection of the staff at the vehicle inspection station is qualified or not based on the first comparison result, the second comparison result and the third comparison result.
According to another aspect of the present invention, there is also provided a computing-based device, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the operations of any of the methods described above.
For details of embodiments of each device and storage medium of the present invention, reference may be made to corresponding parts of each method embodiment, and details are not described herein again.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
It should be noted that the present invention may be implemented in software and/or in a combination of software and hardware, for example, as an Application Specific Integrated Circuit (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software program of the present invention may be executed by a processor to implement the steps or functions described above. Also, the software programs (including associated data structures) of the present invention can be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Further, some of the steps or functions of the present invention may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present invention can be applied as a computer program product, such as computer program instructions, which when executed by a computer, can invoke or provide the method and/or technical solution according to the present invention through the operation of the computer. Program instructions which invoke the methods of the present invention may be stored on a fixed or removable recording medium and/or transmitted via a data stream on a broadcast or other signal-bearing medium and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the invention herein comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or solution according to embodiments of the invention as described above.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (10)

1. A staff detection method of a vehicle inspection station is characterized by comprising the following steps:
acquiring an image of a vehicle inspection station, wherein the image of the vehicle inspection station is an image of a chassis station of the vehicle inspection;
judging whether a personnel area image exists in the image of the vehicle inspection station; if not, marking a personnel area flag bit corresponding to the image of the vehicle inspection station and marking it as 0; if the personnel area image exists, judging whether a person exists in the personnel area image; if not, marking a personnel flag bit corresponding to the image of the vehicle inspection station and marking it as 0; if a person exists, acquiring the personnel area image and detecting whether a face area image exists; if not, marking a face flag bit corresponding to the image of the vehicle inspection station and marking it as 0; if a face exists, comparing the face image in the image of the vehicle inspection station with a face template image of a worker to obtain a first comparison result, wherein the first comparison result is a face-qualified flag bit marked as 1 when the face detection is qualified or marked as 0 when the face detection is unqualified;
judging whether a clothing region image exists in the image of the vehicle inspection station; if not, marking a clothing flag bit corresponding to the image of the vehicle inspection station and marking it as 0; if the clothing region image exists, comparing the clothing region image in the image of the vehicle inspection station with the clothing template image of the worker to judge whether the clothing of the worker meets the regulations and obtain a second comparison result, wherein the second comparison result is a clothing-qualified flag bit marked as 1 when the clothing detection is qualified or marked as 0 when the clothing detection is unqualified;
acquiring human body key point positions in the personnel area image; if the human body key point positions are not acquired, marking a key point flag bit corresponding to the image of the vehicle inspection station and marking it as 0; if the human body key point positions are acquired, comparing the mutual positional relationship between the human body key point positions in the image of the vehicle inspection station with the mutual positional relationship between the template human body key point positions to obtain a third comparison result, wherein the third comparison result is a key-point-position-qualified flag bit marked as 1 when the key point position detection is qualified or marked as 0 when the key point position detection is unqualified;
judging whether the detection of the staff at the vehicle inspection station is qualified based on the marked personnel area flag bit, the personnel flag bit, the face flag bit, the clothing flag bit, the key point flag bit, the face-qualified flag bit in the first comparison result, the clothing-qualified flag bit in the second comparison result and the key-point-position-qualified flag bit in the third comparison result: if the marks corresponding to all of these flag bits are 1, determining that the detection of the worker at the vehicle inspection station is qualified; if at least one of the marks corresponding to these flag bits is 0, determining that the detection of the worker at the vehicle inspection station is unqualified;
the judging whether a personnel area image exists in the image of the vehicle inspection station comprises: judging whether a personnel area image exists in the image of the vehicle inspection station by adopting a personnel detection model based on a deep learning network; if the personnel area image exists, judging whether a person exists in the personnel area image by adopting a personnel classification model based on a deep learning network; if a person exists, acquiring the personnel area image and detecting whether a face area image exists in the personnel area image by adopting a face detection model based on a deep learning network;
the judging whether the clothing region image exists in the image of the vehicle inspection station comprises the following steps: judging whether a clothing region image exists in the image of the vehicle inspection station or not by adopting a clothing detection model based on a deep learning network;
the acquiring of the position of the human body key point in the personnel area image comprises the following steps: and acquiring the positions of the human key points in the personnel region image by adopting a human key point detection network model based on deep learning.
2. The method of claim 1, wherein before judging whether a personnel area image exists in the image of the vehicle inspection station by adopting a personnel detection model based on a deep learning network, the method further comprises:
acquiring images of sample vehicle inspection stations under different illumination and different shooting angles;
marking the positions of the personnel in the image of the sample vehicle inspection station by using a rectangular frame, and marking the positions as the personnel to obtain a marked sample image;
and training a target detection deep neural network model by using the marked sample image to obtain a personnel detection model based on a deep learning network.
3. The method of claim 2, wherein before judging whether a person exists in the personnel area image by adopting a personnel classification model based on a deep learning network, the method further comprises:
obtaining sample personnel area images at different positions in the image of the sample vehicle inspection station by utilizing the personnel detection model based on the deep learning network;
classifying the sample personnel area images into manned images and unmanned images;
and training a target classification deep neural network model by using the manned image and the unmanned image to obtain a personnel classification model based on a deep learning network.
4. The method of claim 1, wherein before detecting whether the face region image exists in the person region image by using a face detection model based on a deep learning network, the method further comprises:
acquiring different sample personnel area images;
marking the position of a face region image in the sample personnel region image by using a rectangular frame, and marking the face region image as a face;
and training a target detection deep neural network model by using the marked sample personnel area image to obtain a human face detection model based on a deep learning network.
5. The method of claim 1, wherein before determining whether the clothing region image exists in the image of the vehicle inspection station by using a clothing detection model based on a deep learning network, the method further comprises:
acquiring different sample personnel area images;
marking the position of the clothing region image in the sample personnel region image by using a rectangular frame, and marking the clothing region image as clothing;
and training a target detection deep neural network model by using the marked personnel area image to obtain a clothing detection model based on a deep learning network.
6. The method of claim 1, wherein before obtaining the human key point position in the person region image by using the deep learning-based human key point detection network model, further comprising:
acquiring different sample personnel area images;
marking the coordinates of key points including the top of the head, the neck, the left shoulder, the right shoulder, the left elbow, the right elbow, the left wrist, the right wrist, the chest cavity and the pelvis in each sample personnel area image;
and training a key point target detection deep neural network model by using the marked sample personnel area image to obtain a human body key point detection network model based on deep learning.
7. A staff detection device of a vehicle inspection station, characterized by comprising:
the system comprises an acquisition device, a detection device and a control device, wherein the acquisition device is used for acquiring an image of a vehicle inspection station, and the image of the vehicle inspection station is an image of a chassis station of the vehicle inspection;
the first comparison device is used for judging whether a personnel area image exists in the image of the vehicle inspection station; if not, marking a personnel area flag bit corresponding to the image of the vehicle inspection station and marking it as 0; if the personnel area image exists, judging whether a person exists in the personnel area image; if not, marking a personnel flag bit corresponding to the image of the vehicle inspection station and marking it as 0; if a person exists, acquiring the personnel area image and detecting whether a face area image exists; if not, marking a face flag bit corresponding to the image of the vehicle inspection station and marking it as 0; if a face exists, comparing the face image in the image of the vehicle inspection station with a face template image of a worker to obtain a first comparison result, wherein the first comparison result is a face-qualified flag bit marked as 1 when the face detection is qualified or marked as 0 when the face detection is unqualified;
the second comparison device is used for judging whether a clothing region image exists in the image of the vehicle inspection station; if not, marking a clothing flag bit corresponding to the image of the vehicle inspection station and marking it as 0; if the clothing region image exists, comparing the clothing region image in the image of the vehicle inspection station with the clothing template image of the worker to judge whether the clothing of the worker meets the regulations and obtain a second comparison result, wherein the second comparison result is a clothing-qualified flag bit marked as 1 when the clothing detection is qualified or marked as 0 when the clothing detection is unqualified;
the third comparison device is used for acquiring human body key point positions in the personnel area image; if the human body key point positions are not acquired, marking a key point flag bit corresponding to the image of the vehicle inspection station and marking it as 0; if the human body key point positions are acquired, comparing the mutual positional relationship between the human body key point positions in the image of the vehicle inspection station with the mutual positional relationship between the template human body key point positions to obtain a third comparison result, wherein the third comparison result is a key-point-position-qualified flag bit marked as 1 when the key point position detection is qualified or marked as 0 when the key point position detection is unqualified;
the judging device is used for judging whether the detection of the staff at the vehicle inspection station is qualified based on the marked personnel area flag bit, the personnel flag bit, the face flag bit, the clothing flag bit, the key point flag bit, the face-qualified flag bit in the first comparison result, the clothing-qualified flag bit in the second comparison result and the key-point-position-qualified flag bit in the third comparison result: if the marks corresponding to all of these flag bits are 1, determining that the detection of the worker at the vehicle inspection station is qualified; if at least one of the marks corresponding to these flag bits is 0, determining that the detection of the worker at the vehicle inspection station is unqualified;
the first comparison device is specifically configured such that judging whether a person region image exists in the image of the vehicle inspection station includes: judging whether a person region image exists in the image of the vehicle inspection station by using a person detection model based on a deep learning network; if a person region image exists, judging whether a person is present in the person region image by using a person classification model based on a deep learning network; if a person is present, acquiring the person region image and detecting whether a human face exists in the person region image by using a face detection model based on a deep learning network;
the second comparison device is specifically configured such that judging whether a clothing region image exists in the image of the vehicle inspection station includes: judging whether a clothing region image exists in the image of the vehicle inspection station by using a clothing detection model based on a deep learning network;
the third comparison device is specifically configured such that acquiring the positions of human body key points in the person region image includes: acquiring the positions of the human body key points in the person region image by using a human body key point detection network model based on deep learning.
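The apparatus of claim 7 above reduces the overall decision to a set of binary flags that must all equal 1. The following is a minimal illustrative sketch of that flag-based pipeline, not code from the patent itself; all callables passed in `models` (detect_person_region, classify_person, detect_face, match_face_template, detect_clothing_region, match_clothing_template, detect_keypoints, match_keypoint_layout) are hypothetical placeholders standing in for the deep-learning models the claim recites.

```python
# Illustrative sketch of the flag-based qualification logic of claim 7.
# All detector/comparator callables are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class InspectionFlags:
    person_region: int = 0
    person: int = 0
    face: int = 0
    clothing: int = 0
    keypoint: int = 0
    face_qualified: int = 0
    clothing_qualified: int = 0
    keypoint_qualified: int = 0

    def all_passed(self) -> bool:
        # Judging device: detection is qualified only when every flag is 1.
        return all(v == 1 for v in vars(self).values())


def check_staff(image, models) -> bool:
    """Run the three comparison devices and the judging device on one image."""
    flags = InspectionFlags()

    # First comparison device: person region -> person -> face -> face template match
    region = models["detect_person_region"](image)
    if region is not None:
        flags.person_region = 1
        if models["classify_person"](region):
            flags.person = 1
            face = models["detect_face"](region)
            if face is not None:
                flags.face = 1
                flags.face_qualified = int(models["match_face_template"](face))

    # Second comparison device: clothing region -> clothing template match
    clothing = models["detect_clothing_region"](image)
    if clothing is not None:
        flags.clothing = 1
        flags.clothing_qualified = int(models["match_clothing_template"](clothing))

    # Third comparison device: body key points -> positional relationship vs. template
    if flags.person_region == 1:
        keypoints = models["detect_keypoints"](region)
        if keypoints is not None:
            flags.keypoint = 1
            flags.keypoint_qualified = int(models["match_keypoint_layout"](keypoints))

    return flags.all_passed()
```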
8. The apparatus according to claim 7, further comprising a model obtaining device for obtaining images of a sample vehicle inspection station under different illumination and different shooting angles; marking the positions of persons in the images of the sample vehicle inspection station with rectangular boxes and labeling them as persons to obtain annotated sample images; and training a target detection deep neural network model with the annotated sample images to obtain the person detection model based on a deep learning network.
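Claim 8 describes annotating sample images with rectangular boxes labeled "person" and training a target detection deep neural network on them. As one possible illustration (the patent does not specify a particular architecture), the sketch below fine-tunes a COCO-pretrained Faster R-CNN from torchvision for a single "person" class; the data loader and dataset are assumed to exist and are not defined here.

```python
# Illustrative sketch only: one way to train a person detector on the
# box-annotated sample images of claim 8, using torchvision's Faster R-CNN.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor


def build_person_detector(num_classes: int = 2):  # background + "person"
    # Start from a COCO-pretrained model (torchvision >= 0.13 API assumed)
    # and replace the box predictor head for the single target class.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model


def train_one_epoch(model, loader, optimizer, device):
    model.train()
    for images, targets in loader:
        # Each target carries the rectangular boxes and the "person" label (=1)
        # annotated on the sample vehicle-inspection-station images.
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)  # returns a dict of losses in train mode
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```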
9. A computing-based device, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the steps of the method of any one of claims 1 to 6.
10. A computer-readable storage medium having computer-executable instructions stored thereon, which, when executed by a processor, cause the processor to implement the steps of the method of any one of claims 1 to 6.
CN201910576535.6A 2019-06-28 2019-06-28 Method and equipment for detecting staff at vehicle inspection station Expired - Fee Related CN110309768B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910576535.6A CN110309768B (en) 2019-06-28 2019-06-28 Method and equipment for detecting staff at vehicle inspection station

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910576535.6A CN110309768B (en) 2019-06-28 2019-06-28 Method and equipment for detecting staff at vehicle inspection station

Publications (2)

Publication Number Publication Date
CN110309768A CN110309768A (en) 2019-10-08
CN110309768B true CN110309768B (en) 2020-11-20

Family

ID=68078640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910576535.6A Expired - Fee Related CN110309768B (en) 2019-06-28 2019-06-28 Method and equipment for detecting staff at vehicle inspection station

Country Status (1)

Country Link
CN (1) CN110309768B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110717449A (en) * 2019-10-09 2020-01-21 上海眼控科技股份有限公司 Vehicle annual inspection personnel behavior detection method and device and computer equipment
CN110852212A (en) * 2019-10-29 2020-02-28 上海眼控科技股份有限公司 Method and device for checking operation object in vehicle detection
CN111652046A (en) * 2020-04-17 2020-09-11 济南浪潮高新科技投资发展有限公司 Safe wearing detection method, equipment and system based on deep learning
CN112907199B (en) * 2021-01-22 2024-02-02 陕西交通电子工程科技有限公司 Intelligent management system for assisting vehicle inspection
CN112560817B (en) * 2021-02-22 2021-07-06 西南交通大学 Human body action recognition method and device, electronic equipment and storage medium
CN113469132A (en) * 2021-07-26 2021-10-01 浙江大华技术股份有限公司 Violation detection method and device, electronic equipment and storage medium
CN114882597B (en) * 2022-07-11 2022-10-28 浙江大华技术股份有限公司 Target behavior identification method and device and electronic equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10089520B2 (en) * 2015-03-26 2018-10-02 Krishna V Motukuri System for displaying the contents of a refrigerator
CN107920223B (en) * 2016-10-08 2020-08-28 杭州海康威视数字技术股份有限公司 Object behavior detection method and device
CN107590807A (en) * 2017-09-29 2018-01-16 百度在线网络技术(北京)有限公司 Method and apparatus for detection image quality
CN108174165A (en) * 2018-01-17 2018-06-15 重庆览辉信息技术有限公司 Electric power safety operation and O&M intelligent monitoring system and method
CN108229855A (en) * 2018-02-06 2018-06-29 上海小蚁科技有限公司 Method for monitoring service quality and device, computer readable storage medium, terminal
CN109146322A (en) * 2018-09-12 2019-01-04 深圳市商汤科技有限公司 Monitoring method and device and system, electronic equipment and storage medium
CN109299683B (en) * 2018-09-13 2019-12-10 嘉应学院 Security protection evaluation system based on face recognition and behavior big data

Also Published As

Publication number Publication date
CN110309768A (en) 2019-10-08

Similar Documents

Publication Publication Date Title
CN110309768B (en) Method and equipment for detecting staff at vehicle inspection station
CN107730905A (en) Multitask fake license plate vehicle vision detection system and method based on depth convolutional neural networks
CN109086756A (en) A kind of text detection analysis method, device and equipment based on deep neural network
WO2006020249B1 (en) System for test response diagnosis and assessment
CN107818322A (en) A kind of vehicle VIN code tampering detection system and methods for vehicle annual test
CN105574550A (en) Vehicle identification method and device
TWI716012B (en) Sample labeling method, device, storage medium and computing equipment, damage category identification method and device
CN110378258B (en) Image-based vehicle seat information detection method and device
CN110473211B (en) Method and equipment for detecting number of spring pieces
CN104077568A (en) High-accuracy driver behavior recognition and monitoring method and system
CN111369801B (en) Vehicle identification method, device, equipment and storage medium
CN110288612B (en) Nameplate positioning and correcting method and device
CN111753877B (en) Product quality detection method based on deep neural network migration learning
CN113597614A (en) Image processing method and device, electronic device and storage medium
CN104463240A (en) Method and device for controlling list interface
CN110796078A (en) Vehicle light detection method and device, electronic equipment and readable storage medium
CN110765963A (en) Vehicle brake detection method, device, equipment and computer readable storage medium
CN114937293B (en) GIS-based agricultural service management method and system
CN106156713A (en) A kind of image processing method automatically monitored for examination hall behavior
CN107633201A (en) A kind of answering card intelligent identification Method and system
CN110276347B (en) Text information detection and identification method and equipment
CN110689028A (en) Site map evaluation method, site survey record evaluation method and site survey record evaluation device
CN112016542A (en) Urban waterlogging intelligent detection method and system
CN110363761A (en) A kind of start-stop Mark Detection system and method for vehicle chassis dynamic detection
CN113569645B (en) Track generation method, device and system based on image detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Staff detection method and equipment of vehicle inspection station

Effective date of registration: 20220211

Granted publication date: 20201120

Pledgee: Shanghai Bianwei Network Technology Co.,Ltd.

Pledgor: SHANGHAI EYE CONTROL TECHNOLOGY Co.,Ltd.

Registration number: Y2022310000023

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201120