CN111814636A - Safety belt detection method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111814636A
CN111814636A
Authority
CN
China
Prior art keywords
safety belt
image
belt detection
network
current
Prior art date
Legal status
Pending
Application number
CN202010611343.7A
Other languages
Chinese (zh)
Inventor
袁宇辰
沈辉
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010611343.7A
Publication of CN111814636A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification

Abstract

The application discloses a safety belt detection method and device, electronic equipment and a storage medium, relates to the fields of artificial intelligence, deep learning and image detection, and can be applied to the field of automatic driving. The specific scheme is as follows: an image to be recognized is input into a pre-trained semantic segmentation network, which performs safety belt region detection on the image to obtain the safety belt detection region of the image; the safety belt detection region is then input into a pre-trained safety belt detection network, which performs safety belt detection on the region to obtain the corresponding safety belt detection result. Embodiments of the application can accurately detect whether a driver has fastened the safety belt, thereby effectively assisting vehicle supervision departments in operation supervision and providing a guarantee for the safe driving of vehicles.

Description

Safety belt detection method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of computers, in particular to a safety belt detection method, a safety belt detection device, electronic equipment and a storage medium, and further relates to the fields of artificial intelligence, deep learning and image detection, which can be applied to the field of automatic driving.
Background
With the continuous development of the internet and artificial intelligence technology, an increasing number of fields involve automated computation and analysis, among which security monitoring is one of the most important scenarios.
For public operation vehicles such as taxis, buses and long-distance coaches, the driving behavior of the driver is particularly important because the safety of many passengers depends on it. Therefore, many public operation vehicles are equipped with vehicle-mounted monitoring cameras, which make it convenient for the corresponding companies or supervision departments to monitor drivers' driving behavior. Dangerous driving behaviors that drivers frequently exhibit, such as smoking, making phone calls and not wearing a safety belt, need to be discovered and warned about in a timely manner to ensure the driving safety of the vehicle to the greatest extent.
For judging whether a driver is wearing a safety belt, the traditional method is generally to spot-check the monitoring video and judge with the naked eye; in recent years, with the rise of Convolutional Neural Networks (CNN), some methods have introduced artificial intelligence to assist recognition, but these methods usually only perform a direct binary classification on the whole monitoring picture or the driver's body area. Among existing schemes, the manual naked-eye approach suffers from low speed, large error, and high time and labor costs; in the CNN-based direct classification methods, because the safety belt occupies only a small part of the image, few features can be extracted, and a large amount of interfering information exists around the safety belt, so the recognition accuracy in real vehicle-mounted scenarios is low and the recognition effect is not ideal.
Disclosure of Invention
The application provides a safety belt detection method, a safety belt detection device, electronic equipment and a storage medium, which can accurately detect whether a driver has fastened the safety belt, thereby effectively assisting vehicle supervision departments in operation supervision and providing a guarantee for the safe driving of vehicles.
In a first aspect, the present application provides a safety belt detection method, including:
inputting an image to be recognized into a pre-trained semantic segmentation network, and carrying out safety belt region detection on the image to be recognized through the semantic segmentation network to obtain a safety belt detection region of the image to be recognized;
and inputting the safety belt detection area into a safety belt detection network trained in advance, and carrying out safety belt detection on the safety belt detection area through the safety belt detection network to obtain a safety belt detection result corresponding to the safety belt detection area.
In a second aspect, the present application provides a safety belt detection apparatus, the apparatus comprising: a segmentation module and a detection module; wherein:
the segmentation module is used for inputting an image to be recognized into a pre-trained semantic segmentation network, and detecting a safety belt region of the image to be recognized through the semantic segmentation network to obtain a safety belt detection region of the image to be recognized;
the detection module is used for inputting the safety belt detection area to a safety belt detection network trained in advance, and carrying out safety belt detection on the safety belt detection area through the safety belt detection network to obtain a safety belt detection result corresponding to the safety belt detection area.
In a third aspect, an embodiment of the present application provides an electronic device, including:
one or more processors;
a memory for storing one or more programs which,
when executed by the one or more processors, cause the one or more processors to implement the safety belt detection method of any embodiment of the present application.
In a fourth aspect, embodiments of the present application provide a storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the safety belt detection method according to any embodiment of the present application.
According to the technical scheme provided by the application, whether a driver has fastened the safety belt can be accurately detected, thereby effectively assisting vehicle supervision departments in operation supervision and providing a guarantee for the safe driving of vehicles.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic flowchart of a safety belt detection method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a safety belt detection method according to a second embodiment of the present application;
fig. 3 is a schematic flowchart of a safety belt detection method according to a third embodiment of the present application;
fig. 4 is a first schematic structural diagram of a safety belt detection device according to a fourth embodiment of the present application;
fig. 5 is a second schematic structural diagram of a safety belt detection device according to a fourth embodiment of the present application;
FIG. 6 is a schematic structural diagram of a preprocessing module provided in the fourth embodiment of the present application;
fig. 7 is a block diagram of an electronic device for implementing a safety belt detection method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Example one
Fig. 1 is a flowchart of a safety belt detection method according to an embodiment of the present application. The method may be performed by a safety belt detection apparatus or an electronic device, where the apparatus or the electronic device may be implemented by software and/or hardware, and may be integrated in any intelligent device with a network communication function. As shown in fig. 1, the safety belt detection method may include the following steps:
s101, inputting an image to be recognized into a pre-trained semantic segmentation network, and carrying out safety belt region detection on the image to be recognized through the semantic segmentation network to obtain a safety belt detection region of the image to be recognized.
In a specific embodiment of the application, the electronic device may input the image to be recognized into a pre-trained semantic segmentation network, and perform safety belt region detection on the image through the semantic segmentation network to obtain the safety belt detection region of the image. In one embodiment, the electronic device may take the first convolution unit of the semantic segmentation network as the current convolution unit and the image to be recognized as the detection object of the current convolution unit; then perform image feature extraction on the detection object of the current convolution unit through that unit to obtain the image feature extraction result corresponding to the current convolution unit; then take this image feature extraction result as the detection object of the next convolution unit, take the next convolution unit as the current convolution unit, and repeat the above operations until the image feature extraction result corresponding to the last convolution unit of the semantic segmentation network is extracted from that unit's detection object; finally, the safety belt detection region of the image to be recognized is obtained based on the image feature extraction result corresponding to the last convolution unit. Note that if the presence of a safety belt were judged only from the raw segmentation result of the semantic segmentation network, the judgment would be inaccurate, because the segmentation result frequently contains various tiny spurious regions that introduce errors.
The application modifies the existing semantic segmentation network as follows: the final mask output layer of the network is removed, and a 3 × 3 convolution layer, a pooling layer and a fully connected layer are connected in sequence to the penultimate layer, outputting a 1 × 3 vector that represents the respective probabilities of 3 classification results (safety belt fastened, safety belt not fastened, and uncertain), the sum of the three being equal to 1. The last category, "uncertain", corresponds to situations where the driver's body is not visible or the picture is too blurred. This network modification is equivalent to performing classification judgment directly on the features extracted by the semantic segmentation network, which avoids manually designing a judgment strategy over the segmentation result and achieves better accuracy and robustness.
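A rough sketch of such a three-way classification head is shown below. All weights and shapes here are illustrative assumptions, and the 3 × 3 convolution is reduced to a pointwise projection for brevity; only the pooling, fully connected layer and softmax over the penultimate feature map are shown:

```python
import numpy as np

rng = np.random.default_rng(0)

# Penultimate feature map of the segmentation network, e.g. 64 x 64 x 320
features = rng.standard_normal((64, 64, 320))

# Hypothetical head weights (the 3x3 convolution is elided; a pointwise
# projection stands in for it here)
W_proj = rng.standard_normal((320, 64)) * 0.01
W_fc = rng.standard_normal((64, 3)) * 0.01
b_fc = np.zeros(3)

x = np.maximum(features @ W_proj, 0.0)  # pointwise projection + ReLU
x = x.mean(axis=(0, 1))                 # pooling layer: global average pool -> (64,)
logits = x @ W_fc + b_fc                # fully connected layer -> (3,)
exp = np.exp(logits - logits.max())
probs = exp / exp.sum()                 # P(fastened), P(not fastened), P(uncertain)
```

The softmax guarantees the three probabilities are non-negative and sum to 1, matching the 1 × 3 output described above.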
S102, inputting the safety belt detection area into a safety belt detection network trained in advance, and carrying out safety belt detection on the safety belt detection area through the safety belt detection network to obtain a safety belt detection result corresponding to the safety belt detection area.
In a specific embodiment of the application, the electronic device may input the safety belt detection area into a pre-trained safety belt detection network, and perform safety belt detection on the area through the network to obtain the corresponding safety belt detection result. In one embodiment, the electronic device may input the safety belt detection area into a convolution layer in the safety belt detection network, and perform a convolution operation on the area through the convolution layer to obtain the feature extraction result corresponding to the convolution layer; input that result into a pooling layer in the network, and perform a pooling operation on it through the pooling layer to obtain the feature extraction result corresponding to the pooling layer; and input that result into a fully connected layer in the network, and perform a classification operation on it through the fully connected layer to obtain the safety belt detection result corresponding to the safety belt detection area.
According to the safety belt detection method provided by this embodiment of the application, an image to be recognized is first input into a pre-trained semantic segmentation network, which performs safety belt region detection on the image to obtain its safety belt detection region; the safety belt detection region is then input into a pre-trained safety belt detection network, which performs safety belt detection on the region to obtain the corresponding safety belt detection result. That is to say, the application first performs region segmentation on the image to be recognized and then runs detection on the safety belt detection region, whereas existing safety belt detection methods detect the image to be recognized directly with a CNN. Because the safety belt detection region is first segmented out of the image and detection is then performed on that region, the application solves the technical problems of the prior art, in which the safety belt occupies only a small part of the image, few features can be extracted, and a large amount of interfering information surrounds the safety belt, leading to low recognition accuracy and unsatisfactory recognition results in real vehicle-mounted scenarios; moreover, the technical scheme of this embodiment is simple and convenient to implement, easy to popularize, and widely applicable.
Example two
Fig. 2 is a schematic flowchart of a safety belt detection method according to the second embodiment of the present application. As shown in fig. 2, the safety belt detection method may include the following steps:
s201, image preprocessing is carried out on the image to be recognized, and the image to be recognized after the image preprocessing is obtained.
In a specific embodiment of the application, the electronic device may perform image preprocessing on the image to be recognized to obtain the preprocessed image. In one embodiment, the electronic device may first scale the image to be recognized to obtain a scaled image; then normalize the scaled image; and take the normalized image as the preprocessed image to be recognized. In computer image processing and computer graphics, image scaling refers to the process of adjusting the size of a digital image. Image scaling requires a trade-off between processing efficiency and the smoothness and sharpness of the result: as the size of an image increases, the pixels making up the image become more visible, causing the image to appear "soft"; conversely, reducing an image enhances its apparent smoothness and sharpness. Further, image normalization refers to transforming an image into a fixed standard form through a series of standard processing transformations; the resulting image is called the normalized image. Normalization converts the original image into a unique corresponding standard form through a series of transformations (that is, a set of parameters is found using the invariant moments of the image so that the influence of other transformation functions on the image can be eliminated), and this standard form is invariant to affine transformations such as translation, rotation and scaling.
In this step, the image to be recognized is preprocessed: it is scaled to a fixed size (e.g., 512 × 512) and normalized (each pixel divided by 255), then a uniform red-green-blue (RGB) mean (e.g., [0.485, 0.456, 0.406]) is subtracted and the result is divided by a uniform per-channel RGB standard deviation (e.g., [0.229, 0.224, 0.225]). The purpose of preprocessing is to improve the robustness of the image data input to the network. Because the training data generally follow a certain distribution, preprocessing such as normalization, mean subtraction and standardization can remove the parts common to all the data, highlighting the distinctive characteristics of and differences between individual samples, so that the network can more easily learn discriminative image features.
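A minimal sketch of this preprocessing follows, assuming the resize to 512 × 512 has already been performed elsewhere; the mean and standard-deviation values are the example values quoted above:

```python
import numpy as np

RGB_MEAN = np.array([0.485, 0.456, 0.406])  # per-channel mean from the text
RGB_STD = np.array([0.229, 0.224, 0.225])   # per-channel standard deviation

def preprocess(image_uint8):
    """image_uint8: H x W x 3 array of 0-255 pixel values, already resized."""
    x = image_uint8.astype(np.float32) / 255.0  # normalize each pixel to [0, 1]
    x = (x - RGB_MEAN) / RGB_STD                # subtract mean, divide by std
    return x

img = np.full((512, 512, 3), 128, dtype=np.uint8)  # a uniform gray test image
out = preprocess(img)
print(out.shape)  # (512, 512, 3)
```

The broadcasting over the last axis applies the per-channel constants to every pixel at once.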
S202, inputting the image to be recognized after image preprocessing into a semantic segmentation network, and carrying out safety belt region detection on the image to be recognized after image preprocessing through the semantic segmentation network to obtain a safety belt detection region of the image to be recognized after image preprocessing.
In a specific embodiment of the application, the electronic device may input the preprocessed image to be recognized into the semantic segmentation network, and perform safety belt region detection on it through the network to obtain the safety belt detection region of the preprocessed image. In one embodiment, the electronic device may take the first convolution unit of the semantic segmentation network as the current convolution unit and the preprocessed image as its detection object; then perform image feature extraction on the detection object of the current convolution unit through that unit to obtain the corresponding image feature extraction result; take that result as the detection object of the next convolution unit, take the next convolution unit as the current convolution unit, and repeat the above operations until the image feature extraction result corresponding to the last convolution unit of the semantic segmentation network is extracted from that unit's detection object; finally, the safety belt detection region of the image to be recognized is obtained based on the image feature extraction result corresponding to the last convolution unit. As a concrete example, the feature map output by the last convolution unit may have a size of 64 × 64 × 320.
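The unit-by-unit pass described above amounts to a simple sequential loop. In the sketch below the "units" are toy stand-ins (plain functions on a number) rather than real convolution units, purely to illustrate the data flow:

```python
def run_convolution_units(units, image):
    """Treat the image as the detection object of the first unit and feed
    each unit's feature-extraction result to the next unit in turn."""
    x = image
    for unit in units:
        x = unit(x)  # current unit's output becomes the next unit's input
    return x

# Toy stand-ins for convolution units: each just transforms a number
units = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
result = run_convolution_units(units, 5)
print(result)  # ((5 + 1) * 2) - 3 = 9
```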
S203, inputting the safety belt detection area into a safety belt detection network trained in advance, and carrying out safety belt detection on the safety belt detection area through the safety belt detection network to obtain a safety belt detection result corresponding to the safety belt detection area.
According to the safety belt detection method provided by this embodiment of the application, an image to be recognized is first input into a pre-trained semantic segmentation network, which performs safety belt region detection on the image to obtain its safety belt detection region; the safety belt detection region is then input into a pre-trained safety belt detection network, which performs safety belt detection on the region to obtain the corresponding safety belt detection result. That is to say, the application first performs region segmentation on the image to be recognized and then runs detection on the safety belt detection region, whereas existing safety belt detection methods detect the image to be recognized directly with a CNN. Because the safety belt detection region is first segmented out of the image and detection is then performed on that region, the application solves the technical problems of the prior art, in which the safety belt occupies only a small part of the image, few features can be extracted, and a large amount of interfering information surrounds the safety belt, leading to low recognition accuracy and unsatisfactory recognition results in real vehicle-mounted scenarios; moreover, the technical scheme of this embodiment is simple and convenient to implement, easy to popularize, and widely applicable.
EXAMPLE III
Fig. 3 is a schematic flowchart of a safety belt detection method according to the third embodiment of the present application. As shown in fig. 3, the safety belt detection method may include the following steps:
s301, image preprocessing is carried out on the image to be recognized, and the image to be recognized after the image preprocessing is obtained.
S302, inputting the image to be recognized after image preprocessing into a semantic segmentation network, and carrying out safety belt region detection on the image to be recognized after image preprocessing through the semantic segmentation network to obtain a safety belt detection region of the image to be recognized after image preprocessing.
S303, inputting the safety belt detection area into a convolution layer in the safety belt detection network, and performing convolution operation on the safety belt detection area through the convolution layer to obtain a feature extraction result corresponding to the convolution layer.
In a specific embodiment of the application, the electronic device may input the safety belt detection area into a convolution layer in the safety belt detection network, and perform a convolution operation on the area through the convolution layer to obtain the feature extraction result corresponding to the convolution layer. A convolutional layer in the safety belt detection network consists of several convolution units, and the parameters of each unit are optimized through the backpropagation algorithm. The purpose of the convolution operation is to extract different input features: the first convolutional layer can only extract low-level features such as edges, lines and corners, while networks with more layers can iteratively extract more complex features from these low-level features.
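To illustrate how a convolution extracts low-level features such as edges, here is a hypothetical minimal example (the kernel and image are illustrative, not the network's actual learned weights):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: slide the kernel over the image
    and take the elementwise-product sum at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds where intensity changes from left to right
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])
img = np.zeros((5, 5))
img[:, :2] = 1.0              # bright left half, dark right half
resp = conv2d(img, edge_kernel)
```

Positions whose window straddles the bright/dark boundary produce a strong response, while uniform regions produce zero: exactly the "edge detector" behavior described above.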
S304, inputting the feature extraction result corresponding to the convolutional layer into a pooling layer in the safety belt detection network, and performing pooling operation on the feature extraction result corresponding to the convolutional layer through the pooling layer to obtain the feature extraction result corresponding to the pooling layer.
In a specific embodiment of the application, the electronic device may input the feature extraction result corresponding to the convolutional layer into a pooling layer in the safety belt detection network, and perform a pooling operation on it through the pooling layer to obtain the feature extraction result corresponding to the pooling layer. The input to each node of the pooling layer is a small block of the previous (convolutional) layer, the size of this block being determined by the window size of the pooling kernel; the pooling layer does not change the depth of the node matrix, but it can change the size of the matrix. For image processing, pooling can be understood as converting a high-resolution picture into a lower-resolution one. Common pooling operations include max pooling and average pooling, and passing data through convolutional and pooling layers further reduces the number of parameters in the network model.
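The windowed reduction described above can be sketched as follows for the single-channel, non-overlapping max-pooling case (window size and input are illustrative):

```python
import numpy as np

def max_pool2d(x, k=2):
    """Non-overlapping k x k max pooling on a 2D matrix: reduces an
    H x W input to (H/k) x (W/k) by taking each window's maximum."""
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
pooled = max_pool2d(x)
print(pooled)  # [[ 5.  7.]
               #  [13. 15.]]
```

Note how the 4 × 4 matrix shrinks to 2 × 2 while the depth (here, a single channel) is unchanged, as the text states.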
S305, inputting the feature extraction result corresponding to the pooling layer into a fully connected layer in the safety belt detection network, and performing a classification operation on the feature extraction result corresponding to the pooling layer through the fully connected layer to obtain the safety belt detection result corresponding to the safety belt detection area.
In a specific embodiment of the application, the electronic device may input the feature extraction result corresponding to the pooling layer into a fully connected layer in the safety belt detection network, and perform a classification operation on it through the fully connected layer to obtain the safety belt detection result corresponding to the safety belt detection area. Each node of the fully connected layer is connected to all nodes of the previous layer and is used to integrate the extracted features. Owing to this fully connected nature, the fully connected layer typically also contains the most parameters. In the CNN structure, one or more fully connected layers follow the convolutional and pooling layers; since each neuron in a fully connected layer is connected to all neurons of the previous layer, the fully connected layer can integrate the class-discriminative local information from the convolutional or pooling layers. To improve the performance of the CNN, the ReLU function is generally adopted as the activation function of each neuron in the fully connected layer. The output of the last fully connected layer is passed to an output layer, which may perform classification using softmax logistic regression; this layer may also be referred to as the softmax layer.
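The fully connected layer plus softmax classification described above reduces to a matrix product followed by normalized exponentials. A minimal sketch with illustrative weights (not the network's actual parameters):

```python
import numpy as np

def fully_connected(x, W, b):
    """Every output node is connected to all input nodes: a matrix product."""
    return x @ W + b

def softmax(z):
    """Softmax logistic regression over the last fully connected layer's output."""
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

features = np.array([0.5, -1.0, 2.0])  # pooled features (illustrative)
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0]])
b = np.zeros(2)
probs = softmax(fully_connected(features, W, b))
```

The output is a valid probability distribution over the classes, so the predicted class is simply the index of the largest entry.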
Preferably, in an embodiment of the present application, the electronic device may further train the semantic segmentation network before inputting the image to be recognized into it. Specifically, the electronic device may first take a pre-acquired first safety belt image sample as the current safety belt image sample; if the semantic segmentation network does not meet its preset convergence condition, input the current safety belt image sample into the network and train the network with that sample; then take the next safety belt image sample as the current safety belt image sample and repeat the above operations until the semantic segmentation network meets its convergence condition.
Preferably, in an embodiment of the present application, the electronic device may train the seat belt detection network before inputting the seat belt detection area into the seat belt detection network trained in advance. Specifically, the electronic device may first use a first pre-acquired seat belt detection area as a current seat belt detection area sample; if the safety belt detection network does not meet the preset convergence condition corresponding to the safety belt detection network, inputting the current safety belt detection area sample into the safety belt detection network, and training the safety belt detection network by using the current safety belt detection area sample; and then, taking the next safety belt detection area sample of the current safety belt detection area sample as the current safety belt detection area sample, and repeatedly executing the operation until the safety belt detection network meets the convergence condition corresponding to the safety belt detection network.
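Both training procedures above follow the same control flow: take the current sample, train on it, advance to the next sample, and stop once the network meets its convergence condition. A minimal sketch of that loop, with toy stand-ins for the network, the convergence test, and the update step (none of these placeholders are the application's actual training code):

```python
def train_until_convergence(network, samples, has_converged, train_step):
    # Cycle through the samples, training on the current one,
    # until the network meets its preset convergence condition.
    i = 0
    while not has_converged(network):
        current = samples[i % len(samples)]   # the next sample becomes the current sample
        train_step(network, current)
        i += 1
    return network

# Toy stand-ins that only demonstrate the control flow.
net = {"loss": 10.0}
trained = train_until_convergence(
    net,
    samples=[1, 2, 3],
    has_converged=lambda n: n["loss"] < 1.0,               # convergence condition
    train_step=lambda n, s: n.__setitem__("loss", n["loss"] * 0.5),
)
```

The same loop serves for both the semantic segmentation network (with seat belt image samples) and the seat belt detection network (with seat belt detection area samples).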
The safety belt detection network provided by the application, which is adapted from a semantic segmentation network, can further perform classification judgment based on the segmentation features of the driver's safety belt. Because the segmentation features are computed entirely over the small area corresponding to the safety belt, they are highly discriminative and robust, and can effectively support the subsequent classification network in accurately judging whether the driver has fastened the safety belt. The method and the system can effectively assist the corresponding company or supervision department in operation supervision, providing a guarantee for safe driving of the vehicle.
The safety belt detection method provided by the embodiment of the application first inputs an image to be recognized into a pre-trained semantic segmentation network, which performs safety belt region detection on the image to obtain a safety belt detection area; the safety belt detection area is then input into a pre-trained safety belt detection network, which performs safety belt detection on that area to obtain the corresponding safety belt detection result. That is to say, the application first performs region segmentation on the image to be recognized and then detects within the safety belt detection area, whereas existing safety belt detection methods detect directly on the image to be recognized using a CNN. Because the safety belt detection area is first segmented from the image to be recognized and detection is then performed on that area alone, the application solves the technical problem of the prior art that, in a real vehicle-mounted scene, the safety belt occupies a small part of the image, offers few extractable features, and is surrounded by a large amount of interference information, leading to low recognition accuracy and unsatisfactory recognition results. Moreover, the technical scheme of the embodiment of the application is simple and convenient to implement, easy to popularize, and widely applicable.
Example four
Fig. 4 is a first structural schematic diagram of a seat belt detection device according to a fourth embodiment of the present application. As shown in fig. 4, the apparatus 400 includes: a segmentation module 401 and a detection module 402; wherein:
the segmentation module 401 is configured to input an image to be recognized to a pre-trained semantic segmentation network, and perform seat belt region detection on the image to be recognized through the semantic segmentation network to obtain a seat belt detection region of the image to be recognized;
the detection module 402 is configured to input the seat belt detection area to a seat belt detection network trained in advance, and perform seat belt detection on the seat belt detection area through the seat belt detection network to obtain a seat belt detection result corresponding to the seat belt detection area.
Fig. 5 is a second structural schematic diagram of a seat belt detection device according to a fourth embodiment of the present application. As shown in fig. 5, the apparatus 400 further includes: the preprocessing module 403 is configured to perform image preprocessing on the image to be recognized to obtain an image to be recognized after the image preprocessing; and inputting the image to be recognized after the image preprocessing into the semantic segmentation network.
Fig. 6 is a schematic structural diagram of a preprocessing module according to a fourth embodiment of the present application. As shown in fig. 6, the preprocessing module 403 includes: a scaling processing sub-module 4031 and a normalization processing sub-module 4032; wherein:
the scaling sub-module 4031 is configured to scale the image to be identified to obtain a scaled image to be identified;
the normalization processing sub-module 4032 is configured to perform normalization processing on the image to be identified after the scaling processing to obtain an image to be identified after the normalization processing; and taking the normalized image to be identified as the image to be identified after the image preprocessing.
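The two preprocessing sub-modules above amount to a scale-then-normalize transform. The following is a minimal sketch under stated assumptions — the 224×224 target size, the nearest-neighbor scaling, and the division by 255 are illustrative choices, not specified by the application:

```python
import numpy as np

def preprocess(image, out_h=224, out_w=224):
    # Scaling: map the image onto the network input size by
    # nearest-neighbor row/column indexing (illustrative only;
    # production code would use a proper resize routine).
    h, w = image.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    scaled = image[rows][:, cols]
    # Normalization: bring 8-bit pixel values into [0, 1].
    return scaled.astype(np.float32) / 255.0

img = np.arange(100, dtype=np.uint8).reshape(10, 10)  # toy 10x10 "image"
prepped = preprocess(img, out_h=4, out_w=4)
```

The normalized output is what would be fed to the semantic segmentation network in place of the raw image.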
Further, the segmentation module 401 is specifically configured to use a first convolution unit of the semantic segmentation network as a current convolution unit; take the image to be recognized as a detection object of the current convolution unit; perform image feature extraction on the detection object of the current convolution unit through the current convolution unit to obtain an image feature extraction result corresponding to the current convolution unit; take the image feature extraction result corresponding to the current convolution unit as the detection object of the convolution unit next to the current convolution unit; take the next convolution unit as the current convolution unit, and repeatedly execute the above operations until the image feature extraction result corresponding to the last convolution unit of the semantic segmentation network is extracted from its detection object; and obtain a safety belt detection area of the image to be recognized based on the image feature extraction result corresponding to the last convolution unit.
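The chaining described above — each unit's feature extraction result becoming the detection object of the next unit — reduces to a simple fold over the convolution units. A schematic sketch with toy stand-in units (the real units would be convolution blocks, which this sketch does not implement):

```python
def segment_belt_region(conv_units, image):
    # Each unit's output becomes the next unit's detection object.
    features = image
    for unit in conv_units:
        features = unit(features)
    return features  # final feature map, from which the belt region is derived

# Toy units standing in for real convolution blocks.
units = [lambda x: x + 1, lambda x: x * 2]
out = segment_belt_region(units, 3)  # (3 + 1) * 2
```

The returned value of the last unit plays the role of the "image feature extraction result corresponding to the last convolution unit" from which the safety belt detection area is obtained.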
Further, the detection module 402 is specifically configured to input the seat belt detection area to a convolution layer in the seat belt detection network, and perform a convolution operation on the seat belt detection area through the convolution layer to obtain a feature extraction result corresponding to the convolution layer; input the feature extraction result corresponding to the convolutional layer to a pooling layer in the seat belt detection network, and perform a pooling operation on it through the pooling layer to obtain a feature extraction result corresponding to the pooling layer; and input the feature extraction result corresponding to the pooling layer to a fully connected layer in the seat belt detection network, and perform a classification operation on the feature extraction result corresponding to the pooling layer through the fully connected layer to obtain the seat belt detection result corresponding to the seat belt detection area.
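For illustration, the convolution and pooling operations the detection module chains together can be sketched in numpy as below. This is a single-channel, stride-1, no-padding version written for clarity; the kernel sizes and pooling window are assumptions, and real networks use optimized library kernels:

```python
import numpy as np

def conv2d(x, kernel):
    # Valid cross-correlation of a single-channel image with one kernel.
    kh, kw = kernel.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (x[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool2d(x, k=2):
    # Non-overlapping k-by-k max pooling over a single-channel feature map.
    h, w = x.shape
    h2, w2 = h - h % k, w - w % k        # drop rows/cols that don't fill a window
    x = x[:h2, :w2]
    return x.reshape(h2 // k, k, w2 // k, k).max(axis=(1, 3))
```

Applying `conv2d` and then `max_pool2d` to the seat belt detection area mirrors the convolution-then-pooling feature extraction described above, whose output would then be flattened and classified by the fully connected layer.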
Further, the apparatus further comprises: a segmentation training module 404 (not shown in the figure) configured to use a first pre-acquired seat belt image sample as a current seat belt image sample; if the semantic segmentation network does not meet the preset convergence condition corresponding to the semantic segmentation network, inputting the current safety belt image sample to the semantic segmentation network, and training the semantic segmentation network by using the current safety belt image sample; and taking the next safety belt image sample of the current safety belt image sample as the current safety belt image sample, and repeatedly executing the operations until the semantic segmentation network meets the convergence condition corresponding to the semantic segmentation network.
Further, the apparatus further comprises: a detection training module 405 (not shown in the figure) configured to use a first pre-acquired seat belt detection area as a current seat belt detection area sample; if the safety belt detection network does not meet the preset convergence condition corresponding to the safety belt detection network, inputting the current safety belt detection area sample into the safety belt detection network, and training the safety belt detection network by using the current safety belt detection area sample; and taking a safety belt detection area sample next to the current safety belt detection area sample as the current safety belt detection area sample, and repeatedly executing the operations until the safety belt detection network meets the convergence condition corresponding to the safety belt detection network.
The safety belt detection device can execute the method provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method. For details of the technology that are not described in detail in this embodiment, reference may be made to the seat belt detection method provided in any embodiment of the present application.
EXAMPLE five
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 7 is a block diagram of an electronic device according to the seat belt detection method of the embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 7, the electronic apparatus includes: one or more processors 701, a memory 702, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, as desired, along with multiple memories. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 7, one processor 701 is taken as an example.
The memory 702 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the seat belt detection method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the seat belt detection method provided herein.
The memory 702, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules (e.g., the segmentation module 401 and the detection module 402 shown in fig. 4) corresponding to the seat belt detection method in the embodiments of the present application. The processor 701 executes various functional applications of the server and data processing by running non-transitory software programs, instructions and modules stored in the memory 702, that is, implements the seat belt detection method in the above method embodiment.
The memory 702 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device of the seat belt detection method, and the like. Further, the memory 702 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 702 may optionally include memory located remotely from the processor 701, and these remote memories may be connected to the seat belt detection method electronics over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the seat belt detection method may further include: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703 and the output device 704 may be connected by a bus or other means, and fig. 7 illustrates an example of a connection by a bus.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus of the seat belt detection method, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or other input devices. The output devices 704 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, the image to be recognized is first input into a pre-trained semantic segmentation network, which performs safety belt region detection on the image to obtain a safety belt detection area; the safety belt detection area is then input into a pre-trained safety belt detection network, which performs safety belt detection on that area to obtain the corresponding safety belt detection result. That is to say, the application first performs region segmentation on the image to be recognized and then detects within the safety belt detection area, whereas existing safety belt detection methods detect directly on the image to be recognized using a CNN. Because the safety belt detection area is first segmented from the image to be recognized and detection is then performed on that area alone, the application solves the technical problem of the prior art that, in a real vehicle-mounted scene, the safety belt occupies a small part of the image, offers few extractable features, and is surrounded by a large amount of interference information, leading to low recognition accuracy and unsatisfactory recognition results. Moreover, the technical scheme of the embodiment of the application is simple and convenient to implement, easy to popularize, and widely applicable.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; as long as the desired results of the technical solutions disclosed in the present application can be achieved, no limitation is imposed herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (16)

1. A seat belt detection method, characterized in that the method comprises:
inputting an image to be recognized into a pre-trained semantic segmentation network, and carrying out safety belt region detection on the image to be recognized through the semantic segmentation network to obtain a safety belt detection region of the image to be recognized;
and inputting the safety belt detection area into a safety belt detection network trained in advance, and carrying out safety belt detection on the safety belt detection area through the safety belt detection network to obtain a safety belt detection result corresponding to the safety belt detection area.
2. The method of claim 1, wherein prior to inputting the image to be recognized into the pre-trained semantic segmentation network, the method further comprises:
carrying out image preprocessing on the image to be recognized to obtain the image to be recognized after the image preprocessing; and inputting the image to be recognized after the image preprocessing into the semantic segmentation network.
3. The method according to claim 2, wherein the image preprocessing the image to be recognized to obtain the image to be recognized after the image preprocessing comprises:
zooming the image to be recognized to obtain a zoomed image to be recognized;
normalizing the image to be identified after the scaling processing to obtain the image to be identified after the normalization processing; and taking the normalized image to be identified as the image to be identified after the image preprocessing.
4. The method according to claim 1, wherein the performing seat belt region detection on the image to be recognized through the semantic segmentation network to obtain a seat belt detection region of the image to be recognized comprises:
taking a first convolution unit of the semantic segmentation network as a current convolution unit; taking the image to be identified as a detection object of the current convolution unit;
performing image feature extraction on a detection object of the current convolution unit through the current convolution unit to obtain an image feature extraction result corresponding to the current convolution unit; taking the image feature extraction result corresponding to the current convolution unit as a detection object of a convolution unit next to the current convolution unit; taking the next convolution unit as the current convolution unit, and repeatedly executing the operation until the image feature extraction result corresponding to the last convolution unit of the semantic segmentation network is extracted from its detection object;
and obtaining a safety belt detection area of the image to be recognized based on the image feature extraction result corresponding to the last convolution unit.
5. The method according to claim 1, wherein the inputting the seat belt detection area into a seat belt detection network trained in advance, and performing seat belt detection on the seat belt detection area through the seat belt detection network to obtain a seat belt detection result corresponding to the seat belt detection area comprises:
inputting the safety belt detection area into a convolution layer in the safety belt detection network, and performing convolution operation on the safety belt detection area through the convolution layer to obtain a feature extraction result corresponding to the convolution layer;
inputting the feature extraction result corresponding to the convolutional layer into a pooling layer in the safety belt detection network, and performing pooling operation on the feature extraction result corresponding to the convolutional layer through the pooling layer to obtain a feature extraction result corresponding to the pooling layer;
and inputting the feature extraction result corresponding to the pooling layer to a full connection layer in the safety belt detection network, and performing a classification operation on the feature extraction result corresponding to the pooling layer through the full connection layer to obtain the safety belt detection result corresponding to the safety belt detection area.
6. The method of claim 1, wherein prior to inputting the image to be recognized into the pre-trained semantic segmentation network, the method further comprises:
taking a first safety belt image sample obtained in advance as a current safety belt image sample;
if the semantic segmentation network does not meet the preset convergence condition corresponding to the semantic segmentation network, inputting the current safety belt image sample to the semantic segmentation network, and training the semantic segmentation network by using the current safety belt image sample; and taking the next safety belt image sample of the current safety belt image sample as the current safety belt image sample, and repeatedly executing the operations until the semantic segmentation network meets the convergence condition corresponding to the semantic segmentation network.
7. The method of claim 1, wherein prior to said inputting the seat belt detection zone to a pre-trained seat belt detection network, the method further comprises:
taking a first safety belt detection area obtained in advance as a current safety belt detection area sample;
if the safety belt detection network does not meet the preset convergence condition corresponding to the safety belt detection network, inputting the current safety belt detection area sample into the safety belt detection network, and training the safety belt detection network by using the current safety belt detection area sample; and taking a safety belt detection area sample next to the current safety belt detection area sample as the current safety belt detection area sample, and repeatedly executing the operations until the safety belt detection network meets the convergence condition corresponding to the safety belt detection network.
8. A seat belt detection apparatus, characterized in that the apparatus comprises: a segmentation module and a detection module; wherein the content of the first and second substances,
the segmentation module is used for inputting an image to be recognized into a pre-trained semantic segmentation network, and detecting a safety belt region of the image to be recognized through the semantic segmentation network to obtain a safety belt detection region of the image to be recognized;
the detection module is used for inputting the safety belt detection area to a safety belt detection network trained in advance, and carrying out safety belt detection on the safety belt detection area through the safety belt detection network to obtain a safety belt detection result corresponding to the safety belt detection area.
9. The apparatus of claim 8, further comprising: the preprocessing module is used for preprocessing the image to be recognized to obtain the image to be recognized after image preprocessing; and inputting the image to be recognized after the image preprocessing into the semantic segmentation network.
10. The apparatus of claim 9, wherein the pre-processing module comprises: a scaling processing submodule and a normalization processing submodule; wherein:
the scaling processing submodule is used for scaling the image to be identified to obtain the scaled image to be identified;
the normalization processing submodule is used for performing normalization processing on the image to be identified after the scaling processing to obtain the image to be identified after the normalization processing; and taking the normalized image to be identified as the image to be identified after the image preprocessing.
11. The apparatus of claim 8, wherein:
the segmentation module is specifically configured to use a first convolution unit of the semantic segmentation network as a current convolution unit; take the image to be recognized as a detection object of the current convolution unit; perform image feature extraction on the detection object of the current convolution unit through the current convolution unit to obtain an image feature extraction result corresponding to the current convolution unit; take the image feature extraction result corresponding to the current convolution unit as a detection object of the convolution unit next to the current convolution unit; take the next convolution unit as the current convolution unit, and repeatedly execute the operation until the image feature extraction result corresponding to the last convolution unit of the semantic segmentation network is extracted from its detection object; and obtain a safety belt detection area of the image to be recognized based on the image feature extraction result corresponding to the last convolution unit.
12. The apparatus of claim 8, wherein:
the detection module is specifically configured to input the safety belt detection area to a convolution layer in the safety belt detection network, and perform a convolution operation on the safety belt detection area through the convolution layer to obtain a feature extraction result corresponding to the convolution layer; input the feature extraction result corresponding to the convolutional layer to a pooling layer in the safety belt detection network, and perform a pooling operation on it through the pooling layer to obtain a feature extraction result corresponding to the pooling layer; and input the feature extraction result corresponding to the pooling layer to a full connection layer in the safety belt detection network, and perform a classification operation on the feature extraction result corresponding to the pooling layer through the full connection layer to obtain the safety belt detection result corresponding to the safety belt detection area.
13. The apparatus of claim 8, further comprising: the segmentation training module is used for taking a first safety belt image sample obtained in advance as a current safety belt image sample; if the semantic segmentation network does not meet the preset convergence condition corresponding to the semantic segmentation network, inputting the current safety belt image sample to the semantic segmentation network, and training the semantic segmentation network by using the current safety belt image sample; and taking the next safety belt image sample of the current safety belt image sample as the current safety belt image sample, and repeatedly executing the operations until the semantic segmentation network meets the convergence condition corresponding to the semantic segmentation network.
14. The apparatus of claim 8, further comprising: the detection training module is used for taking a first safety belt detection area obtained in advance as a current safety belt detection area sample; if the safety belt detection network does not meet the preset convergence condition corresponding to the safety belt detection network, inputting the current safety belt detection area sample into the safety belt detection network, and training the safety belt detection network by using the current safety belt detection area sample; and taking a safety belt detection area sample next to the current safety belt detection area sample as the current safety belt detection area sample, and repeatedly executing the operations until the safety belt detection network meets the convergence condition corresponding to the safety belt detection network.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
CN202010611343.7A 2020-06-29 2020-06-29 Safety belt detection method and device, electronic equipment and storage medium Pending CN111814636A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010611343.7A CN111814636A (en) 2020-06-29 2020-06-29 Safety belt detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111814636A true CN111814636A (en) 2020-10-23

Family

ID=72856353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010611343.7A Pending CN111814636A (en) 2020-06-29 2020-06-29 Safety belt detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111814636A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113553938A (en) * 2021-07-19 2021-10-26 黑芝麻智能科技(上海)有限公司 Safety belt detection method and device, computer equipment and storage medium
CN113743199A (en) * 2021-07-26 2021-12-03 中广核工程有限公司 Tool wearing detection method and device, computer equipment and storage medium
CN113822197A (en) * 2021-09-23 2021-12-21 南方电网电力科技股份有限公司 Work dressing identification method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485224A (en) * 2016-10-13 2017-03-08 北京智芯原动科技有限公司 Seatbelt wearing recognition method and device
US20180239987A1 (en) * 2017-02-22 2018-08-23 Alibaba Group Holding Limited Image recognition method and apparatus
CN109086716A (en) * 2018-08-01 2018-12-25 北京嘀嘀无限科技发展有限公司 A kind of method and device of seatbelt wearing detection
CN109145843A (en) * 2018-08-29 2019-01-04 上海萃舟智能科技有限公司 Full vehicle information identification system and method for a checkpoint high-definition camera
WO2020024395A1 (en) * 2018-08-02 2020-02-06 平安科技(深圳)有限公司 Fatigue driving detection method and apparatus, computer device, and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU, TIANSHU et al.: "Driver seatbelt detection combining YOLO detection and semantic segmentation", Journal of Computer-Aided Design & Computer Graphics, 31 January 2019 (2019-01-31), pages 126 - 131 *

Similar Documents

Publication Publication Date Title
CN111783870B (en) Human body attribute identification method, device, equipment and storage medium
CN112528976B (en) Text detection model generation method and text detection method
CN111507914B (en) Training method, repairing method, device, equipment and medium for face repairing model
CN111814636A (en) Safety belt detection method and device, electronic equipment and storage medium
WO2022001091A1 (en) Dangerous driving behavior recognition method and apparatus, and electronic device and storage medium
CN114118124B (en) Image detection method and device
CN113920307A (en) Model training method, device, equipment, storage medium and image detection method
CN111783878B (en) Target detection method, target detection device, electronic equipment and readable storage medium
CN112966742A (en) Model training method, target detection method and device and electronic equipment
CN114120253B (en) Image processing method, device, electronic equipment and storage medium
CN111709873A (en) Training method and device of image conversion model generator
EP3961498A1 (en) Dangerous driving behavior recognition method and apparatus, and electronic device and storage medium
CN114863437B (en) Text recognition method and device, electronic equipment and storage medium
CN113591573A (en) Training and target detection method and device for multi-task learning deep network model
EP4080470A2 (en) Method and apparatus for detecting living face
CN113901909A (en) Video-based target detection method and device, electronic equipment and storage medium
CN112215243A (en) Image feature extraction method, device, equipment and storage medium
CN111950345A (en) Camera identification method and device, electronic equipment and storage medium
CN111932530A (en) Three-dimensional object detection method, device and equipment and readable storage medium
CN112016523A (en) Cross-modal face recognition method, device, equipment and storage medium
CN113344121B (en) Method for training a sign classification model and sign classification
CN114429631A (en) Three-dimensional object detection method, device, equipment and storage medium
CN113989300A (en) Lane line segmentation method and device, electronic equipment and storage medium
CN113657398A (en) Image recognition method and device
CN113869317A (en) License plate recognition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination