WO2020155518A1 - Object detection method and device, computer equipment and storage medium - Google Patents

Object detection method and device, computer equipment and storage medium

Info

Publication number
WO2020155518A1
Authority
WO
WIPO (PCT)
Prior art keywords
object detection
loss
module
model
training
Prior art date
Application number
PCT/CN2019/091100
Other languages
English (en)
Chinese (zh)
Inventor
巢中迪
庄伯金
王少军
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2020155518A1 publication Critical patent/WO2020155518A1/fr

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Definitions

  • This application relates to the field of artificial intelligence, and in particular to an object detection method, device, computer equipment and storage medium.
  • Object detection is one of the classic problems in computer vision. Its task is to mark the position of an object in an image with a bounding box and to give the object's category. From the traditional framework of hand-designed features plus shallow classifiers to end-to-end detection frameworks based on deep learning, object detection has improved step by step. However, commonly used object detection methods such as YOLO (You Only Look Once) and SSD (Single Shot Multi-Box Detection) still generally suffer from low object detection accuracy.
  • YOLO: You Only Look Once
  • SSD: Single Shot Multi-Box Detection
  • the embodiments of the present application provide an object detection method, device, computer equipment, and storage medium to solve the problem that the object detection accuracy rate is still low.
  • an object detection method including:
  • the object detection model includes a detection module, a classification module, and a discrimination module
  • the object detection model is updated according to the detection loss, the classification loss, and the discrimination loss to obtain a target object detection model.
  • an object detection model training device including:
  • the image acquisition module to be detected is used to acquire the image to be detected
  • the object detection result acquisition module is used to input the to-be-detected image into a target object detection model for object detection to obtain the object detection result of the to-be-detected image, wherein the target object detection model adopts a training sample acquisition module,
  • the model training module, the loss acquisition module and the target object detection model acquisition module obtain:
  • the training sample acquisition module is used to acquire training samples
  • a model training module is used to input the training samples into an object detection model for model training, where the object detection model includes a detection module, a classification module, and a discrimination module;
  • a loss acquisition module configured to acquire the detection loss generated by the detection module, the classification loss generated by the classification module, and the discrimination loss generated by the discrimination module during the model training process;
  • the target object detection model acquisition module is used to update the object detection model according to the detection loss, the classification loss and the discrimination loss to obtain a target object detection model.
  • In a third aspect, a computer device includes a memory, a processor, and computer-readable instructions that are stored in the memory and executable on the processor. When the processor executes the computer-readable instructions, the steps of the foregoing object detection method are implemented.
  • an embodiment of the present application provides a computer non-volatile readable storage medium, including: computer readable instructions, which implement the steps of the above object detection method when the computer readable instructions are executed by a processor.
  • In the embodiments of this application, the image to be detected is first obtained; the image to be detected is then input into the target object detection model for object detection, and the object detection result of the image to be detected is obtained.
  • The target object detection model combines the detection loss, the classification loss and the discrimination loss to update the object detection model, which yields better detection and classification performance and detection results with higher accuracy.
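The scheme above combines three losses into one training objective. As an illustration only (the patent does not give code, and the equal default weighting and the names `w_det`/`w_cls`/`w_dis` are assumptions), a minimal sketch:

```python
# Sketch of combining the detection, classification and discrimination
# losses into a single scalar objective, as described in the method.
# The weighting scheme is an assumption; the patent only states that
# the three losses jointly update the object detection model.

def total_loss(detection_loss, classification_loss, discrimination_loss,
               w_det=1.0, w_cls=1.0, w_dis=1.0):
    """Weighted sum of the three losses used to update the model."""
    return (w_det * detection_loss
            + w_cls * classification_loss
            + w_dis * discrimination_loss)
```

For example, `total_loss(1.0, 2.0, 3.0)` gives `6.0` with the assumed unit weights.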
  • FIG. 1 is a flowchart of the object detection method in an embodiment of the present application.
  • Fig. 2 is a schematic diagram of the object detection device in an embodiment of the present application.
  • Fig. 3 is a schematic diagram of a computer device in an embodiment of the present application.
  • The terms first, second, third, etc. may be used in the embodiments of the present application to describe the preset ranges, but the preset ranges should not be limited by these terms. These terms are only used to distinguish the preset ranges from each other.
  • the first preset range may also be referred to as the second preset range, and similarly, the second preset range may also be referred to as the first preset range.
  • Depending on the context, the word "if" as used herein can be interpreted as "when", "while", "in response to determining" or "in response to detecting".
  • Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" can be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
  • Fig. 1 shows a flow chart of the object detection method in this embodiment.
  • the object detection method can be applied to an object detection system, and the object detection system can be used to realize the detection and classification of objects, and the object detection system can be specifically applied to computer equipment.
  • the computer device is a device that can perform human-computer interaction with the user, including but not limited to devices such as computers, smart phones, and tablets.
  • the object detection method includes the following steps:
  • Step S1: Acquire the image to be detected.
  • Step S2: Input the image to be detected into the target object detection model for object detection, and obtain the object detection result of the image to be detected.
  • the model training steps adopted by the target object detection model specifically include:
  • training samples required for model training are obtained.
  • images related to a certain type of scene can be selected as training samples according to the needs of object detection.
  • the images saved in the driving recorder can be used as training samples.
  • the pictures saved in the driving recorder can reflect the road conditions ahead during the driving of the vehicle.
  • The image can be used as a training sample to train the target object detection model, so that the trained target object detection model can detect objects and the vehicle can make a preset response according to the received detection result. Understandably, the objects appearing in the images saved in the driving recorder need to be labeled before model training (only the objects that need to be detected are labeled; objects not required for detection need not be labeled).
  • A deep neural network such as a convolutional neural network extracts deep features from images belonging to the same category as the annotated object, so that the object detection model (which includes the corresponding deep neural network for extracting image features) can identify the object's category during detection.
  • S20 Input training samples into the object detection model for model training, where the object detection model includes a detection module, a classification module, and a discrimination module.
  • model training refers to the training of the target object detection model.
  • the detection module is used to detect objects in the image, and the classification module is used to identify and classify the detected objects.
  • the judgment module includes a first judgment module and/or a second judgment module.
  • The first judgment module is used to judge whether the output result of the detection module is correct, and the second judgment module is used to judge whether the output result of the classification module is correct.
  • The first judgment module and the second judgment module can exist at the same time, or only the second judgment module may exist and serve as the judgment module.
  • the training samples are input into the object detection model for model training, where the object detection model includes not only a detection model and a classification model, but also a discriminant model. Understandably, model training with training samples is the process of inputting training samples into the object detection model for detection.
  • Before step S20, the method further includes:
  • S211 Obtain a detection model of the object to be processed, which includes a detection module and a classification module.
  • The detection model of the object to be processed is obtained. It is understandable that the detection model of the object to be processed may specifically be a detection model such as the YOLO (You Only Look Once) model or the SSD (Single Shot Multi-Box Detection) model. These models include detection modules and classification modules. This embodiment is an improvement based on these to-be-processed object detection models.
  • S212 Add a discrimination module to the detection model of the object to be processed, where the discrimination module is used to discriminate the results output by the detection module and/or the classification module.
  • a discrimination module is added on the original basis of the object detection model to be processed, so as to determine the output result of the object detection model to be processed. Adding a discrimination module can help to know the accuracy of the detection model of the object to be processed, so as to update the model according to the detection error of the object detection model to improve the accuracy of detection.
  • S213 Perform model initialization operation on the object detection model to be processed after adding the discrimination module to obtain the object detection model.
  • the initialization operation of the model refers to the initialization of the network parameters in the model, and the initial values of the network parameters may be preset based on experience.
  • Without initialization, the network parameters in the detection module and the classification module of the object detection model to be processed have in fact already been updated through multiple rounds of training.
  • If the discrimination module only then begins to judge the results output by the detection module and/or the classification module and drive updates, the updates are less thorough, because the detection module and the classification module have already been learning for a long time while the discrimination module is used only briefly. Conversely, after the initialization operation, the discrimination module judges every result output by the detection module and/or the classification module during the training phase, and the network parameters can be updated in step with the training process according to the output results, achieving better detection accuracy.
  • Steps S211-S213 provide an implementation for obtaining the object detection model. Specifically, a discrimination module is added to the object detection model to be processed and the model initialization operation is performed, which is beneficial to improving the detection accuracy of the target object detection model obtained by subsequent training and updating.
  • step S20 inputting the training samples into the object detection model for model training includes:
  • S221 Input the training sample, and extract the feature vector of the training sample through the object detection model.
  • the object detection model includes a deep neural network for extracting feature vectors of training samples, which may specifically be a convolutional neural network.
  • the object detection model will use the deep neural network to extract the feature vectors of the training samples to provide a technical basis for model training.
  • the feature value in the feature vector specifically refers to the pixel value.
  • S222 Normalize the feature vector, that is, normalize the feature values in the feature vector to the interval [0, 1].
  • Because images may use pixel-value levels such as 2^8, 2^12 or 2^16, the pixel values of different images can span very different ranges, which makes computation inefficient. Normalization is therefore used to compress the feature values in the feature vectors into the same interval, which improves computational efficiency and shortens the model training time.
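The normalization step described above can be sketched as follows (a minimal illustration; it assumes, as the text states, that the feature values are pixel values, with a known maximum level such as 255 for 8-bit images):

```python
def normalize(features, max_value):
    """Scale raw pixel/feature values into the interval [0, 1].

    Assumes non-negative values and a known maximum level
    (e.g. 255 for 2^8-level images, 65535 for 2^16-level images).
    """
    return [v / max_value for v in features]
```

For an 8-bit image, `normalize([0, 255], 255)` yields `[0.0, 1.0]`, so features from images with different bit depths end up in the same range.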
  • S223 Perform model training on the object detection model according to the normalized feature vector.
  • Steps S221-S223 provide an implementation for inputting training samples into the object detection model for model training: the extracted training-sample features are normalized so that the feature values in the feature vector are compressed into the same interval, which can significantly shorten training time and improve training efficiency.
  • The detection module, the classification module and the discrimination module each implement a function.
  • During training, the detection loss generated by the detection module, the classification loss generated by the classification module and the discrimination loss generated by the discrimination module can be used as references to help adjust the object detection model, so that the detection module, classification module and discrimination module make as few errors as possible when they are used again, improving the detection accuracy of the target object detection model.
  • step S30 the detection loss generated by the detection module, the classification loss generated by the classification module, and the discrimination loss generated by the discrimination module are obtained during the model training process, which specifically include:
  • S31 In the model training process, obtain the first training feature vector output by the detection module. The first training feature vector is the result output by the detection module.
  • the first label vector is a feature vector used to verify whether the first training feature vector is correct, and represents the real result.
  • a preset detection loss function is used to calculate the loss between the first training feature vector and the pre-stored first label vector to obtain the detection loss, so as to update the network parameters of the model according to the detection loss.
  • The detection loss function may include a loss function for the predicted center coordinates, which can be expressed as: $\lambda \sum_{i=0}^{I}\sum_{j=0}^{J} 1_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right]$, where $\lambda$ represents the adjustment factor, which is a preset parameter value, $i$ indexes the grid cells divided during detection, $I$ represents the total number of grid cells, $j$ indexes the predicted bounding boxes, $J$ represents the total number of predicted bounding boxes, $(x_i, y_i)$ are the predicted center coordinates, and $(\hat{x}_i, \hat{y}_i)$ are the corresponding true values.
  • Object detection models such as YOLO perform image segmentation on the input training samples to obtain the I grid cells, and J predicted bounding boxes are obtained for each grid cell.
  • The superscript obj stands for "object" and indicates that an object is being detected: the indicator $1_{ij}^{obj}$ is 1 when an object falls in predicted bounding box j of grid cell i, and 0 otherwise.
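The center-coordinate loss described above can be sketched in pure Python (an illustration only; the names are assumptions, and `mask` plays the role of the obj indicator over (cell, box) predictors):

```python
def center_loss(pred, truth, mask, lam):
    """Squared-error loss over predicted box centers.

    pred/truth: lists of (x, y) pairs, one per (grid cell, box) predictor.
    mask: 1 if an object is assigned to that predictor, else 0 (the
    "obj" indicator). lam is the preset adjustment factor (lambda).
    """
    return lam * sum(m * ((x - xt) ** 2 + (y - yt) ** 2)
                     for (x, y), (xt, yt), m in zip(pred, truth, mask))
```

For instance, one predictor whose center is off by (1, 1) with `lam=0.5` contributes `0.5 * 2 = 1.0`.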
  • The detection loss function may also include a loss function for the width and height of the predicted bounding box, expressed as: $\lambda \sum_{i=0}^{I}\sum_{j=0}^{J} 1_{ij}^{obj}\left[(\sqrt{w_i}-\sqrt{\hat{w}_i})^2+(\sqrt{h_i}-\sqrt{\hat{h}_i})^2\right]$, where $\sqrt{w_i}$ and $\sqrt{h_i}$ represent the square roots of the predicted width and height, and $\sqrt{\hat{w}_i}$ and $\sqrt{\hat{h}_i}$ represent the true values of the square roots of the width and height from the training sample (other repeated parameters are as defined above and are not explained again).
  • The above measures the detection loss in two respects, the predicted center coordinates and the predicted bounding-box width and height; the first training feature vector output by the detection module specifically includes $(x_i, y_i)$ and $(\sqrt{w_i}, \sqrt{h_i})$, and the first label vector specifically includes $(\hat{x}_i, \hat{y}_i)$ and $(\sqrt{\hat{w}_i}, \sqrt{\hat{h}_i})$. Through the detection loss function, the network parameters of the object detection model can be updated more accurately.
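The width/height term can be sketched the same way (illustrative names; the square roots damp the influence of large boxes, as in YOLO-style losses):

```python
import math

def wh_loss(pred_wh, true_wh, mask, lam):
    """Squared-error loss over square roots of box width and height.

    pred_wh/true_wh: lists of (w, h) pairs per predictor; mask is the
    "obj" indicator; lam is the preset adjustment factor.
    """
    return lam * sum(
        m * ((math.sqrt(w) - math.sqrt(wt)) ** 2
             + (math.sqrt(h) - math.sqrt(ht)) ** 2)
        for (w, h), (wt, ht), m in zip(pred_wh, true_wh, mask))
```

A 4x4 prediction against a 1x1 ground truth gives `(2-1)^2 + (2-1)^2 = 2.0` with `lam=1.0`.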
  • S32 In the model training process, obtain the second training feature vector output by the classification module. The second training feature vector is the result output by the classification module.
  • the second label vector is a feature vector used to verify whether the second training feature vector is correct, and represents the real result.
  • a preset classification loss function is used to calculate the loss between the second training feature vector and the pre-stored second label vector to obtain the classification loss, so as to update the network parameters of the model according to the classification loss.
  • The classification loss function can be expressed as: $\sum_{i=0}^{I} 1_i^{obj}\,(p_i-\hat{p}_i)^2$, where $i$ indexes the grid cells divided during detection, $I$ represents the total number of grid cells, $1_i^{obj}$ takes the value 1 when there is a target in the i-th grid cell and 0 otherwise, $p_i$ represents the predicted classification, and $\hat{p}_i$ represents the true classification from the training sample.
  • The second training feature vector output by the classification module specifically includes $p_i$, and the second label vector specifically includes $\hat{p}_i$. Through the classification loss function, the network parameters of the object classification model can be updated more accurately.
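A sketch of this classification loss (treating each $p_i$ as a vector of class probabilities, as YOLO-style models do, is an assumption; `mask` is the per-cell obj indicator):

```python
def classification_loss(pred_probs, true_probs, mask):
    """Sum of squared errors over class probabilities.

    pred_probs/true_probs: one list of class probabilities per grid
    cell; mask: 1 if the cell contains a target, else 0.
    """
    return sum(m * sum((p - t) ** 2 for p, t in zip(pi, ti))
               for pi, ti, m in zip(pred_probs, true_probs, mask))
```

A cell predicting class 0 with certainty when the truth is class 1 contributes `(1-0)^2 + (0-1)^2 = 2.0`.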
  • S33 In the model training process, obtain the third training feature vector output by the discrimination module, and calculate the discrimination loss by using the preset discriminant loss function according to the third training feature vector.
  • the third training feature vector is the result output by the discrimination module.
  • a preset discriminant loss function is used to calculate the discriminant loss, so as to update the network parameters of the model according to the discriminant loss.
  • The discrimination loss function is computed over the grid cells, where $I$ represents the total number of grid cells, $i$ denotes the grid cells obtained by division during detection, and $D(p_i)$ denotes the prediction output by the discrimination module for the classification result $p_i$.
  • the discriminant loss function can reflect the loss generated by the discriminant module during training, so as to more accurately update the network parameters of the object discriminant model.
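The published text does not preserve the exact form of the discrimination loss; purely as an assumed illustration, one plausible discriminator-style loss over the outputs $D(p_i)$ is the average negative log-likelihood:

```python
import math

def discrimination_loss(d_outputs):
    """One plausible (assumed) discrimination loss.

    d_outputs: the discrimination module's predictions D(p_i) in (0, 1],
    one per grid cell; returns the mean negative log-likelihood.
    """
    return -sum(math.log(d) for d in d_outputs) / len(d_outputs)
```

When the discrimination module is fully confident (`D(p_i) = 1.0` everywhere), this loss is 0; lower confidence yields a larger loss.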
  • Steps S31-S33 take the loss of a single training sample as an example when calculating the detection loss, classification loss and discrimination loss.
  • During training, the detection losses, classification losses and discrimination losses of all training samples are summed to obtain the total detection loss, total classification loss and total discrimination loss, and the model is updated according to these totals.
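The per-sample accumulation just described can be sketched as follows (illustrative names only):

```python
def batch_losses(per_sample):
    """Sum per-sample losses into the three training totals.

    per_sample: list of (detection, classification, discrimination)
    loss tuples, one per training sample.
    """
    det = sum(s[0] for s in per_sample)
    cls = sum(s[1] for s in per_sample)
    dis = sum(s[2] for s in per_sample)
    return det, cls, dis
```

For two samples with losses (1, 2, 3) and (4, 5, 6), the totals are (5, 7, 9).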
  • Steps S31-S33 provide specific implementations for obtaining detection loss, classification loss, and discrimination loss.
  • The obtained detection loss, classification loss and discrimination loss can accurately describe the loss generated during the training process, so that the model can be updated more accurately.
  • S40 Update the object detection model according to the detection loss, classification loss and discrimination loss to obtain the target object detection model.
  • step S40 specifically includes:
  • The back-propagation algorithm is a supervised learning algorithm suitable for multi-layer neural networks, and it is based on the gradient descent method.
  • updating the object detection model using a back propagation algorithm can speed up the update and improve the training efficiency of model training.
  • When the total of the detection loss, classification loss and discrimination loss is large, using the back-propagation algorithm gives better results.
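The parameter update performed by back-propagation with gradient descent can be sketched as a single step (the learning rate is an assumed hyperparameter; real frameworks compute the gradients automatically):

```python
def gradient_step(params, grads, lr=0.01):
    """One gradient-descent update: theta <- theta - lr * dL/dtheta.

    params: current network parameters; grads: gradients of the total
    loss with respect to each parameter; lr: assumed learning rate.
    """
    return [p - lr * g for p, g in zip(params, grads)]
```

With `lr=0.1`, a parameter of 1.0 whose gradient is 10.0 moves to approximately 0.0 after one step.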
  • When the change values of the network parameters are less than the iteration-stop threshold, the update process can be stopped and training ends, yielding a target object detection model with higher detection accuracy.
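The stopping criterion above, parameter changes falling below the iteration-stop threshold, can be sketched as (illustrative names):

```python
def converged(old_params, new_params, threshold):
    """True when every parameter's change is below the stop threshold."""
    return all(abs(n - o) < threshold
               for o, n in zip(old_params, new_params))
```

Training would stop once `converged` returns True for an update, e.g. a change of 0.0005 against a threshold of 0.001.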
  • Steps S41-S42 provide an implementation manner for updating the object detection model, which can quickly complete the update process and obtain a target object detection model with higher detection accuracy.
  • In the embodiments of this application, the image to be detected is first acquired and then input into the target object detection model for object detection to obtain the object detection result of the image to be detected, wherein the target object detection model combines the detection loss, classification loss and discrimination loss to jointly update the object detection model, so that the trained target object detection model has better detection and classification performance.
  • the embodiments of the present application further provide device embodiments that implement the steps and methods in the foregoing method embodiments.
  • Fig. 2 shows a functional block diagram of an object detection device in one-to-one correspondence with the object detection method in the embodiment.
  • the object detection device includes a to-be-detected image acquisition module 10, an object detection result acquisition module 20, and also includes a training sample acquisition module 30, a model training module 40, a loss acquisition module 50, and a target object detection model acquisition module 60 .
  • The functions implemented by the to-be-detected image acquisition module 10, the object detection result acquisition module 20, the training sample acquisition module 30, the model training module 40, the loss acquisition module 50 and the target object detection model acquisition module 60 correspond one-to-one to the steps of the object detection method in the embodiment; to avoid redundant description, this embodiment does not describe them one by one.
  • the to-be-detected image acquisition module 10 is used to acquire the to-be-detected image.
  • the object detection result acquisition module 20 is used to input the image to be detected into the target object detection model for object detection, and obtain the object detection result of the image to be detected.
  • the training sample acquisition module 30 is used to acquire training samples.
  • the model training module 40 is used to input training samples into the object detection model for model training, where the object detection model includes a detection module, a classification module and a discrimination module.
  • the loss acquisition module 50 is used to obtain the detection loss generated by the detection module, the classification loss generated by the classification module, and the discrimination loss generated by the discrimination module during the model training process.
  • the target object detection model acquisition module 60 is used to update the object detection model according to the detection loss, classification loss and discrimination loss to obtain the target object detection model.
  • the object detection device further includes a detection model acquisition unit of the object to be processed, a discrimination module adding unit and an initialization unit.
  • the to-be-processed object detection model acquisition unit is used to acquire the to-be-processed object detection model.
  • the to-be-processed object detection model includes a detection module and a classification module.
  • the discrimination module adding unit is used to add a discrimination module to the object detection model to be processed, wherein the discrimination module is used to discriminate the results output by the detection module and/or the classification module.
  • the initialization unit is used to initialize the object detection model to be processed after adding the discrimination module to obtain the object detection model.
  • the model training module 40 includes a feature vector extraction unit, a normalized feature vector acquisition unit, and a model training unit.
  • the feature vector extraction unit is used to input training samples, and extract feature vectors of the training samples through the object detection model.
  • the model training unit is used to perform model training on the object detection model according to the normalized feature vector.
  • the loss acquisition module 50 includes a detection loss acquisition unit, a classification loss acquisition unit, and a discrimination loss acquisition unit.
  • the detection loss acquisition unit is used to obtain the first training feature vector output by the detection module during the model training process, and calculate the loss between the first training feature vector and the pre-stored first label vector using a preset detection loss function, Get detection loss.
  • the classification loss acquisition unit is used to obtain the second training feature vector output by the classification module during the model training process, and calculate the loss between the second training feature vector and the pre-stored second label vector by using a preset classification loss function, Get classification loss.
  • the discrimination loss acquisition unit is used to obtain the third training feature vector output by the discrimination module during the model training process, and calculate the discrimination loss by using the preset discriminant loss function according to the third training feature vector.
  • the target object detection model acquisition module 60 includes a network parameter update unit and a target object detection model acquisition unit.
  • the network parameter update unit is used to update the network parameters in the object detection model by using the back propagation algorithm according to the detection loss, classification loss and discrimination loss.
  • the target object detection model acquisition unit is used to stop updating the network parameters when the change values of the network parameters are less than the iterative stop threshold to obtain the target object detection model.
  • In the embodiments of this application, the image to be detected is first acquired and then input into the target object detection model for object detection to obtain the object detection result of the image to be detected, wherein the target object detection model combines the detection loss, classification loss and discrimination loss to jointly update the object detection model, so that the trained target object detection model has better detection and classification performance.
  • This embodiment provides a computer non-volatile readable storage medium.
  • The computer non-volatile readable storage medium stores computer-readable instructions.
  • When the computer-readable instructions are executed by a processor, the object detection method in the embodiment is implemented. To avoid repetition, details are not repeated here.
  • the computer-readable instructions realize the functions of the modules/units in the object detection device in the embodiment when they are executed by the processor. To avoid repetition, details are not repeated here.
  • Fig. 3 is a schematic diagram of a computer device provided by an embodiment of the present application.
  • The computer device 70 of this embodiment includes a processor 71, a memory 72, and computer-readable instructions 73 stored in the memory 72 and executable on the processor 71.
  • When the computer-readable instructions 73 are executed by the processor 71, the object detection method in the embodiment is implemented. To avoid repetition, it is not repeated here.
  • When the computer-readable instructions 73 are executed by the processor 71, the functions of the modules/units in the object detection device in the embodiment are realized. To avoid repetition, they are not repeated here.
  • the computer device 70 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the computer device 70 may include, but is not limited to, a processor 71 and a memory 72.
  • FIG. 3 is only an example of the computer device 70 and does not constitute a limitation on it; the device may include more or fewer components than shown in the figure, combine certain components, or have different components.
  • computer equipment may also include input and output devices, network access devices, buses, and so on.
  • the so-called processor 71 may be a central processing unit (Central Processing Unit, CPU), other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 72 may be an internal storage unit of the computer device 70, such as a hard disk or memory of the computer device 70.
  • The memory 72 may also be an external storage device of the computer device 70, such as a plug-in hard disk equipped on the computer device 70, a smart media card (SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), and so on.
  • the memory 72 may also include both an internal storage unit of the computer device 70 and an external storage device.
  • the memory 72 is used to store computer readable instructions and other programs and data required by the computer equipment.
  • the memory 72 can also be used to temporarily store data that has been output or will be output.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an object detection method and device, a computer device and a storage medium in the field of artificial intelligence. The object detection method comprises: acquiring an image to be detected (S1); and inputting the image to be detected into a target object detection model and performing object detection, so as to obtain an object detection result for the image to be detected, wherein the model training steps used by the target object detection model comprise: acquiring a training sample; inputting the training sample into the object detection model and performing model training, the object detection model comprising a detection module, a classification module and a discrimination module; obtaining a detection loss generated by the detection module, a classification loss generated by the classification module and a discrimination loss generated by the discrimination module during the model training process; and updating the object detection model according to the detection loss, the classification loss and the discrimination loss, so as to obtain the target object detection model (S2). The disclosed object detection method can effectively improve the accuracy of object detection.
PCT/CN2019/091100 2019-02-03 2019-06-13 Object detection method and device, computer device, and storage medium WO2020155518A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910108522.6 2019-02-03
CN201910108522.6A CN110020592B (zh) 2019-02-03 2019-02-03 Object detection model training method and device, computer equipment, and storage medium

Publications (1)

Publication Number Publication Date
WO2020155518A1 true WO2020155518A1 (fr) 2020-08-06

Family

ID=67188871

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/091100 WO2020155518A1 (fr) 2019-02-03 2019-06-13 Object detection method and device, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN110020592B (fr)
WO (1) WO2020155518A1 (fr)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110442804A (zh) * 2019-08-13 2019-11-12 北京市商汤科技开发有限公司 Training method, apparatus, device, and storage medium for an object recommendation network
CN112417955B (zh) * 2020-10-14 2024-03-05 国能大渡河沙坪发电有限公司 Inspection video stream processing method and device
CN112580731B (zh) * 2020-12-24 2022-06-24 深圳市对庄科技有限公司 Jadeite product identification method, system, terminal, computer equipment, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100202693A1 (en) * 2009-02-09 2010-08-12 Samsung Electronics Co., Ltd. Apparatus and method for recognizing hand shape in portable terminal
US20130034263A1 (en) * 2011-08-04 2013-02-07 Yuanyuan Ding Adaptive Threshold for Object Detection
CN107038448A (zh) * 2017-03-01 2017-08-11 中国科学院自动化研究所 Target detection model construction method
CN107944443A (zh) * 2017-11-16 2018-04-20 深圳市唯特视科技有限公司 Object consistency detection method based on end-to-end deep learning

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5394959B2 (ja) * 2010-03-23 2014-01-22 富士フイルム株式会社 Discriminator generation apparatus, method, and program
CN106845522B (zh) * 2016-12-26 2020-01-31 华北理工大学 Classification and discrimination system for the metallurgical pelletizing process
GB2564668B (en) * 2017-07-18 2022-04-13 Vision Semantics Ltd Target re-identification
CN108009524B (zh) * 2017-12-25 2021-07-09 西北工业大学 Lane line detection method based on a fully convolutional network
CN108376235A (zh) * 2018-01-15 2018-08-07 深圳市易成自动驾驶技术有限公司 Image detection method and device, and computer-readable storage medium


Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183358A (zh) * 2020-09-29 2021-01-05 新石器慧拓(北京)科技有限公司 Training method and device for a target detection model
CN112183358B (zh) * 2020-09-29 2024-04-23 新石器慧通(北京)科技有限公司 Training method and device for a target detection model
CN112418480A (zh) * 2020-10-14 2021-02-26 上海眼控科技股份有限公司 Meteorological image prediction method and device, computer equipment, and storage medium
CN112508097A (zh) * 2020-12-08 2021-03-16 深圳市优必选科技股份有限公司 Image conversion model training method and device, terminal equipment, and storage medium
CN112508097B (zh) * 2020-12-08 2024-01-19 深圳市优必选科技股份有限公司 Image conversion model training method and device, terminal equipment, and storage medium
CN112561885A (zh) * 2020-12-17 2021-03-26 中国矿业大学 YOLOv4-tiny-based gate valve opening detection method
CN112633351A (zh) * 2020-12-17 2021-04-09 博彦多彩数据科技有限公司 Detection method and device, storage medium, and processor
CN112561885B (zh) * 2020-12-17 2023-04-18 中国矿业大学 YOLOv4-tiny-based gate valve opening detection method
CN112633355A (zh) * 2020-12-18 2021-04-09 北京迈格威科技有限公司 Image data processing method and device, and target detection model training method and device
CN112634245A (zh) * 2020-12-28 2021-04-09 广州绿怡信息科技有限公司 Wear detection model training method, wear detection method, and device
CN112966565A (zh) * 2021-02-05 2021-06-15 深圳市优必选科技股份有限公司 Object detection method and device, terminal equipment, and storage medium
CN113033579A (zh) * 2021-03-31 2021-06-25 北京有竹居网络技术有限公司 Image processing method and device, storage medium, and electronic device
CN113033579B (zh) * 2021-03-31 2023-03-21 北京有竹居网络技术有限公司 Image processing method and device, storage medium, and electronic device
CN113298122A (zh) * 2021-04-30 2021-08-24 北京迈格威科技有限公司 Target detection method and device, and electronic device
CN113591839A (zh) * 2021-06-28 2021-11-02 北京有竹居网络技术有限公司 Feature extraction model construction method, target detection method, and device therefor
CN113591839B (zh) * 2021-06-28 2023-05-09 北京有竹居网络技术有限公司 Feature extraction model construction method, target detection method, and device therefor
CN113627298A (zh) * 2021-07-30 2021-11-09 北京百度网讯科技有限公司 Training method for a target detection model, and method and device for detecting a target object
CN116935102A (zh) * 2023-06-30 2023-10-24 上海蜜度信息技术有限公司 Lightweight model training method, device, equipment, and medium
CN116935102B (zh) * 2023-06-30 2024-02-20 上海蜜度科技股份有限公司 Lightweight model training method, device, equipment, and medium
CN116958607A (zh) * 2023-09-20 2023-10-27 中国人民解放军火箭军工程大学 Data processing method and device for target damage prediction
CN116958607B (zh) * 2023-09-20 2023-12-22 中国人民解放军火箭军工程大学 Data processing method and device for target damage prediction

Also Published As

Publication number Publication date
CN110020592A (zh) 2019-07-16
CN110020592B (zh) 2024-04-09

Similar Documents

Publication Publication Date Title
WO2020155518A1 (fr) Object detection method and device, computer device, and storage medium
EP4148622A1 (fr) Neural network training method, image classification method, and related device
CN108038474B (zh) Face detection method, convolutional neural network parameter training method, device, and medium
WO2018108129A1 (fr) Method and apparatus for identifying object type, and electronic device
WO2017096753A1 (fr) Facial key point tracking method, terminal, and non-volatile computer-readable storage medium
CN111476284A (zh) Image recognition model training and image recognition method, device, and electronic device
WO2021136027A1 (fr) Similar image detection method and apparatus, device, and storage medium
CN109919002B (zh) Yellow no-parking line recognition method, device, computer equipment, and storage medium
WO2019200702A1 (fr) Descreening system training method and apparatus, descreening method and apparatus, and medium
JP2022141931A (ja) Method and apparatus for training a liveness detection model, liveness detection method and apparatus, electronic device, storage medium, and computer program
US9129152B2 (en) Exemplar-based feature weighting
WO2021051497A1 (fr) Pulmonary tuberculosis determination method and apparatus, computer device, and storage medium
CN107862680B (zh) Target tracking optimization method based on correlation filters
US11893773B2 (en) Finger vein comparison method, computer equipment, and storage medium
CN111914908B (zh) Image recognition model training method, image recognition method, and related device
WO2022218396A1 (fr) Image processing method and apparatus, and computer-readable storage medium
CN115797736B (zh) Target detection model training and target detection method, device, equipment, and medium
WO2019232861A1 (fr) Handwriting model training method and apparatus, text recognition method and apparatus, device, and medium
WO2023221608A1 (fr) Mask recognition model training method and apparatus, device, and storage medium
CN113869449A (zh) Model training and image processing method, device, equipment, and storage medium
WO2023109361A1 (fr) Video processing method and system, device, medium, and product
CN108399430A (zh) SAR image ship target detection method based on superpixels and random forest
WO2023088174A1 (fr) Target detection method and apparatus
Meus et al. Embedded vision system for pedestrian detection based on HOG+SVM and use of motion information implemented in Zynq heterogeneous device
US8467607B1 (en) Segmentation-based feature pooling for object models

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19914141; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19914141; Country of ref document: EP; Kind code of ref document: A1)