WO2020151172A1 - Moving object detection method and apparatus, computer device, and storage medium - Google Patents

Moving object detection method and apparatus, computer device, and storage medium

Info

Publication number
WO2020151172A1
WO2020151172A1 (PCT/CN2019/091905, CN2019091905W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
real
moving target
time video
bounding box
Prior art date
Application number
PCT/CN2019/091905
Other languages
English (en)
Chinese (zh)
Inventor
王健宗
彭俊清
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2020151172A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Definitions

  • This application relates to the field of image recognition technology, and in particular to a moving target detection method, device, computer equipment and storage medium.
  • This application provides a moving target detection method, device, computer equipment and storage medium to improve the detection speed and accuracy of moving targets.
  • this application provides a method for detecting a moving target, the method including:
  • this application also provides a moving target detection device, the device including:
  • An obtaining and determining unit configured to obtain real-time video, and determine the moving target in the real-time video
  • An information extraction unit configured to extract a bounding box of the moving target and data information corresponding to the bounding box, the data information including position information and size information of the bounding box in the real-time video;
  • a recognition detection unit configured to input the image in the bounding box into a pre-trained target recognition model for recognition and detection according to the data information, so as to output a classification category corresponding to the moving target;
  • the target labeling unit is configured to label the moving target in the real-time video recording according to the classification category.
  • the present application also provides a computer device, the computer device including a memory and a processor; the memory is used to store a computer program; the processor is used to execute the computer program and, when executing the computer program, to implement the above-mentioned moving target detection method.
  • the present application also provides a computer-readable storage medium that stores a computer program, and when the computer program is executed by a processor, the processor is caused to implement the above-mentioned moving target detection method.
  • This application discloses a moving object detection method, device, equipment and storage medium, which can quickly identify and classify moving objects, such as identifying the car logos and car models corresponding to moving vehicles. The method reduces the amount of calculation during identification and classification, thereby improving the recognition efficiency of moving targets, and is suitable for real-time detection and recognition.
  • FIG. 1 is a schematic flowchart of a method for training a target recognition model provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of an application scenario of a moving target detection method provided by an embodiment of the present application
  • FIG. 3 is a schematic flowchart of a moving target detection method provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of sub-steps of the moving target detection method in FIG. 3;
  • FIG. 5 is a schematic flowchart of steps for determining a moving target provided by an embodiment of the present application
  • FIG. 6 is a schematic block diagram of a model training device provided by an embodiment of the application.
  • FIG. 7 is a schematic block diagram of a moving target detection device provided by an embodiment of the application.
  • FIG. 8 is a schematic block diagram of another moving target detection device provided by an embodiment of the application.
  • FIG. 9 is a schematic block diagram of the structure of a computer device according to an embodiment of the application.
  • the embodiments of the application provide a moving target detection method, device, computer equipment, and storage medium.
  • the moving target detection method can be applied to a terminal or a server to quickly and accurately identify the classification information of the moving target.
  • the moving target detection method is used to identify and classify moving vehicles on the road, and of course it can be used to identify other moving targets, such as non-motorized vehicles, animals, or pedestrians.
  • the following embodiments will take a moving vehicle as a moving target for detailed introduction.
  • FIG. 1 is a schematic flowchart of a method for training a target recognition model provided by an embodiment of the present application.
  • the target recognition model is obtained by model training based on a convolutional neural network.
  • Of course, other networks can also be used for training.
  • GoogLeNet is used for model training to obtain the target recognition model.
  • other networks may also be used, such as AlexNet or VGGNet.
  • the training method of the target recognition model is used to train the target recognition model for application in the moving target detection method.
  • the training method includes step S101 to step S105.
  • S101 Acquire target pictures, where the target pictures are pictures of multiple target objects taken from different angles.
  • The target object is, for example, a vehicle, including vehicles of different models under the same vehicle logo. Of course, it may also be a non-motorized vehicle, a pedestrian, or an animal. Selecting vehicles includes selecting cars with different logos and models, and taking pictures of each car from different angles as the target pictures.
  • the target picture constitutes a picture set for training the target recognition model.
  • S102 Mark the target picture according to the category identifier corresponding to the classification category.
  • the classification category includes vehicle logo and vehicle model
  • the corresponding category identification includes vehicle logo identification and vehicle model identification.
  • the car logo includes: Ferrari, Lamborghini, Bentley, Aston Martin, Mercedes-Benz, BMW, Audi, Chevrolet, Volkswagen or BYD, etc.
  • the vehicle model identifiers include: small cars, mini cars, compact cars, medium cars, high-end cars, luxury models, sedan models or SUV models.
  • the target pictures are marked according to the vehicle logo identifier and the vehicle model identifier corresponding to the classification category, so that each target picture has marking information, that is, each target picture is marked with its vehicle logo and vehicle model.
  • In order to quickly train the target recognition model, after marking each target picture, sample data can be constructed, and step S105 is executed according to the constructed sample data to perform model training.
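  • As an illustration of the marking step, a minimal sketch of one possible annotation format is shown below; the file names and record layout are hypothetical and not specified by the application.

```python
# Hypothetical marking records: each target picture is associated with a
# vehicle logo identifier and a vehicle model identifier (see step S102).
annotations = {
    "audi_front_01.jpg": {"logo": "Audi", "model": "medium car"},
    "byd_side_02.jpg": {"logo": "BYD", "model": "compact car"},
    "benz_rear_03.jpg": {"logo": "Mercedes-Benz", "model": "SUV"},
}
```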
  • S103 Perform an image processing operation on the target picture to change the picture parameters of the target picture, and use the target picture whose picture parameters are changed as a new target picture.
  • image processing operations include: size adjustment, cropping, rotation, image algorithm processing, etc.
  • image algorithm processing includes: a color temperature adjustment algorithm, an exposure adjustment algorithm, a contrast adjustment algorithm, a highlight recovery algorithm, a low light compensation algorithm, a white balance algorithm, a sharpness adjustment algorithm, a fogging algorithm, and a natural saturation adjustment algorithm.
  • the picture parameters include size information, pixel size, color temperature parameters, exposure, contrast, white balance, sharpness, fogging parameters, and natural saturation.
  • performing an image processing operation on the target picture to change the picture parameters of the target picture, and using the target picture whose picture parameters are changed as a new target picture, refers to performing one or more of the aforementioned image processing operations on the target picture, alone or in combination, to change the picture parameters of the target picture.
  • the diversity of the samples is increased, and the samples are more representative of the real environment, thereby improving the recognition accuracy of the model.
  • S104 Construct sample data according to the new target picture and the target picture. Specifically, the target picture whose picture parameters are changed is saved as a new target picture, and the new target picture and the original target picture are combined to form the sample data, which increases the number of samples and at the same time increases the diversity of the samples.
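  • A minimal augmentation sketch of step S103 is given below, using Pillow to apply a few of the listed image processing operations (resizing, rotation, contrast, exposure and saturation adjustment); the parameter ranges and file handling are illustrative assumptions rather than values taken from the application.

```python
import random
from PIL import Image, ImageEnhance

def make_new_target_picture(src_path, dst_path):
    """Apply a random combination of image processing operations to a target
    picture and save the changed picture as a new target picture (step S103)."""
    img = Image.open(src_path)
    img = img.resize((img.width // 2, img.height // 2))                   # size adjustment
    img = img.rotate(random.uniform(-10, 10), expand=True)                # rotation
    img = ImageEnhance.Contrast(img).enhance(random.uniform(0.8, 1.2))    # contrast
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.8, 1.2))  # exposure
    img = ImageEnhance.Color(img).enhance(random.uniform(0.8, 1.2))       # saturation
    img.save(dst_path)

# Usage: the new pictures are combined with the originals to form the sample data.
# make_new_target_picture("audi_front_01.jpg", "audi_front_01_aug.jpg")
```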
  • S105 Based on the convolutional neural network, perform model training according to the sample data to obtain a target recognition model, and use the obtained target recognition model as a pre-trained target recognition model.
  • the constructed sample data is used for model training through GoogLeNet.
  • Specifically, back-propagation training can be used.
  • the convolutional layer and pooling layer of GoogLeNet are used to extract features from the input sample data, and the fully connected layer is used as a classifier.
  • the output of this classifier is the probability value of different car logos and models.
  • the convolutional neural network takes the constructed sample data as input and goes through the forward propagation step (convolution, ReLU activation and pooling operations, followed by forward propagation in the fully connected layer), and finally obtains the output probability of each category.
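  • The following is a minimal training sketch of step S105 using the GoogLeNet implementation in torchvision; the directory layout, hyperparameters and class set are assumptions for illustration, not values prescribed by the application.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical layout: sample_data/<logo>_<model>/*.jpg, one folder per class.
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
dataset = datasets.ImageFolder("sample_data", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.googlenet(weights=None, aux_logits=False, init_weights=True,
                         num_classes=len(dataset.classes))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        logits = model(images)           # forward pass: convolution, ReLU, pooling, FC
        loss = criterion(logits, labels)
        loss.backward()                  # back-propagation
        optimizer.step()

# At inference time, logits.softmax(dim=1) gives the probability of each category.
```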
  • the terminal can be an electronic device such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, a personal digital assistant, and a wearable device
  • the server can be an independent server or a server cluster.
  • the compression processing specifically includes pruning processing, quantization processing, and Huffman encoding processing on the target recognition model, etc., to reduce the size of the target recognition model, and thereby facilitate storage in a terminal with a smaller capacity.
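  • A compression sketch along the lines described above is shown next, using PyTorch utilities for pruning and dynamic quantization; the pruning ratio, layer choices and file names are illustrative assumptions, and Huffman coding of the stored weights would be a separate serialization step not shown here.

```python
import torch
import torch.nn.utils.prune as prune
from torchvision import models

model = models.googlenet(weights=None, aux_logits=False, init_weights=True, num_classes=20)
# model.load_state_dict(torch.load("target_recognition_model.pt"))  # hypothetical checkpoint

# Pruning: zero out the 30% smallest-magnitude weights in every convolution layer.
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # make the pruning permanent

# Quantization: store the fully connected layer weights as 8-bit integers.
model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

torch.save(model.state_dict(), "target_recognition_model_compressed.pt")
```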
  • the training method provided by the above-mentioned embodiments shoots target pictures of multiple target objects from different angles and uses image processing operations to process the target pictures, so as to increase the diversity of the sample data; based on the convolutional neural network, model training is performed according to the constructed sample data to obtain a target recognition model, and the obtained target recognition model is used as the pre-trained target recognition model in the moving target detection method, thereby improving the recognition accuracy of moving targets.
  • FIG. 2 is a schematic diagram of an application scenario of the moving target detection method provided by an embodiment of the present application.
  • This application scenario includes servers, terminals, and traffic monitoring equipment, and traffic monitoring equipment includes cameras.
  • the server is used to train the target recognition model, and save the trained target recognition model in the terminal or save it after compression;
  • the camera is used to collect real-time video of moving vehicles on the traffic road, and send the collected real-time video to the terminal;
  • the terminal is used to implement the moving target detection method to identify the category of the detected moving vehicle.
  • FIG. 3 is a schematic flowchart of a method for detecting a moving target provided by an embodiment of the present application.
  • the moving object detection method can be applied to a terminal or a server, and quickly identify the category of the detected moving object from the real-time video with a small amount of calculation.
  • the moving target detection method specifically includes steps S201 to S204, which will be described in detail below in conjunction with FIG. 2.
  • S201 Obtain real-time video, and determine the moving target in the real-time video. The real-time video is, for example, video of moving vehicles on a traffic road captured in real time by a camera in the traffic monitoring device.
  • The moving target determined in the real-time video is, for example, a moving vehicle.
  • use the inter-frame difference method to detect the real-time video to determine the moving vehicle.
  • Of course, other detection methods can also be used to determine the moving vehicle, such as image recognition or shape recognition of the moving vehicles in the real-time video.
  • S202 Extract a bounding box of the moving target and data information corresponding to the bounding box.
  • the data information includes position information and size information of the bounding box in the real-time video. Extracting the bounding box of the moving target and the data information corresponding to the bounding box includes: determining the bounding box of the moving target in the video frame image of the real-time video; and extracting the position information and size information of the bounding box in the real-time video.
  • step S202 includes sub-steps S202a and S202b.
  • S202a Determine a bounding box corresponding to the moving target according to the horizontal width and vertical length of the moving target in the real-time video;
  • S202b Extract the horizontal width and vertical length as the size information, and the center coordinates of the bounding box as the position information.
  • Specifically, the corresponding bounding box is determined according to the maximum horizontal width and vertical length of the moving target in the real-time video; the maximum horizontal width and vertical length are extracted as the size information, and the center coordinate value of the bounding box is obtained as the position information. In this way, the size and position information of the bounding box can be obtained, and this size and position information is the data information corresponding to the bounding box.
  • a frame of image in the real-time video may include multiple moving targets, such as multiple moving vehicles; each moving vehicle corresponds to a bounding box, so a video frame of the real-time video may correspond to multiple bounding boxes.
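  • A minimal sketch of the bounding box extraction in step S202 is given below, assuming the binary motion mask produced by the inter-frame difference step described later; the helper name and the OpenCV-based approach are illustrative choices, not mandated by the application.

```python
import cv2

def extract_bounding_boxes(binary_mask):
    """Return the data information (center position and size) of the bounding box
    for each motion region in a 0/255 binary mask (step S202)."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)   # w: horizontal width, h: vertical length
        boxes.append({"center": (x + w // 2, y + h // 2),   # position information
                      "size": (w, h)})                      # size information
    return boxes
```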
  • the image in the bounding box can be determined according to the data information of the bounding box, and then the image in the bounding box is input to a pre-trained target recognition model for prediction, so as to output the classification category corresponding to the moving target.
  • the classification category of the moving vehicle recognized by the target recognition model may include information such as the car logo and the car model. Specifically, as shown in FIG. 2, the predicted logo and model of the moving vehicle are, for example, Audi and a sedan model.
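  • The sketch below illustrates step S203: the image inside the bounding box is cropped out of the video frame according to the data information and passed to the pre-trained target recognition model; the preprocessing pipeline and helper names are assumptions for illustration.

```python
import cv2
import torch
from torchvision import transforms

preprocess = transforms.Compose([transforms.ToPILImage(),
                                 transforms.Resize((224, 224)),
                                 transforms.ToTensor()])

def classify_box(frame, box, model, class_names):
    """Crop the bounding-box image from the frame and output the classification
    category predicted by the pre-trained target recognition model (step S203)."""
    (cx, cy), (w, h) = box["center"], box["size"]
    x0, y0 = max(cx - w // 2, 0), max(cy - h // 2, 0)
    crop = frame[y0:y0 + h, x0:x0 + w]
    tensor = preprocess(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)).unsqueeze(0)
    model.eval()
    with torch.no_grad():
        probs = model(tensor).softmax(dim=1)     # probability of each category
    return class_names[int(probs.argmax())]
```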
  • S204 Mark the moving target in the real-time video recording according to the classification category.
  • marking the moving targets in the real-time recording according to the classification category includes displaying the classification category output by the model at the moving target in the real-time recording.
  • the bounding box can also be displayed in the real-time video, and then the classification category can be displayed in the bounding box.
  • other labeling methods may also be used to label the moving target in the real-time video recording. Therefore, by marking the moving target, it is convenient for the user to locate or track the moving vehicle.
  • each moving target needs to be marked separately for the user to recognize.
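  • One way to realize the marking in step S204 is sketched below: the bounding box is drawn in the real-time video frame and the classification category output by the model is displayed at the moving target; the colors, font and helper names are illustrative assumptions.

```python
import cv2

def label_moving_target(frame, box, category):
    """Display the bounding box and the classification category (e.g. "Audi / sedan")
    at the moving target in the real-time video frame (step S204)."""
    (cx, cy), (w, h) = box["center"], box["size"]
    x0, y0 = cx - w // 2, cy - h // 2
    cv2.rectangle(frame, (x0, y0), (x0 + w, y0 + h), (0, 255, 0), 2)
    cv2.putText(frame, category, (x0, max(y0 - 10, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    return frame
```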
  • the moving target detection method can quickly recognize and classify moving targets, such as recognizing the car logos and car models corresponding to moving vehicles. Specifically, after the real-time video is obtained and the moving target in it is determined, the bounding box of the moving target and the data information corresponding to the bounding box are extracted; the image in the bounding box is determined according to the data information corresponding to the bounding box, and the image in the bounding box is then input into the pre-trained target recognition model to output the classification category of the moving target. This realizes the recognition and classification of moving targets in real-time video. The method reduces the amount of calculation during classification, thereby improving the recognition efficiency of moving targets, and is suitable for real-time detection and recognition.
  • FIG. 5 is a schematic flowchart of steps for determining a moving target provided by an embodiment of the present application.
  • the steps of determining the moving target specifically include the following:
  • S301 Determine a current frame image from the real-time video recording, and use the current frame image as a reference image.
  • the current frame image is determined from the real-time video, and the corresponding video picture can be selected as the current frame image according to the user's operation on the real-time video. For example, when the real-time video is played, the user clicks to select the currently played video frame, and the video frame selected by the user is used as the current frame image. Of course, the user can also directly specify the corresponding video frame as the current frame image.
  • the determined current frame image is taken as the reference image, and the reference image is expressed as f k (i, j), where k indicates that the current frame image is the k-th video frame in the image sequence of the real-time video, k is a positive integer, and (i, j) are the discrete image coordinates in the video frame.
  • the moving speed of the moving target can be determined first, and then the corresponding preset number of frames is selected according to the moving speed, where different moving speeds correspond to different preset frame numbers.
  • the movement speed is a range value, of course, it can also be a specific value.
  • the movement speed range value is, for example, 90 to 110 km/h; the specific movement speed value is, for example, 100 km/h.
  • the moving speed of the moving target to be determined may be measured by a speed measuring instrument, such as a laser speedometer.
  • the moving speed of the moving target can also be calculated based on two images with a certain number of frames in the interval.
  • the speed and accuracy of moving target recognition are improved.
  • the moving speed of the moving target to be determined can be determined according to the environmental parameters of the moving target.
  • the delayed preset number of frames is set according to the movement speed. For example, a vehicle in the leftmost lane on an expressway moves fastest, so its corresponding delayed preset number of frames is smallest, for example 1 or 2 frames; vehicles in the middle lane of the expressway move somewhat more slowly, so the preset number of frames is set to a delay of 4 or 5 frames; vehicles in the rightmost lane of the expressway move more slowly still, so the preset number of frames is set to a delay of 7 or 8 frames; vehicles on urban roads move relatively slowly, and the corresponding delayed preset number of frames can be set to a larger value, such as 9 or 10 frames.
  • the preset frame number corresponding to the acquired movement speed range is determined, and it can be changed according to the actual situation of the moving target, thereby quickly and accurately determining the moving targets in the real-time video.
  • For example, if the moving speed of the moving vehicle is approximately 110 km/h or more, the preset number of frames corresponding to the obtained moving speed range is determined to be 2 frames according to the preset correspondence between moving speed ranges and preset frame numbers.
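  • A small sketch of this speed-to-frame-number correspondence follows; the speed thresholds are illustrative values chosen to match the examples above and are not fixed by the application.

```python
def preset_frame_number(speed_kmh):
    """Map the measured movement speed (km/h) to the delayed preset number of frames,
    following the example correspondence described above (illustrative thresholds)."""
    if speed_kmh >= 110:   # fastest traffic, e.g. leftmost expressway lane
        return 2
    if speed_kmh >= 90:    # middle expressway lane
        return 5
    if speed_kmh >= 60:    # rightmost expressway lane
        return 8
    return 10              # slower urban-road traffic

# Usage: preset_frame_number(100) -> 5
```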
  • the reference image is expressed as f k (i, j).
  • the predetermined number of frames is determined to be 2 frames, and then an image that is 2 frames behind the reference image can be extracted
  • the delayed frame image is expressed as f k+2 (i, j).
  • the delayed frame image and the current frame image are subtracted by the difference method to obtain a difference image, which is expressed as D k (i, j) = |f k+2 (i, j) − f k (i, j)|, where D k represents the difference image, f k (i, j) represents the reference image, f k+2 (i, j) represents the delayed frame image, and (i, j) represents the discrete image coordinates.
  • S306 Perform threshold processing on the difference image to obtain a binary image corresponding to the difference image.
  • performing threshold processing on the difference image to obtain the binary image corresponding to the difference image includes: determining the pixels in the difference image whose pixel values are greater than a preset threshold; and determining the binary image corresponding to the difference image according to the determined pixels.
  • the threshold processing is expressed as S k (i, j) = 1 if D k (i, j) ≥ T, and S k (i, j) = 0 otherwise, where S k (i, j) represents the binary image, T is the preset threshold, (i, j) represents the coordinates of a discrete image, and D k represents the difference image; that is, pixels whose difference value is greater than or equal to the preset threshold are set to 1, and pixels whose difference value is less than the preset threshold are set to 0.
  • determining the moving target in the real-time video according to the binary image includes: setting the area where S k (i, j) is 1 in the binary image as the motion area; and removing noise from the motion area through morphological processing and connectivity analysis to determine the moving target in the real-time video.
  • Specifically, the area where S k (i, j) is 1 in the binary image is set as the motion area, and the motion area is then processed by morphological processing and connectivity analysis to remove noise, so as to obtain the effective moving target.
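  • The sketch below ties steps S304 to S307 together with OpenCV: frame differencing, thresholding, morphological processing and connectivity analysis; the kernel size, threshold and minimum region area are illustrative assumptions, not values given by the application.

```python
import cv2
import numpy as np

def determine_moving_targets(frames, k, n, threshold=25, min_area=200):
    """Inter-frame difference sketch: subtract the frame delayed by n frames from the
    reference frame k, threshold the result, then clean the binary image with
    morphological processing and connectivity analysis (steps S304-S307)."""
    reference = cv2.cvtColor(frames[k], cv2.COLOR_BGR2GRAY)       # f_k(i, j)
    delayed = cv2.cvtColor(frames[k + n], cv2.COLOR_BGR2GRAY)     # f_{k+n}(i, j)

    diff = cv2.absdiff(delayed, reference)                              # D_k(i, j)
    _, binary = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)  # S_k(i, j)

    # Morphological processing: remove isolated noise and fill small holes.
    kernel = np.ones((5, 5), np.uint8)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

    # Connectivity analysis: keep only sufficiently large connected motion regions.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    mask = np.zeros_like(binary)
    for i in range(1, num):                       # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            mask[labels == i] = 255
    return mask
```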
  • FIG. 6 is a schematic block diagram of a model training device provided by an embodiment of the present application.
  • the model training device may be configured in a server and used to execute the aforementioned target recognition model training method.
  • the model training device 400 includes: a picture acquisition unit 401, a picture labeling unit 402, a parameter changing unit 403, a data construction unit 404, and a model training unit 405.
  • the picture acquiring unit 401 is configured to acquire a target picture, where the target picture is a picture of multiple target objects taken from different angles.
  • the picture marking unit 402 is configured to mark the target picture according to the category identifier corresponding to the classification category.
  • the parameter changing unit 403 is configured to perform an image processing operation on the target picture to change the picture parameters of the target picture, and use the target picture whose picture parameters are changed as a new target picture.
  • the image processing operations include: size adjustment, cropping processing, rotation processing, image algorithm processing, etc.
  • the image algorithm processing includes: a color temperature adjustment algorithm, an exposure adjustment algorithm, a contrast adjustment algorithm, a highlight restoration algorithm, a low light compensation algorithm, a white balance algorithm, a sharpness adjustment algorithm, a fogging algorithm, and a natural saturation adjustment algorithm.
  • the data construction unit 404 is configured to construct sample data according to the new target picture and the target picture.
  • the model training unit 405 is configured to perform model training according to the sample data based on the convolutional neural network to obtain a target recognition model, and use the obtained target recognition model as a pre-trained target recognition model.
  • FIG. 7 is a schematic block diagram of a moving target detection device provided in an embodiment of the present application, and the moving target detection device is used to execute the aforementioned moving target detection method.
  • the moving target detection device can be configured in a server or a terminal.
  • the moving target detection device 500 includes: an acquisition and determination unit 501, an information extraction unit 502, an identification and detection unit 503, and a target labeling unit 504.
  • the obtaining and determining unit 501 is configured to obtain real-time video and determine the moving target in the real-time video.
  • the information extraction unit 502 is configured to extract a bounding box of the moving target and data information corresponding to the bounding box, the data information including position information and size information of the bounding box in the real-time video recording.
  • the information extraction unit 502 is specifically configured to determine the bounding box corresponding to the moving target according to the horizontal width and vertical length of the moving target in the real-time video, and to extract the horizontal width and vertical length as the size information and the center coordinates of the bounding box as the position information.
  • the recognition and detection unit 503 is configured to input the image in the bounding box into a pre-trained target recognition model for recognition and detection according to the data information, so as to output the classification category corresponding to the moving target;
  • the target labeling unit 504 is configured to label the moving target in the real-time video recording according to the classification category.
  • the acquisition and determination unit 501 includes: a reference determination unit 5011, a speed determination unit 5012, a frame number determination unit 5013, an image extraction unit 5014, an image subtraction unit 5015, and an image processing unit 5016.
  • the reference determining unit 5011 is configured to determine a current frame image from the real-time video recording, and use the current frame image as a reference image.
  • the speed determining unit 5012 is used to obtain the moving speed of the moving target to be determined, where different moving speeds correspond to different numbers of preset frames.
  • the frame number determining unit 5013 is configured to determine the preset frame number corresponding to the acquired motion speed range according to the preset correspondence between the motion speed range and the preset frame number.
  • the image extraction unit 5014 is configured to extract a delayed frame image that is delayed by a preset number of frames relative to the reference image.
  • the image subtraction unit 5015 is configured to subtract the delayed frame image and the current frame image to obtain a difference image.
  • the image processing unit 5016 is configured to perform threshold processing on the difference image to obtain a binary image corresponding to the difference image.
  • the above-mentioned apparatus may be implemented in the form of a computer program, and the computer program may run on the computer device as shown in FIG. 9.
  • FIG. 9 is a schematic block diagram of the structure of a computer device according to an embodiment of the present application.
  • the computer equipment can be a server or a terminal.
  • the computer device includes a processor, a memory, and a network interface connected through a system bus, where the memory may include a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium can store an operating system and a computer program.
  • the computer program includes program instructions, and when the program instructions are executed, the processor can execute any moving target detection method.
  • the processor is used to provide calculation and control capabilities and support the operation of the entire computer equipment.
  • the internal memory provides an environment for the running of the computer program in the non-volatile storage medium.
  • the processor can execute any method for detecting moving objects.
  • the network interface is used for network communication, such as sending assigned tasks.
  • FIG. 9 is only a block diagram of part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied.
  • the specific computer device may include more or fewer parts than shown in the figure, combine some parts, or have a different arrangement of parts.
  • the processor may be a central processing unit (Central Processing Unit, CPU), and the processor may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor.
  • the embodiments of the present application also provide a computer-readable storage medium, the computer-readable storage medium stores a computer program, the computer program includes program instructions, and the processor executes the program instructions to implement the present application Any of the moving target detection methods provided by the embodiments.
  • the computer-readable storage medium may be the internal storage unit of the computer device described in the foregoing embodiment, such as the hard disk or memory of the computer device.
  • the computer-readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the computer device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a moving object detection method and apparatus, a device, and a storage medium. The method comprises the steps of: acquiring real-time video, and first determining a moving object in the real-time video; extracting a bounding box of the moving object and data information corresponding to the bounding box; inputting the image contained in the bounding box into a pre-trained target recognition model according to the data information to perform recognition detection, so as to obtain a classification category corresponding to the moving object; and labeling the moving object in the real-time video according to the classification category.
PCT/CN2019/091905 2019-01-23 2019-06-19 Moving object detection method and apparatus, computer device, and storage medium WO2020151172A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910065021.4 2019-01-23
CN201910065021.4A CN109919008A (zh) 2019-01-23 2019-01-23 运动目标检测方法、装置、计算机设备及存储介质

Publications (1)

Publication Number Publication Date
WO2020151172A1 true WO2020151172A1 (fr) 2020-07-30

Family

ID=66960695

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/091905 WO2020151172A1 (fr) 2019-01-23 2019-06-19 Moving object detection method and apparatus, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN109919008A (fr)
WO (1) WO2020151172A1 (fr)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111881854A (zh) * 2020-07-31 2020-11-03 上海商汤临港智能科技有限公司 动作识别方法、装置、计算机设备及存储介质
CN112036462A (zh) * 2020-08-25 2020-12-04 北京三快在线科技有限公司 一种模型训练以及目标检测的方法及装置
CN112101134A (zh) * 2020-08-24 2020-12-18 深圳市商汤科技有限公司 物体的检测方法及装置、电子设备和存储介质
CN112149546A (zh) * 2020-09-16 2020-12-29 珠海格力电器股份有限公司 一种信息处理方法、装置、电子设备及存储介质
CN112465868A (zh) * 2020-11-30 2021-03-09 浙江大华汽车技术有限公司 一种目标检测跟踪方法、装置、存储介质及电子装置
CN112733741A (zh) * 2021-01-14 2021-04-30 苏州挚途科技有限公司 交通标识牌识别方法、装置和电子设备
CN113379591A (zh) * 2021-06-21 2021-09-10 中国科学技术大学 速度确定方法、速度确定装置、电子设备及存储介质
CN113537207A (zh) * 2020-12-22 2021-10-22 腾讯科技(深圳)有限公司 视频处理方法、模型的训练方法、装置以及电子设备
CN113822146A (zh) * 2021-08-02 2021-12-21 浙江大华技术股份有限公司 目标检测方法、终端设备及计算机存储介质
CN113838110A (zh) * 2021-09-08 2021-12-24 重庆紫光华山智安科技有限公司 目标检测结果的校验方法、装置、存储介质和电子设备
CN114155594A (zh) * 2020-08-17 2022-03-08 中移(成都)信息通信科技有限公司 行为识别方法、装置、设备和存储介质

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532859B (zh) * 2019-07-18 2021-01-22 西安电子科技大学 基于深度进化剪枝卷积网的遥感图像目标检测方法
CN111723634B (zh) * 2019-12-17 2024-04-16 中国科学院上海微系统与信息技术研究所 一种图像检测方法、装置、电子设备及存储介质
CN111222423B (zh) * 2019-12-26 2024-05-28 深圳供电局有限公司 基于作业区域的目标识别方法、装置、计算机设备
CN113129331B (zh) * 2019-12-31 2024-01-30 中移(成都)信息通信科技有限公司 目标移动轨迹检测方法、装置、设备及计算机存储介质
CN111461209B (zh) * 2020-03-30 2024-04-09 深圳市凯立德科技股份有限公司 一种模型训练装置和方法
CN111582377A (zh) * 2020-05-09 2020-08-25 济南浪潮高新科技投资发展有限公司 一种基于模型压缩的边缘端目标检测方法及系统
CN111866449B (zh) * 2020-06-17 2022-03-29 中国人民解放军国防科技大学 一种智能视频采集系统及方法
CN112055172B (zh) * 2020-08-19 2022-04-19 浙江大华技术股份有限公司 一种监控视频的处理方法、装置以及存储介质
WO2022240363A2 (fr) * 2021-05-12 2022-11-17 Nanyang Technological University Systèmes d'assemblage de repas robotisé et procédés robotiques pour estimation de pose d'objet en temps réel d'articles alimentaires aléatoires à ressemblance élevée
CN113205068B (zh) * 2021-05-27 2024-06-14 苏州魔视智能科技有限公司 洒水车喷头监控方法、电子设备及车辆
CN113192109B (zh) * 2021-06-01 2022-01-11 北京海天瑞声科技股份有限公司 在连续帧中识别物体运动状态的方法及装置
CN113344967B (zh) * 2021-06-07 2023-04-07 哈尔滨理工大学 一种复杂背景下的动态目标识别追踪方法
CN113822137A (zh) * 2021-07-23 2021-12-21 腾讯科技(深圳)有限公司 一种数据标注方法、装置、设备及计算机可读存储介质
CN114581798A (zh) * 2022-02-18 2022-06-03 广州中科云图智能科技有限公司 目标检测方法、装置、飞行设备及计算机可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103559498A (zh) * 2013-09-24 2014-02-05 北京环境特性研究所 基于多特征融合的快速人车目标分类方法
CN104700430A (zh) * 2014-10-05 2015-06-10 安徽工程大学 机载显示器的运动检测方法
CN106991668A (zh) * 2017-03-09 2017-07-28 南京邮电大学 一种天网摄像头拍摄画面的评价方法
CN108022249A (zh) * 2017-11-29 2018-05-11 中国科学院遥感与数字地球研究所 一种遥感视频卫星运动车辆目标感兴趣区域自动提取方法
CN109035287A (zh) * 2018-07-02 2018-12-18 广州杰赛科技股份有限公司 前景图像提取方法和装置、运动车辆识别方法和装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106878674B (zh) * 2017-01-10 2019-08-30 哈尔滨工业大学深圳研究生院 一种基于监控视频的停车检测方法及装置
CN109117794A (zh) * 2018-08-16 2019-01-01 广东工业大学 一种运动目标行为跟踪方法、装置、设备及可读存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103559498A (zh) * 2013-09-24 2014-02-05 北京环境特性研究所 基于多特征融合的快速人车目标分类方法
CN104700430A (zh) * 2014-10-05 2015-06-10 安徽工程大学 机载显示器的运动检测方法
CN106991668A (zh) * 2017-03-09 2017-07-28 南京邮电大学 一种天网摄像头拍摄画面的评价方法
CN108022249A (zh) * 2017-11-29 2018-05-11 中国科学院遥感与数字地球研究所 一种遥感视频卫星运动车辆目标感兴趣区域自动提取方法
CN109035287A (zh) * 2018-07-02 2018-12-18 广州杰赛科技股份有限公司 前景图像提取方法和装置、运动车辆识别方法和装置

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111881854A (zh) * 2020-07-31 2020-11-03 上海商汤临港智能科技有限公司 动作识别方法、装置、计算机设备及存储介质
CN114155594A (zh) * 2020-08-17 2022-03-08 中移(成都)信息通信科技有限公司 行为识别方法、装置、设备和存储介质
CN112101134A (zh) * 2020-08-24 2020-12-18 深圳市商汤科技有限公司 物体的检测方法及装置、电子设备和存储介质
CN112101134B (zh) * 2020-08-24 2024-01-02 深圳市商汤科技有限公司 物体的检测方法及装置、电子设备和存储介质
CN112036462A (zh) * 2020-08-25 2020-12-04 北京三快在线科技有限公司 一种模型训练以及目标检测的方法及装置
CN112149546A (zh) * 2020-09-16 2020-12-29 珠海格力电器股份有限公司 一种信息处理方法、装置、电子设备及存储介质
CN112149546B (zh) * 2020-09-16 2024-05-03 珠海格力电器股份有限公司 一种信息处理方法、装置、电子设备及存储介质
CN112465868A (zh) * 2020-11-30 2021-03-09 浙江大华汽车技术有限公司 一种目标检测跟踪方法、装置、存储介质及电子装置
CN112465868B (zh) * 2020-11-30 2024-01-12 浙江华锐捷技术有限公司 一种目标检测跟踪方法、装置、存储介质及电子装置
CN113537207B (zh) * 2020-12-22 2023-09-12 腾讯科技(深圳)有限公司 视频处理方法、模型的训练方法、装置以及电子设备
CN113537207A (zh) * 2020-12-22 2021-10-22 腾讯科技(深圳)有限公司 视频处理方法、模型的训练方法、装置以及电子设备
CN112733741A (zh) * 2021-01-14 2021-04-30 苏州挚途科技有限公司 交通标识牌识别方法、装置和电子设备
CN113379591A (zh) * 2021-06-21 2021-09-10 中国科学技术大学 速度确定方法、速度确定装置、电子设备及存储介质
CN113379591B (zh) * 2021-06-21 2024-02-27 中国科学技术大学 速度确定方法、速度确定装置、电子设备及存储介质
CN113822146A (zh) * 2021-08-02 2021-12-21 浙江大华技术股份有限公司 目标检测方法、终端设备及计算机存储介质
CN113838110A (zh) * 2021-09-08 2021-12-24 重庆紫光华山智安科技有限公司 目标检测结果的校验方法、装置、存储介质和电子设备
CN113838110B (zh) * 2021-09-08 2023-09-05 重庆紫光华山智安科技有限公司 目标检测结果的校验方法、装置、存储介质和电子设备

Also Published As

Publication number Publication date
CN109919008A (zh) 2019-06-21

Similar Documents

Publication Publication Date Title
WO2020151172A1 (fr) Procédé et appareil de détection d'objet en mouvement, dispositif informatique, et support de stockage
Zhang et al. CCTSDB 2021: a more comprehensive traffic sign detection benchmark
WO2022126377A1 (fr) Procédé et appareil de détection de ligne de voie de circulation, dispositif terminal et support de stockage lisible
Pavlic et al. Classification of images in fog and fog-free scenes for use in vehicles
US10970824B2 (en) Method and apparatus for removing turbid objects in an image
CN111435446A (zh) 一种基于LeNet车牌识别方法及装置
Dhatbale et al. Deep learning techniques for vehicle trajectory extraction in mixed traffic
Wei et al. Detection of lane line based on Robert operator
CN116052090A (zh) 图像质量评估方法、模型训练方法、装置、设备及介质
CN113688839B (zh) 视频处理方法及装置、电子设备、计算机可读存储介质
Lashkov et al. Edge-computing-facilitated nighttime vehicle detection investigations with CLAHE-enhanced images
CN117218622A (zh) 路况检测方法、电子设备及存储介质
CN111009136A (zh) 一种高速公路行驶速度异常车辆检测方法、装置及系统
CN111127358A (zh) 图像处理方法、装置及存储介质
CN112435278B (zh) 一种基于动态目标检测的视觉slam方法及装置
CN110154896B (zh) 一种检测障碍物的方法以及设备
Yaghoobi Ershadi Improving vehicle tracking rate and speed estimation in dusty and snowy weather conditions with a vibrating camera
CN113902047B (zh) 图像元素匹配方法、装置、设备以及存储介质
Muniruzzaman et al. Deterministic algorithm for traffic detection in free-flow and congestion using video sensor
Xiong et al. Fast and robust approaches for lane detection using multi‐camera fusion in complex scenes
CN110765940B (zh) 目标对象统计方法和装置
CN108985233B (zh) 一种基于数字图像相关的高精度车辆跟踪方法
Jehad et al. Developing and validating a real time video based traffic counting and classification
CN111104885A (zh) 基于视频深度学习的车辆识别方法
Fleck et al. Low-Power Traffic Surveillance using Multiple RGB and Event Cameras: A Survey

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19911806

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19911806

Country of ref document: EP

Kind code of ref document: A1