CN110717933B - Post-processing method, device, equipment and medium for moving object missed detection


Info

Publication number
CN110717933B
CN110717933B (application CN201910959572.5A)
Authority
CN
China
Prior art keywords
detection
target
frame
detection frame
foreground
Prior art date
Legal status
Active
Application number
CN201910959572.5A
Other languages
Chinese (zh)
Other versions
CN110717933A (en)
Inventor
刘博
舒茂
Current Assignee
Apollo Intelligent Technology Beijing Co Ltd
Original Assignee
Apollo Intelligent Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Apollo Intelligent Technology Beijing Co Ltd
Priority to CN201910959572.5A
Publication of CN110717933A
Application granted
Publication of CN110717933B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a post-processing method, device, equipment and medium for missed detection of a moving object, relates to moving object detection technology, and can be used in the field of automatic driving. The specific implementation scheme is as follows: acquire at least one motion foreground in the current frame image by using the background model of the current frame; obtain the target object detection result of a target detection model for the current frame image, where the detection result at least includes the position and confidence of the detection frame of each target object; and, combining the detection result, judge in turn whether each motion foreground has a detection frame whose confidence reaches a preset threshold, and determine from the judgment result whether any motion foreground corresponds to a missed moving object. By combining background modeling, the embodiment of the application determines whether the target detection model has missed a detection without retraining the model, which shortens the period for judging missed detection of moving objects and improves moving object detection efficiency.

Description

Post-processing method, device, equipment and medium for moving object missed detection
Technical Field
The application relates to the technical field of automatic driving, in particular to moving object detection technology, and specifically to a post-processing method, device, equipment and medium for missed detection of a moving object.
Background
In automatic driving, a 2D target detection model is used to detect targets, including both moving and stationary objects. For moving objects such as vehicles and pedestrians, the model sometimes misses detections, which affects the safety of automatic driving.
In the prior art, this problem is mainly addressed by collecting data for the bad cases that caused the missed detections and using that data to train a new model, thereby improving the model's behavior on those cases. However, collecting the data requires manual processing, which consumes considerable effort and time, and training a new model also takes a long time, leading to an overly long error correction period and reduced error correction efficiency.
Disclosure of Invention
The embodiments of the application provide a post-processing method, device, equipment and medium for missed detection of a moving object, so as to shorten the missed detection judgment period and improve the error correction efficiency for missed detections of moving objects.
In a first aspect, an embodiment of the present application provides a post-processing method for missed detection of a moving object, including:
acquiring at least one motion foreground in the current frame image by using the background model of the current frame;
obtaining a target object detection result of a target detection model for the current frame image, wherein the detection result at least includes the position and confidence of a detection frame of a target object;
and, combining the detection result, sequentially judging whether each motion foreground has a detection frame whose confidence reaches a preset threshold, and determining from the judgment result whether any motion foreground corresponds to a missed moving object.
One embodiment in the above application has the following advantages or benefits: by combining background modeling, motion foregrounds are distinguished using the background model, and it is judged whether each motion foreground has a detection frame whose attribute is a movable object, so as to determine whether the target detection model has missed a detection. No retraining of the target detection model is needed, missed detections can be judged online, the period for judging missed detection of moving objects is shortened, and moving object detection efficiency is improved.
Optionally, sequentially judging, by combining the detection result, whether each motion foreground has a detection frame whose confidence reaches a preset threshold includes:
sequentially judging whether each motion foreground has a detection frame according to the position of the detection frame of the target object and the position of the motion foreground in the current frame image;
and, for each motion foreground that has a detection frame, judging whether the confidence of that detection frame reaches the preset threshold.
Optionally, determining from the judgment result whether any motion foreground corresponds to a missed moving object includes:
if every motion foreground has a detection frame whose confidence reaches the preset threshold, determining that no moving object is missed in the current frame.
Optionally, the detection result further includes the category of the detection frame; determining from the judgment result whether any motion foreground corresponds to a missed moving object includes:
if any target motion foreground has a detection frame whose confidence does not reach the preset threshold, judging from the category of the detection frame whether its attribute is a movable object;
and if it is judged to be a movable object, determining that the target motion foreground has been missed, and outputting the detection frame of the target motion foreground.
One embodiment in the above application has the following advantages or benefits: when a detection frame exists but its confidence does not reach the preset threshold, it can be further judged whether the attribute of the detection frame is a movable object, and whether the current motion foreground has been missed is determined from that judgment. This covers the case where the target detection model judges the confidence of a detection frame inaccurately, and avoids a missed detection through the movable-object judgment.
Optionally, judging from the category of the detection frame whether its attribute is a movable object includes:
judging whether the attribute of the detection frame is a movable object according to the category of the detection frame and a pre-configured correspondence between categories and movable objects.
Optionally, determining from the judgment result whether any motion foreground corresponds to a missed moving object further includes:
if any target motion foreground has a detection frame whose confidence does not reach the preset threshold and whose attribute is a non-movable object, acquiring from the target detection model a first target feature map corresponding to the detection frame of the target motion foreground;
inputting the first target feature map into a pre-trained classification model, where the classification model judges from a feature map whether the attribute of a detection frame is a movable object and determines the category of the detection frame;
if the output of the classification model is a movable object, determining that the target motion foreground has been missed, and outputting the detection frame and category of the target motion foreground;
and if the output of the classification model is a non-movable object, determining that the target motion foreground has not been missed.
One embodiment in the above application has the following advantages or benefits: if any target motion foreground has a detection frame whose confidence does not reach the preset threshold and whose attribute is a non-movable object, the classification model can further judge whether the detection frame belongs to a movable object, which further avoids missed detections.
Optionally, determining from the judgment result whether any motion foreground corresponds to a missed moving object includes:
if any target motion foreground has no detection frame, generating a target minimum bounding box for the target motion foreground through the background model, and extracting a second target feature map of the target minimum bounding box;
inputting the second target feature map into a pre-trained classification model, where the classification model judges from a feature map whether the attribute of a minimum bounding box is a movable object and determines the category of the minimum bounding box;
if the output of the classification model is a movable object, determining that the target motion foreground has been missed, and outputting the target minimum bounding box and the category of the target motion foreground;
and if the output of the classification model is a non-movable object, determining that the target motion foreground has not been missed.
One embodiment in the above application has the following advantages or benefits: when a target motion foreground has no detection frame, the classification model can further judge whether the minimum bounding box of the motion foreground belongs to a movable object, which further avoids missed detections.
In a second aspect, an embodiment of the present application further provides a post-processing device for missed detection of a moving object, including:
a motion foreground obtaining module, configured to acquire at least one motion foreground in the current frame image by using the background model of the current frame;
a detection result obtaining module, configured to obtain a target object detection result of the target detection model for the current frame image, where the detection result at least includes the position and confidence of a detection frame of the target object;
and a missed detection determining module, configured to sequentially judge, by combining the detection result, whether each motion foreground has a detection frame whose confidence reaches a preset threshold, and to determine from the judgment result whether any motion foreground corresponds to a missed moving object.
In a third aspect, an embodiment of the present application further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the post-processing method for missed detection of a moving object according to any embodiment of the application.
In a fourth aspect, the present application further provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the post-processing method for missed detection of a moving object according to any embodiment of the present application.
One embodiment in the above application has the following advantages or benefits: by combining background modeling, motion foregrounds are distinguished using the background model, and it is judged whether each motion foreground has a detection frame whose attribute is a movable object, so as to determine whether the target detection model has missed a detection. No retraining of the target detection model is needed, missed detections can be judged online, the period for judging missed detection of moving objects is shortened, the missed detections can be corrected, and moving object detection efficiency is improved.
Other effects of the above alternatives will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be considered limiting of the present application. In the drawings:
Fig. 1 is a schematic flow chart of a post-processing method for missed detection of a moving object according to a first embodiment of the present application;
Fig. 2 is a schematic flow chart of a post-processing method for missed detection of a moving object according to a second embodiment of the present application;
Fig. 3 is a schematic structural diagram of a post-processing device for missed detection of a moving object according to a third embodiment of the present application;
Fig. 4 is a block diagram of an electronic device for implementing the post-processing method for missed detection of a moving object according to embodiments of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Fig. 1 is a schematic flow chart of a post-processing method for missed detection of a moving object according to a first embodiment of the present application. The embodiment is applicable to the field of automatic driving, and is suitable for post-processing after a target detection model has run on a frame, to determine whether a moving object has been missed and to correct the missed detection. The method can be executed by a post-processing device for missed detection of moving objects, which is implemented in software and/or hardware and is preferably configured in an electronic device, such as a computer or a server. As shown in fig. 1, the method specifically includes the following steps:
s101, acquiring at least one motion foreground in the current frame image by using the background model of the current frame.
The background model is used to distinguish the motion foreground from the static background in each frame image; the background model may be, for example, a Gaussian mixture model or a codebook model.
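The text leaves the concrete background model open (Gaussian mixture or codebook). As a minimal, dependency-light sketch of the underlying idea, the running-average model below keeps a per-pixel background estimate and marks pixels that deviate from it as motion foreground; the class name, update rate, and threshold values are illustrative assumptions, not part of the patent.

```python
import numpy as np

class RunningAverageBackground:
    """Minimal background model: exponentially weighted running average.

    A simplified stand-in for the Gaussian mixture or codebook models
    mentioned above; the update step keeps the model current as the scene
    changes, and differencing yields the motion foreground mask.
    """

    def __init__(self, first_frame, alpha=0.05, threshold=25):
        self.background = first_frame.astype(np.float64)
        self.alpha = alpha          # update rate of the background model
        self.threshold = threshold  # min intensity difference for "foreground"

    def apply(self, frame):
        frame = frame.astype(np.float64)
        # Pixels differing strongly from the background are motion foreground.
        mask = np.abs(frame - self.background) > self.threshold
        # Update the background so a parked-then-departing object fades out.
        self.background = (1 - self.alpha) * self.background + self.alpha * frame
        return mask

# Toy usage: a static 8x8 scene, then an object appears at rows 2..3, cols 2..3.
bg = RunningAverageBackground(np.zeros((8, 8)))
frame = np.zeros((8, 8))
frame[2:4, 2:4] = 200
mask = bg.apply(frame)
```

The per-frame update implements the "update the background model before acquiring the motion foreground of each frame" requirement described below.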
Moving object detection is performed on each frame image captured by the camera. The embodiment of the present application therefore takes any one frame as the current frame and uses it as an example to explain how post-processing for missed detection of a moving object is performed; other frame images can be post-processed by the same method.
It should be further noted that, since the surrounding environment in a real scene changes constantly, the background model needs to be updated before the motion foreground of each frame image is acquired. For example, suppose a car has been parked at the roadside for some historical period; during that period the background model treats the car as static background. Once the car drives away, it changes from static background to motion foreground, and the recognition of other objects in the image is also affected. Therefore, before each frame is processed, the background model is updated to obtain the real background, and at least one motion foreground in the frame image is obtained by differencing, which improves the accuracy of motion foreground identification and provides data for subsequent post-processing.
S102, obtain the target object detection result of the target detection model for the current frame image, where the detection result at least includes the position and confidence of a detection frame of the target object.
The target detection model identifies target objects in images of the surrounding environment captured by the camera. It can recognize objects in an image and give the position, category and confidence of each object's detection frame: the position represents the coordinates of the detection frame in the image and its size; the category represents the class of the object corresponding to the detection frame, such as vehicle, pedestrian, tree or building; and the confidence represents how confident the model is in the predicted category. The higher the confidence, the higher the probability that the object in the detection frame belongs to that category, and vice versa. The detection principle of the target detection model is prior art and is not described again here.
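For illustration only, the per-frame detection result described above (position, category, and confidence per detection frame) can be modeled as a small record type; the field names and box convention are hypothetical, not defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple         # (x, y, w, h): detection frame coordinates and size
    category: str      # e.g. "vehicle", "pedestrian", "tree", "building"
    confidence: float  # higher means the predicted category is more probable

# One entry of a per-frame detection result.
det = Detection(box=(40, 30, 120, 80), category="vehicle", confidence=0.92)
```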
S103, combining the detection result, sequentially judge whether each motion foreground has a detection frame whose confidence reaches a preset threshold, and determine from the judgment result whether any motion foreground corresponds to a missed moving object.
Specifically, since the detection result at least includes the position and confidence of the detection frame of each target object, it can be judged in turn whether each motion foreground has a detection frame, according to the position of each detection frame and the position of each motion foreground in the current frame image: if a detection frame's position matches a motion foreground's position, that motion foreground has a detection frame. Then, for each motion foreground that has a detection frame, it is judged whether the confidence of the detection frame reaches the preset threshold.
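The position match between a detection frame and a motion foreground can be sketched with an intersection-over-union test; the 0.5 threshold and (x, y, w, h) box convention are assumptions chosen for illustration, not values fixed by the patent.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def matching_frame(foreground_box, detection_boxes, min_iou=0.5):
    """Index of the detection frame whose position matches the motion
    foreground, or None when no frame overlaps it sufficiently."""
    best_i, best = None, min_iou
    for i, box in enumerate(detection_boxes):
        score = iou(foreground_box, box)
        if score >= best:
            best_i, best = i, score
    return best_i
```

A foreground for which `matching_frame` returns None corresponds to the "no detection frame" branch discussed below.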
If a motion foreground has a detection frame and its confidence reaches the preset threshold, the motion foreground has been recognized as a target object by the target detection model and the confidence of the recognized detection frame is credible, so the motion foreground is considered not missed. If every motion foreground has a detection frame whose confidence reaches the preset threshold, it is determined that no moving object is missed in the current frame.
Conversely, if a motion foreground has no detection frame whose confidence reaches the preset threshold, or has no detection frame at all, a missed detection is possible and the motion foreground is judged further: for example, by checking whether the attribute of the detection frame belongs to a movable object, by identifying the detection frame with a classification model, or by directly identifying the motion foreground to judge whether it belongs to a movable object. If the motion foreground belongs to a movable object, it is judged to have been missed; otherwise it has not been missed.
It should be noted that when the confidence of a detection frame reaches the preset threshold but the category the target detection model assigned to it is a stationary object such as a building, the implementation of this embodiment is not affected. The embodiment performs post-processing for missed detections of moving objects, and its core is to judge whether each motion foreground detected by the background model has a detection frame whose confidence reaches the preset threshold: if so, the foreground is determined not to be missed. Whether the category of such a high-confidence detection frame actually belongs to a movable object is a question of false detection, which this embodiment does not judge.
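Putting the steps of this embodiment together, a single post-processing pass over one frame can be sketched as follows. The center-inside position match, the 0.7 confidence threshold, and the category names are illustrative assumptions; the patent itself leaves these values open.

```python
def overlaps(fg, box):
    """Crude position match: the detection frame's center lies inside
    the motion foreground's (x, y, w, h) region."""
    fx, fy, fw, fh = fg
    bx, by, bw, bh = box
    cx, cy = bx + bw / 2.0, by + bh / 2.0
    return fx <= cx <= fx + fw and fy <= cy <= fy + fh

def missed_foregrounds(foreground_boxes, detections, conf_threshold=0.7,
                       movable=("vehicle", "pedestrian", "cyclist")):
    """Return the motion foregrounds flagged as possible missed detections.

    `detections` is a list of (box, category, confidence) tuples.
    A foreground with a high-confidence frame is never flagged; one with a
    low-confidence frame is flagged when the frame's category is movable;
    one with no frame at all is flagged for the downstream classifier.
    """
    missed = []
    for fg in foreground_boxes:
        match = next(((c, conf) for box, c, conf in detections
                      if overlaps(fg, box)), None)
        if match is None:
            missed.append(fg)                 # no detection frame at all
        elif match[1] < conf_threshold and match[0] in movable:
            missed.append(fg)                 # frame exists but is not credible
    return missed

# Toy frame: one well-detected foreground, one low-confidence one, one unseen.
flagged = missed_foregrounds(
    [(0, 0, 10, 10), (50, 50, 10, 10), (90, 90, 10, 10)],
    [((1, 1, 8, 8), "vehicle", 0.9), ((51, 51, 8, 8), "vehicle", 0.3)])
```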
According to the above technical scheme, the background model is used to identify motion foregrounds, and it is judged whether each motion foreground has a detection frame whose confidence reaches the preset threshold, so as to determine whether the target detection model has missed a detection; if every motion foreground has such a detection frame, it is determined that no moving object is missed in the current frame. The target detection model does not need to be retrained, missed detections can be judged online, the missed detection judgment period is shortened, and moving object detection efficiency is improved.
Fig. 2 is a schematic flow chart of a post-processing method for missed detection of a moving object according to a second embodiment of the present application, which is further optimized on the basis of the above embodiment. As shown in fig. 2, the method specifically includes the following steps:
s201, acquiring at least one motion foreground in the current frame image by using the background model of the current frame.
S202, obtain the target object detection result of the target detection model for the current frame image, where the detection result at least includes the position and confidence of a detection frame of the target object.
S203, take any one of the at least one motion foreground as the current motion foreground.
The embodiment of the present application contains a loop in which each motion foreground is processed in turn: any one of the at least one motion foreground is first processed as the current motion foreground, then the next motion foreground is processed as the new current motion foreground, until all motion foregrounds have been processed.
S204, judge whether the current motion foreground has a detection frame according to the position of the detection frame of the target object and the position of the current motion foreground in the current frame image; if so, execute S205, otherwise execute S213.
Whether a detection frame exists is judged first, which splits into two cases: a detection frame exists, or it does not. If a detection frame exists, its credibility must be judged further; if no detection frame exists, it cannot be directly concluded that a detection was missed, and the classification model is used for further judgment.
S205, judge whether the confidence of the detection frame of the current motion foreground reaches the preset threshold; if so, execute S206, otherwise execute S208.
If a detection frame exists and its confidence reaches the preset threshold, it is determined that the current motion foreground has not been missed and no further judgment is needed; otherwise, whether a detection was missed is further determined by judging the attribute of the detection frame.
S206, judge whether an unprocessed motion foreground remains among the at least one motion foreground; if so, execute S207 and then return to S204; otherwise, end the process.
S207, take the next motion foreground among the at least one motion foreground as the current motion foreground.
S208, judge from the category of the detection frame of the current motion foreground whether its attribute is a movable object; if so, execute S209 and then S206; if not, execute S210.
For example, if the category of the detection frame is car, its attribute is a movable object; if the category is tree, its attribute is a non-movable object; and so on.
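The pre-configured correspondence between categories and the movable-object attribute amounts to a lookup table. The entries below mix examples from the text (car, tree) with assumed ones; they are not an exhaustive configuration.

```python
# Pre-configured category -> movable-object correspondence (illustrative).
MOVABLE_BY_CATEGORY = {
    "car": True, "pedestrian": True, "cyclist": True,
    "tree": False, "building": False, "traffic_sign": False,
}

def is_movable(category):
    """Attribute lookup. Unknown categories default to non-movable here,
    though a real system might instead defer them to the classification
    model described below."""
    return MOVABLE_BY_CATEGORY.get(category, False)
```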
The attribute judgment also splits into two cases: if the attribute is a movable object, the current motion foreground is determined to have been missed; otherwise, further judgment through the classification model is needed.
S209, determine that the current motion foreground has been missed, and output the detection frame of the current motion foreground.
When a motion foreground is determined to have been missed, the detection frame of the current motion foreground is output so as to correct the missed detection.
S210, acquire from the target detection model a first target feature map corresponding to the detection frame of the current motion foreground.
S211, input the first target feature map into a pre-trained classification model, where the classification model judges from a feature map whether the attribute of a detection frame is a movable object and determines the category of the detection frame; if the attribute is a movable object, execute S212 and then return to S206, and if it is a non-movable object, return to S206 directly.
The target detection model may be a neural network, for example a convolutional neural network, and the feature map is the output of a convolutional layer; obtaining the feature map from the target detection model is prior art and is not described again here.
The classification model may be a multi-class model based on an SVM (Support Vector Machine), used specifically to determine the class of objects of uncertain type. In the embodiment of the application, the acquired first target feature map corresponds to a target of uncertain type, so the model determines whether the attribute of the detection frame is a movable object. The data used to train the classification model may include feature maps corresponding to detection frames together with labels of each detection frame's true category, so that the category of a detection frame can be judged from its feature map and its movable-object attribute determined. When collecting this data, care should be taken to keep the number of samples balanced across categories, so as to improve the accuracy of category identification.
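The text specifies an SVM-based multi-class model. To keep this sketch dependency-free, the nearest-centroid classifier below merely stands in for it, showing the same fit/predict interface over flattened feature vectors with balanced per-class training data; it is not the SVM the patent describes.

```python
import numpy as np

class NearestCentroidClassifier:
    """Dependency-free stand-in for the SVM-based multi-class model: each
    class is represented by the mean of its training feature vectors, and
    prediction returns the class with the closest centroid."""

    def fit(self, features, labels):
        self.classes_ = sorted(set(labels))
        feats = np.asarray(features, dtype=float)
        labs = np.asarray(labels)
        self.centroids_ = np.stack(
            [feats[labs == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, feature):
        dists = np.linalg.norm(
            self.centroids_ - np.asarray(feature, dtype=float), axis=1)
        return self.classes_[int(np.argmin(dists))]

# Balanced toy data, as the text advises keeping per-class counts even.
clf = NearestCentroidClassifier().fit(
    [[0.0, 0.1], [0.1, 0.0], [1.0, 0.9], [0.9, 1.0]],
    ["vehicle", "vehicle", "building", "building"])
```

The predicted category can then be mapped to the movable/non-movable attribute via the pre-configured correspondence table.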
The output of the classification model splits into two cases: if the classification model judges the detection frame to be a movable object, the current motion foreground has been missed; otherwise it has not been missed and needs no further judgment.
S212, determine that the current motion foreground has been missed, and output the detection frame and category of the current motion foreground.
S213, generate a target minimum bounding box for the current motion foreground through the background model, and extract a second target feature map of the target minimum bounding box.
S214, input the second target feature map into a pre-trained classification model, where the classification model judges from a feature map whether the attribute of a minimum bounding box is a movable object and determines the category of the minimum bounding box; if the attribute is judged to be a movable object, execute S215 and then return to S206, and if it is judged to be a non-movable object, return to S206 directly.
In S204, if it is judged that the current motion foreground has no detection frame, the current motion foreground was not detected as a target object by the target detection model; it is therefore necessary to further verify, through the classification model, whether the current motion foreground is a movable object, so as to accurately determine whether it has been missed.
Specifically, a target minimum bounding box of the current moving foreground can be generated through the background model, a second target feature map of the target minimum bounding box is extracted, the second target feature map is recognized by the classification model, the class of the corresponding target minimum bounding box is determined, and it is thereby determined whether the attribute of the target minimum bounding box is a movable object. Generating the minimum bounding box through the background model and extracting its feature map belong to the prior art and are not described again here.
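The minimum bounding box of a moving foreground can be sketched as the tightest axis-aligned rectangle around the nonzero pixels of the foreground mask. This is an illustrative implementation only; the patent treats the step as prior art, and the mask representation here (a list of rows) is an assumption.

```python
def min_bounding_box(mask):
    """Return the minimum axis-aligned bounding box (x_min, y_min, x_max, y_max)
    of the nonzero pixels of a binary foreground mask (a list of rows),
    or None if the mask is empty."""
    coords = [(x, y) for y, row in enumerate(mask)
                     for x, value in enumerate(row) if value]
    if not coords:
        return None
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return (min(xs), min(ys), max(xs), max(ys))
```

In practice a library routine (e.g., a connected-components pass followed by a bounding-rectangle computation) would be used on the background model's foreground mask; the logic is the same.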
Similarly, the classification model may be a multi-class model based on an SVM; it may be the same classification model used in S211, or another model dedicated to determining the class of objects of uncertain type. In this embodiment of the application, the obtained second target feature map corresponds to a target of uncertain type, so the classification model is used to determine whether the attribute of the minimum bounding box is a movable object. The data used to train the classification model may include feature maps of the minimum bounding boxes of moving foregrounds generated by the background model and labels of the bounding boxes' true classes, so that the class of a minimum bounding box can be judged from its feature map and it can be determined whether its attribute is a movable object. When collecting the training data, care should likewise be taken to keep the number of samples balanced across classes, which improves the accuracy of class recognition.
The output of the classification model again falls into two cases: if the minimum bounding box is judged to be a movable object by the classification model, the current moving foreground has been missed; otherwise, the current moving foreground has not been missed and requires no further judgment, and the process returns to S206 to handle the next moving foreground.
S215, determining that the current moving foreground has been missed, and outputting the target minimum bounding box and class of the current moving foreground.
According to the technical scheme of this embodiment, the background model is combined with the target detection model: the background model identifies the moving foregrounds, and whether each moving foreground has a detection frame whose confidence reaches the preset threshold is judged, so as to determine whether the target detection model has missed any detection. If every moving foreground has a detection frame whose confidence reaches the preset threshold, it is determined that no moving object has been missed in the current frame. In the remaining cases, it is judged whether the current moving foreground has a detection frame whose attribute is a movable object, or whether the current moving foreground itself belongs to a movable object, so as to further decide whether it has been missed. Missed detections are thus judged more comprehensively, from multiple angles, without retraining the target detection model; the judgment can be performed online, the cycle of missed-detection judgment for moving objects is shortened, missed detections can be corrected, and the efficiency and accuracy of moving object detection are improved.
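The per-foreground branching summarized above (steps S206 through S215) can be sketched as a single decision function. All names, the dictionary layout, and the concrete threshold value are illustrative assumptions; the patent only speaks of a "preset threshold" and a pre-trained classification model.

```python
CONF_THRESHOLD = 0.5  # illustrative value; the patent only calls it a "preset threshold"

def check_missed_detection(foreground, classify):
    """Decide whether one moving foreground was missed. `foreground` is a dict
    with an optional 'box' entry (holding 'confidence', 'movable', and
    'feature_map'); a foreground with no box instead carries the feature map of
    its minimum bounding box. `classify` stands in for the pre-trained
    classification model and returns (is_movable, category)."""
    box = foreground.get("box")
    if box is None:
        # No detection frame: classify the minimum bounding box's feature map.
        is_movable, _ = classify(foreground["feature_map"])
        return is_movable  # missed iff a movable object was overlooked
    if box["confidence"] >= CONF_THRESHOLD:
        return False  # confidently detected, so not missed
    if box.get("movable"):
        return True   # low confidence but movable class: missed
    # Low confidence and a non-movable class: fall back to the classifier.
    is_movable, _ = classify(box["feature_map"])
    return is_movable
```

A caller would loop this function over every moving foreground of the current frame and output the detection frame or minimum bounding box (with its class) for each foreground judged missed.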
Fig. 3 is a schematic structural diagram of a post-processing apparatus for missed detection of a moving object according to a third embodiment of the present application. The apparatus is applicable to the field of automatic driving and is adapted to perform post-processing after a target detection model detects moving objects, so as to determine whether any moving object has been missed and to correct such missed detections. The apparatus can implement the post-processing method for missed detection of a moving object in any embodiment of the present application. As shown in fig. 3, the apparatus 300 specifically includes:
a motion foreground obtaining module 301, configured to obtain at least one motion foreground in the current frame image by using the background model of the current frame;
a detection result obtaining module 302, configured to obtain a target object detection result of a target detection model for the current frame image, where the detection result at least includes a position and a confidence of a detection frame of a target object;
and the missed detection determining module 303 is configured to sequentially determine, in combination with the detection result, whether each motion foreground has a detection frame whose confidence reaches a preset threshold, and determine whether each motion foreground has a missed detection of the moving object according to the determination result.
Optionally, the missed detection determining module 303 includes a detection frame determining unit, where the detection frame determining unit is specifically configured to:
sequentially judging whether each motion foreground has a detection frame or not according to the position of the detection frame of the target object and the position of the motion foreground in the current frame image;
and judging whether the confidence of the detection frame reaches a preset threshold value or not for the motion foreground with the detection frame.
Optionally, the missed detection determining module includes a first missed detection determining unit, and is specifically configured to:
and if the detection frame with the confidence coefficient reaching a preset threshold exists in each motion foreground, determining that no moving object missing detection exists in the current frame.
Optionally, the detection result further includes a category of the detection frame;
the missed detection determining module comprises a second missed detection determining unit, and is specifically configured to:
if any target motion foreground has a detection frame and the confidence coefficient of the detection frame does not reach a preset threshold value, judging whether the attribute of the detection frame is a movable object or not according to the category of the detection frame;
and if the detection frame is judged to be a movable object, determining that the target moving foreground has been missed, and outputting the detection frame of the target moving foreground.
Optionally, when the second missed-detection determining unit determines whether the attribute of the detection frame is a movable object according to the type of the detection frame, specifically:
and judging whether the attribute of the detection frame is a movable object or not according to the type of the detection frame and the corresponding relation between the pre-configured type and the movable object.
Optionally, the missed detection determining module further includes a third missed detection determining unit, specifically configured to:
if any target moving foreground has a detection frame, the confidence of the detection frame does not reach the preset threshold, and the attribute of the detection frame is a non-movable object, acquiring, from the target detection model, a first target feature map corresponding to the detection frame of the target moving foreground;
inputting the first target feature map into a pre-trained classification model, where the classification model is used for judging, according to the feature map, whether the attribute of the detection frame is a movable object and for determining the class of the detection frame;
if the output result of the classification model is a movable object, determining that the target moving foreground has been missed, and outputting the detection frame and class of the target moving foreground;
and if the output result of the classification model is a non-movable object, determining that the target moving foreground has not been missed.
Optionally, the missed detection determining module includes a fourth missed detection determining unit, and is specifically configured to:
if any target moving foreground does not have a detection frame, generating a target minimum bounding box of the target moving foreground through the background model, and extracting a second target feature map of the target minimum bounding box;
inputting the second target feature map into a pre-trained classification model, where the classification model is used for judging, according to the feature map, whether the attribute of the minimum bounding box is a movable object and for determining the class of the minimum bounding box;
if the output result of the classification model is a movable object, determining that the target moving foreground has been missed, and outputting the target minimum bounding box and class of the target moving foreground;
and if the output result of the classification model is a non-movable object, determining that the target moving foreground has not been missed.
The post-processing apparatus 300 for missed detection of a moving object provided in this embodiment of the application can execute the post-processing method for missed detection of a moving object provided in any embodiment of the application, and has the corresponding functional modules and beneficial effects. For details not described in this embodiment, reference may be made to the description of any method embodiment of the present application.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 4, which is a block diagram of an electronic device for the post-processing method for missed detection of a moving object according to an embodiment of the present application, electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in fig. 4, the electronic device includes: one or more processors 401, a memory 402, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing some of the necessary operations (e.g., as an array of servers, a group of blade servers, or a multi-processor system). In fig. 4, one processor 401 is taken as an example.
The memory 402 is a non-transitory computer-readable storage medium provided herein. The memory stores instructions executable by at least one processor, so that the at least one processor executes the post-processing method for missed detection of a moving object provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the post-processing method for missed detection of a moving object provided herein.
The memory 402, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the post-processing method for missed detection of a moving object in the embodiment of the present application (for example, the motion foreground acquiring module 301, the detection result acquiring module 302, and the missed detection determining module 303 shown in fig. 3). The processor 401 executes various functional applications and data processing of the server by running the non-transitory software programs, instructions, and modules stored in the memory 402, that is, implements the post-processing method for missed detection of a moving object in the above method embodiment.
The memory 402 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the electronic device implementing the post-processing method for missed detection of a moving object. Further, the memory 402 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 402 may optionally include memories remotely disposed with respect to the processor 401, and these remote memories may be connected via a network to the electronic device implementing the post-processing method for missed detection of a moving object. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device implementing the post-processing method for the moving object missed detection in the embodiment of the application may further include: an input device 403 and an output device 404. The processor 401, memory 402, input device 403, and output device 404 may be connected by a bus or other means, as exemplified by the bus connection in fig. 4.
The input device 403 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device implementing the post-processing method for missed detection of a moving object; examples of such input devices include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, and a joystick. The output device 404 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special- or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, background modeling is combined with target detection: the background model distinguishes the moving foregrounds, and it is judged whether each moving foreground has a detection frame whose attribute is a movable object, so as to determine whether the target detection model has missed any detection. The target detection model does not need to be retrained, missed detections can be judged online, the cycle of missed-detection judgment for moving objects is shortened, missed detections can be corrected, and the efficiency of moving object detection is improved.
It should be understood that the flows shown above may be used in various forms, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; this is not limited herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (18)

1. A post-processing method for moving object omission is characterized by comprising the following steps:
acquiring at least one motion foreground in the current frame image by using the background model of the current frame;
obtaining a target object detection result of a target detection model for the current frame image, wherein the detection result at least comprises the position and the confidence coefficient of a detection frame of a target object;
sequentially judging whether each motion foreground has a detection frame with a confidence coefficient reaching a preset threshold value or not by combining the detection result, and determining whether each motion foreground has a moving object missing detection or not according to the judgment result;
wherein the detection result further comprises the category of the detection frame; the determining whether the moving objects in each moving foreground are missed according to the judgment result comprises the following steps:
if any target motion foreground has a detection frame and the confidence of the detection frame does not reach a preset threshold, judging whether the attribute of the detection frame is a movable object according to the category of the detection frame; if the detection frame is judged to be a movable object, determining that the target moving foreground has been missed, and outputting the detection frame of the target moving foreground;
if any target moving foreground has a detection frame, the confidence coefficient of the detection frame does not reach a preset threshold value, and the attribute of the detection frame is a non-movable object, acquiring a first target characteristic diagram corresponding to the detection frame of the target moving foreground from the target detection model;
inputting the first target feature map into a classification model trained in advance, wherein the classification model is used for judging whether the attribute of the detection frame is a movable object or not according to the feature map and determining the category of the detection frame;
if the output result of the classification model is a movable object, determining that the target moving foreground has been missed, and outputting the detection frame and the category of the target moving foreground; and if the output result of the classification model is a non-movable object, determining that the target motion foreground has not been missed.
2. The method according to claim 1, wherein the sequentially determining whether each motion foreground has a detection frame whose confidence level reaches a preset threshold value in combination with the detection result comprises:
sequentially judging whether each motion foreground has a detection frame or not according to the position of the detection frame of the target object and the position of the motion foreground in the current frame image;
and, for the motion foreground having a detection frame, judging whether the confidence of the detection frame reaches a preset threshold.
3. The method of claim 1, wherein the determining whether the moving objects are missed in each moving foreground according to the determination result comprises:
and if the detection frame with the confidence coefficient reaching a preset threshold exists in each motion foreground, determining that no moving object missing detection exists in the current frame.
4. The method of claim 1, wherein determining whether the attribute of the detection frame is a movable object according to the type of the detection frame comprises:
and judging whether the attribute of the detection frame is a movable object or not according to the type of the detection frame and the corresponding relation between the pre-configured type and the movable object.
5. A post-processing method for the omission of moving objects is characterized by comprising the following steps:
acquiring at least one motion foreground in the current frame image by using the background model of the current frame;
obtaining a target object detection result of a target detection model aiming at the current frame image, wherein the detection result at least comprises the position and the confidence degree of a detection frame of a target object;
sequentially judging whether each motion foreground has a detection frame with a confidence coefficient reaching a preset threshold value or not by combining the detection result, and determining whether each motion foreground has a motion object missing detection or not according to the judgment result;
determining whether moving objects exist in each moving foreground and are missed to be detected according to the judgment result comprises the following steps:
if any target moving foreground does not have a detection frame, generating a target minimum bounding box of the target moving foreground through the background model, and extracting a second target feature map of the target minimum bounding box;
inputting the second target feature map into a pre-trained classification model, where the classification model is used for judging, according to the feature map, whether the attribute of the minimum bounding box is a movable object and for determining the category of the minimum bounding box;
if the output result of the classification model is a movable object, determining that the target moving foreground has been missed, and outputting the target minimum bounding box and the category of the target moving foreground;
and if the output result of the classification model is a non-movable object, determining that the target motion foreground has not been missed.
6. The method according to claim 5, wherein the sequentially determining whether each motion foreground has a detection frame whose confidence level reaches a preset threshold value in combination with the detection result comprises:
sequentially judging whether each motion foreground has a detection frame or not according to the position of the detection frame of the target object and the position of the motion foreground in the current frame image;
and, for the motion foreground having a detection frame, judging whether the confidence of the detection frame reaches a preset threshold.
7. The method of claim 5, wherein the determining whether the moving objects are missed in each moving foreground according to the determination result comprises:
and if the detection frame with the confidence coefficient reaching a preset threshold exists in each motion foreground, determining that no moving object missing detection exists in the current frame.
8. The method of claim 5, wherein the determining whether the attribute of the detection frame is a movable object according to the type of the detection frame comprises:
and judging whether the attribute of the detection frame is a movable object or not according to the type of the detection frame and the corresponding relation between the pre-configured type and the movable object.
9. A post-processing device for the missed detection of a moving object is characterized by comprising:
the motion foreground obtaining module is used for obtaining at least one motion foreground in the current frame image by utilizing the background model of the current frame;
a detection result obtaining module, configured to obtain a target object detection result of the target detection model for the current frame image, where the detection result at least includes a position and a confidence of a detection frame of the target object;
the missed detection determining module is used for sequentially judging whether each motion foreground has a detection frame with the confidence coefficient reaching a preset threshold value or not by combining the detection result, and determining whether each motion foreground has the missed detection of the moving object or not according to the judgment result;
wherein the detection result further comprises the category of the detection frame;
the missed detection determining module comprises a second missed detection determining unit, and is specifically configured to:
if any target motion foreground has a detection frame and the confidence of the detection frame does not reach a preset threshold, judging whether the attribute of the detection frame is a movable object according to the category of the detection frame; if the detection frame is judged to be a movable object, determining that the target moving foreground has been missed, and outputting the detection frame of the target moving foreground;
the missed-detection determining module further includes a third missed-detection determining unit, specifically configured to:
if any target moving foreground has a detection frame, the confidence of the detection frame does not reach the preset threshold, and the attribute of the detection frame is a non-movable object, acquiring, from the target detection model, a first target feature map corresponding to the detection frame of the target moving foreground; inputting the first target feature map into a pre-trained classification model, where the classification model is used for judging, according to the feature map, whether the attribute of the detection frame is a movable object and for determining the category of the detection frame; if the output result of the classification model is a movable object, determining that the target motion foreground has been missed, and outputting the detection frame and the category of the target motion foreground; and if the output result of the classification model is a non-movable object, determining that the target motion foreground has not been missed.
10. The apparatus according to claim 9, wherein the missed detection determining module comprises a detection frame determining unit, and the detection frame determining unit is specifically configured to:
sequentially judging whether each motion foreground has a detection frame or not according to the position of the detection frame of the target object and the position of the motion foreground in the current frame image;
and judging whether the confidence of the detection frame reaches a preset threshold value or not for the motion foreground with the detection frame.
11. The apparatus according to claim 9, wherein the missed detection determining module comprises a first missed detection determining unit, specifically configured to:
and if the detection frame with the confidence coefficient reaching a preset threshold exists in each motion foreground, determining that no moving object missing detection exists in the current frame.
12. The apparatus according to claim 9, wherein the second missed detection determining unit, when determining whether the attribute of the detection frame is a movable object according to the category of the detection frame, specifically:
and judging whether the attribute of the detection frame is a movable object or not according to the type of the detection frame and the corresponding relation between the pre-configured type and the movable object.
13. A post-processing device for the missed detection of a moving object is characterized by comprising:
the motion foreground obtaining module is used for obtaining at least one motion foreground in the current frame image by utilizing the background model of the current frame;
a detection result obtaining module, configured to obtain a target object detection result of the target detection model for the current frame image, where the detection result at least includes a position and a confidence of a detection frame of the target object;
the missed detection determining module is used for sequentially judging whether each motion foreground has a detection frame with the confidence coefficient reaching a preset threshold value or not by combining the detection result, and determining whether each motion foreground has the missed detection of the moving object or not according to the judgment result;
wherein the missed detection determining module comprises a fourth missed detection determining unit specifically configured to:
if any target motion foreground has no detection frame, generate a target minimum bounding box of the target motion foreground through the background model, and extract a second target feature map of the target minimum bounding box; input the second target feature map into a pre-trained classification model, wherein the classification model is configured to judge, according to the feature map, whether the attribute of the minimum bounding box is a movable object and to determine the category of the minimum bounding box; if the output result of the classification model is a movable object, determine that the target motion foreground is missed in detection, and output the target minimum bounding box and the category of the target motion foreground; and if the output result of the classification model is a non-movable object, determine that the target motion foreground is not missed in detection.
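The decision flow of the fourth missed detection determining unit can be sketched as follows. The mask-to-bounding-box step and the `classify` interface are illustrative assumptions standing in for the background model and the pre-trained classification model recited in the claim:

```python
def min_bounding_box(mask_pixels):
    """Minimum axis-aligned bounding box of a foreground mask given as
    (x, y) pixel coordinates; stands in for the box the background
    model would generate for the target motion foreground."""
    xs = [x for x, _ in mask_pixels]
    ys = [y for _, y in mask_pixels]
    return (min(xs), min(ys), max(xs), max(ys))

def handle_unmatched_foreground(mask_pixels, crop_feature, classify):
    """For a target motion foreground with no detection frame, classify
    its minimum bounding box. `classify(feature_map)` is assumed to
    return (is_movable, category), mirroring the claimed model."""
    box = min_bounding_box(mask_pixels)
    feature_map = crop_feature(box)  # second target feature map
    is_movable, category = classify(feature_map)
    if is_movable:
        # Missed detection: output the box and category of the foreground.
        return {"missed": True, "box": box, "category": category}
    return {"missed": False}
```

With stub functions for `crop_feature` and `classify`, the unit reduces to this branch: a movable classification reports a missed detection with its box and category; a non-movable one reports none.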
14. The apparatus according to claim 13, wherein the missed detection determining module comprises a detection frame determining unit specifically configured to:
sequentially determine, for each motion foreground, whether a detection frame exists for it according to the position of the detection frame of the target object and the position of the motion foreground in the current frame image; and
for a motion foreground having a detection frame, determine whether the confidence of the detection frame reaches a preset threshold.
15. The apparatus according to claim 13, wherein the missed detection determining module comprises a first missed detection determining unit configured to:
if each motion foreground has a detection frame whose confidence reaches the preset threshold, determine that no moving object is missed in the current frame.
16. The apparatus according to claim 13, wherein the second missed detection determining unit, when determining whether the attribute of the detection frame is a movable object according to the category of the detection frame, is specifically configured to:
determine whether the attribute of the detection frame is a movable object according to the category of the detection frame and a pre-configured correspondence between categories and movable objects.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the post-processing method for moving object missed detection according to any one of claims 1 to 8.
18. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the post-processing method for moving object missed detection according to any one of claims 1 to 8.
CN201910959572.5A 2019-10-10 2019-10-10 Post-processing method, device, equipment and medium for moving object missed detection Active CN110717933B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910959572.5A CN110717933B (en) 2019-10-10 2019-10-10 Post-processing method, device, equipment and medium for moving object missed detection


Publications (2)

Publication Number Publication Date
CN110717933A CN110717933A (en) 2020-01-21
CN110717933B true CN110717933B (en) 2023-02-07

Family

ID=69211386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910959572.5A Active CN110717933B (en) 2019-10-10 2019-10-10 Post-processing method, device, equipment and medium for moving object missed detection

Country Status (1)

Country Link
CN (1) CN110717933B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111507327B (en) * 2020-04-07 2023-04-14 浙江大华技术股份有限公司 Target detection method and device
CN112464880B (en) * 2020-12-11 2021-09-21 东莞先知大数据有限公司 Night foreign body detection method, device, medium and equipment
CN113095288A (en) * 2021-04-30 2021-07-09 浙江吉利控股集团有限公司 Obstacle missing detection repairing method, device, equipment and storage medium
CN113838110B (en) * 2021-09-08 2023-09-05 重庆紫光华山智安科技有限公司 Verification method and device for target detection result, storage medium and electronic equipment
CN113554008B (en) * 2021-09-18 2021-12-31 深圳市安软慧视科技有限公司 Method and device for detecting static object in area, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2013263838A1 (en) * 2013-11-29 2015-06-18 Canon Kabushiki Kaisha Method, apparatus and system for classifying visual elements
CN108509876A (en) * 2018-03-16 2018-09-07 深圳市商汤科技有限公司 For the object detecting method of video, device, equipment, storage medium and program
CN108961316A (en) * 2017-05-23 2018-12-07 华为技术有限公司 Image processing method, device and server
CN110069961A (en) * 2018-01-24 2019-07-30 北京京东尚科信息技术有限公司 A kind of object detecting method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103985142A (en) * 2014-05-30 2014-08-13 上海交通大学 Federated data association Mean Shift multi-target tracking method




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211020

Address after: 105 / F, building 1, No. 10, Shangdi 10th Street, Haidian District, Beijing 100085

Applicant after: Apollo Intelligent Technology (Beijing) Co.,Ltd.

Address before: 2 / F, baidu building, 10 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

GR01 Patent grant