WO2022017140A1 - Target detection method and apparatus, electronic device, and storage medium - Google Patents

Target detection method and apparatus, electronic device, and storage medium

Info

Publication number: WO2022017140A1
Authority: WIPO (PCT)
Prior art keywords: result, target, detection, correction, target object
Application number: PCT/CN2021/103286
Other languages: English (en), French (fr)
Inventors: 鲍虎军, 周晓巍, 孙佳明, 谢一鸣, 张思宇
Original assignee: 浙江商汤科技开发有限公司 (Zhejiang SenseTime Technology Development Co., Ltd.)
Application filed by 浙江商汤科技开发有限公司
Publication of WO2022017140A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; scene-specific elements
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection

Definitions

  • Embodiments of the present disclosure relate to the technical field of computer vision, and in particular, to a method and apparatus for target detection, an electronic device, and a storage medium.
  • Computer vision technology can simulate biological vision through electronic equipment. With the development of computer vision technology, more and more work can be done by electronic equipment, providing convenience to people.
  • Object detection is an important task in computer vision, and its task goal is to estimate the position information of objects within the field of view. Stable object detection techniques can not only estimate the position information of objects, but also help to optimize the pose of the camera or be used in the development of other applications such as augmented reality and indoor navigation.
  • In view of this, the embodiments of the present disclosure propose a technical solution for target detection.
  • In one aspect, a target detection method is provided, comprising: acquiring a first detection result obtained by performing target detection on a current data frame of a target scene; updating the first detection result based on a historical optimization result of the target scene to obtain a first observation result of a target object in the current data frame; and correcting the first observation result according to point cloud data corresponding to the first observation result to obtain a first correction result of the target object.
  • In a possible implementation, the updating of the first detection result based on the historical optimization result of the target scene to obtain the first observation result of the target object in the current data frame includes: determining object information of the first detection result based on the historical optimization result of the target scene, where the object information is used to identify the target object; and updating the first detection result according to the object information of the first detection result to obtain the first observation result of the target object in the current data frame. In this way, by determining the object information of the first detection result, a connection between the historical optimization result and the first detection result can be established, thereby improving the accuracy of target detection.
  • In a possible implementation, the determining of the object information of the first detection result based on the historical optimization result of the target scene includes: matching the historical optimization result of the target scene with the first detection result; and, if the first detection result matches the historical optimization result, determining the object information of the historical optimization result as the object information of the first detection result. In this way, the first detection result can be further updated to obtain a first observation result with accurate object information.
  • In a possible implementation, the determining of the object information of the first detection result based on the historical optimization result of the target scene includes: setting new object information for the first detection result in the case that the first detection result does not match the historical optimization result. In this way, the first detection result can be made to correspond to the newly observed target object.
  • In a possible implementation, the matching of the historical optimization result of the target scene with the first detection result includes: determining the first volume of the overlap between the detection frame of the first detection result and the detection frame of a historical optimization result, and determining the total volume occupied by the detection frame of the first detection result and the detection frame of the historical optimization result; and determining the matching degree between the first detection result and the historical optimization result according to the ratio of the first volume to the total volume. In this way, the matching degree between the detection result and the historical optimization result can be determined more accurately.
  • In a possible implementation, the updating of the first detection result based on the historical optimization result of the target scene to obtain the first observation result of the target object in the current data frame includes: based on the historical optimization result of the target scene, in the case that it is determined that the current data frame contains a target object not detected by the first detection result, determining the historical optimization result of the undetected target object as the first observation result of the undetected target object in the current data frame. In this way, missed detections can be reduced, and the reliability of target detection can be greatly increased.
  • In a possible implementation, the correcting of the first observation result according to the point cloud data corresponding to the first observation result to obtain the first correction result of the target object includes: merging the point cloud data corresponding to the historical optimization result of the same target object with the point cloud data corresponding to the first observation result to obtain merged point cloud data; and obtaining, based on the merged point cloud data, a first correction result by correcting the first observation result. In this way, the merged point cloud data of the same object is used to obtain a first correction result with more accurate position information, and the historical information of the same target object can be considered in the target detection process, which can improve the accuracy of target detection.
  • In a possible implementation, the merging of the point cloud data corresponding to the historical optimization result of the same target object with the point cloud data corresponding to the first observation result includes: for the same target object, merging the point cloud data corresponding to the historical optimization result of the previous data frame of the current data frame with the point cloud data corresponding to the first observation result. In this way, the first observation result of the current data frame is corrected using the historical optimization result of the previous data frame, so that the obtained first correction result is more accurate.
  • In a possible implementation, the method further includes: acquiring correction results of the target object, where the correction results include the first correction result and second correction results, and the second correction results are obtained by performing target detection based on historical data frames of the target scene; and determining, based on the target results among the correction results, the current optimization result of the target object. In this way, the current optimization result of the target object can be obtained using multiple correction results, making the target detection more accurate.
  • In a possible implementation, the method further includes: determining errors between a first correction result among the correction results and a plurality of second correction results, where the first correction result is any one of the correction results and the second correction results are the correction results other than the first correction result; counting the number of inliers corresponding to the first correction result, where the number of inliers is the number of second correction results whose error from the first correction result is less than an error threshold; and determining the target results among the correction results according to the number of inliers corresponding to the first correction result. In this way, the current optimization result of the target object is determined from the relatively accurate target results among the correction results, and correction results of lower accuracy are removed, further improving the accuracy of target detection.
  • In a possible implementation, the determining of the target results among the correction results according to the number of inliers corresponding to the first correction result includes: determining, among the plurality of first correction results, the first correction result with the largest number of inliers; and determining the first correction result with the largest number of inliers, together with the second correction results whose error from it is less than the error threshold, as the target results among the correction results. In this way, the first correction result of the target object can be further optimized, so that the current optimization result obtained after optimization more accurately indicates the position of the target object.
  • In a possible implementation, the sum of the errors between the current optimization result and the plurality of target results is minimized.
  • In one aspect, a target detection apparatus is provided, comprising: an acquisition module configured to acquire a first detection result obtained by performing target detection on a current data frame of a target scene; a determination module configured to update the first detection result based on the historical optimization result of the target scene to obtain a first observation result of the target object in the current data frame; and a correction module configured to correct the first observation result according to the point cloud data corresponding to the first observation result to obtain a first correction result of the target object.
  • In one aspect, an electronic device is provided, including: a processor; and a memory for storing processor-executable instructions; where the processor is configured to execute the above target detection method.
  • In one aspect, a computer-readable storage medium is provided, having computer program instructions stored thereon, where the computer program instructions, when executed by a processor, implement the above target detection method.
  • An embodiment of the present disclosure further provides a computer program, including computer-readable code, where, when the computer-readable code is executed in an electronic device, a processor in the electronic device executes instructions configured to implement any of the above target detection methods.
  • In the embodiments of the present disclosure, the first detection result obtained by performing target detection on the current data frame of the target scene may be obtained, and the first detection result is then updated based on the historical optimization result of the target scene to obtain the first observation result of the target object in the current data frame.
  • The first observation result is corrected according to the historical optimization result of the target object and the point cloud data corresponding to the first observation result, so as to obtain the first correction result of the target object.
  • In this way, the first detection result of the target scene can be combined with the historical optimization result, and the correlation between the first detection result and the historical optimization result can be taken into account, so that the obtained first correction result more accurately represents the position of the target object.
  • FIG. 1A is a schematic diagram of a system architecture to which a target detection method according to an embodiment of the present disclosure can be applied;
  • FIG. 1B shows a flowchart of a target detection method according to an embodiment of the present disclosure.
  • FIG. 2 shows a flowchart of an example of a target detection method according to an embodiment of the present disclosure.
  • FIG. 3 shows a block diagram of a target detection apparatus according to an embodiment of the present disclosure.
  • FIG. 4 shows a block diagram of an example of an electronic device according to an embodiment of the present disclosure.
  • FIG. 5 shows a block diagram of an example of an electronic device according to an embodiment of the present disclosure.
  • The target detection solution provided by the embodiments of the present disclosure can obtain the first detection result obtained by performing target detection on the current data frame of the target scene, and then update the first detection result based on the historical optimization result of the target scene to obtain the first observation result of the target object in the current data frame.
  • The first observation result is corrected according to the historical optimization result of the target object and the point cloud data corresponding to the first observation result, and a first correction result obtained by correcting the first observation result is obtained.
  • In this way, the first observation result obtained by combining the first detection result and the historical optimization result can more accurately indicate the target object in the current data frame; further, through the point cloud data corresponding to the historical optimization result and the first observation result, the first observation result can be adjusted so that the first correction result more accurately indicates the position of the target object.
  • In the related art, target detection is usually performed separately on each data frame collected from the target scene.
  • This manner of target detection has great limitations: for example, the detection results for the same object may jitter between frames, and when a target object in the target scene is occluded or truncated, it is difficult to accurately estimate its position, so the accuracy of the detection result is poor.
  • the embodiment of the present disclosure can combine the first detection result of the current data frame of the target scene with the historical optimization result, so that the temporal continuity of the position of the same target object can be considered, and the accuracy of estimating the position of the target object can be improved.
  • The technical solutions provided by the embodiments of the present disclosure can be applied to application scenarios such as target detection, target tracking, positioning, and navigation, and extensions thereof, which are not limited in the embodiments of the present disclosure.
  • For example, augmented reality technology applied on a terminal can realize indoor positioning and/or indoor navigation by obtaining the first correction result of the target object in an indoor scene.
  • FIG. 1A is a schematic diagram of a system architecture to which a target detection method according to an embodiment of the present disclosure can be applied; as shown in FIG. 1A , the system architecture includes a data frame collection terminal 131 , a network 132 and a target detection terminal 133 .
  • the data frame collection terminal 131 and the target detection terminal 133 may establish a communication connection through the network 132, and the data frame collection terminal 131 sends the collected current data frame to the target detection terminal 133 through the network 132.
  • Upon receiving the current data frame, the target detection terminal 133 first obtains the first detection result by performing target detection on the current data frame; it then updates the first detection result using the historical optimization result to obtain the first observation result of the target object; finally, the first observation result is corrected using the corresponding point cloud data, and the final correction result of the target object is obtained.
  • In this way, the first detection result of the target scene is combined with the historical optimization result, and the correlation between the first detection result and the historical optimization result is considered, so that the position of the target object can be represented more accurately.
  • the data frame acquisition terminal 131 may be an image acquisition device with a camera, and the target detection terminal 133 may include a computer device with certain computing capabilities, such as a terminal device or a server or other processing device.
  • the network 132 can be wired or wireless.
  • When the target detection terminal 133 is a server, the data frame acquisition terminal 131 can be connected to the server through a wired connection, for example, performing data communication through a bus; when the target detection terminal 133 is a terminal device, the data frame acquisition terminal 131 can be connected to the target detection terminal 133 through a wireless connection and then perform data communication.
  • the target detection terminal 133 may be a vision processing device with a video capture module, or a host with a camera.
  • the target detection method in the embodiment of the present disclosure may be executed by the target detection terminal, and the above-mentioned system architecture may not include the network 132 and the data frame collection terminal 131 .
  • FIG. 1B shows a flowchart of a target detection method according to an embodiment of the present disclosure.
  • The target detection method can be performed by a terminal device, a server, or another type of electronic device, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, etc.
  • In some possible implementations, the target detection method may be implemented by a processor calling computer-readable instructions stored in a memory.
  • the target detection method of the embodiment of the present disclosure will be described below by taking an electronic device as an execution subject as an example.
  • Step S11: acquire a first detection result obtained by performing target detection on the current data frame of the target scene.
  • the electronic device may perform data collection on the target scene to obtain the current data frame of the target scene, or the electronic device may obtain the current data frame of the target scene from other devices.
  • the current data frame may be an image frame, for example, the current data frame may be a depth image of the target scene, or the current data frame may also be point cloud data collected for the target scene.
  • Here, the depth image may include a common RGB image (a color image with three color channels of red (R), green (G), and blue (B)) together with a corresponding depth map.
  • target detection may be performed on the current data frame to obtain a first detection result.
  • any detection method can be used to perform target detection on the current data frame.
  • the first detection result may be a detection frame obtained by performing target detection on the current data frame, and the detection frame may indicate the position and size of the target object, so that the first detection result may include position information and size information.
  • the detection frame may be a three-dimensional (3-Dimension, 3D) detection frame, and the position and size of the target object indicated by the detection frame may be the position and size of the target object in the target scene.
  • the first detection result can be considered as a relatively rough detection result.
  • Alternatively, the electronic device may also directly obtain the first detection result from another device.
  • the position of the target object indicated by the first detection result may be the position of the target object in the world coordinate system of the target scene.
  • the first detection result may be the coordinates of the target object in the world coordinate system.
  • the electronic device may directly acquire the first detection result including the position of the target object in the world coordinate system.
  • Alternatively, the position of the target object in the coordinate system of the image acquisition device may be obtained first, and then, according to the relative pose transformation between the coordinate system of the image acquisition device and the world coordinate system, the position of the target object in the world coordinate system may be obtained from its position in the coordinate system of the image acquisition device.
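  • As an illustration (not from the original disclosure), the sketch below shows this coordinate transformation in Python with numpy; the rotation R_wc and translation t_wc are assumed to describe the pose of the image acquisition device in the world coordinate system.

```python
import numpy as np

def camera_to_world(points_cam: np.ndarray, R_wc: np.ndarray, t_wc: np.ndarray) -> np.ndarray:
    """Transform points from the image acquisition device (camera) coordinate
    system into the world coordinate system of the target scene.

    points_cam: (N, 3) positions in camera coordinates.
    R_wc:       (3, 3) rotation from the camera frame to the world frame.
    t_wc:       (3,)   camera origin expressed in world coordinates.
    """
    return points_cam @ R_wc.T + t_wc

# Example: an object detected 2 m in front of a camera located at (1, 0, 1.5).
R_wc = np.eye(3)                      # camera axes aligned with world axes
t_wc = np.array([1.0, 0.0, 1.5])
center_world = camera_to_world(np.array([[0.0, 0.0, 2.0]]), R_wc, t_wc)
```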
  • the target object may be an object, a person, etc. existing in the target scene.
  • the target objects can be pedestrians, tables, chairs, etc.
  • the first detection result may further include object information of the indicated target object, so that the target object indicated by the first detection result may be determined according to the object information of the first detection result.
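  • For illustration only, a first detection result carrying position information, size information, and object information could be represented as in the following hypothetical sketch (the field names are ours, not the patent's):

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class DetectionResult:
    """A 3D detection frame: position and size of the target object in the
    world coordinate system, plus object information identifying the object."""
    center: np.ndarray               # (3,) box center in world coordinates
    size: np.ndarray                 # (3,) box extent along x, y, z
    object_id: Optional[int] = None  # object information; None until matched

# A rough first detection result before association with historical results.
det = DetectionResult(center=np.array([1.0, 0.5, 2.0]),
                      size=np.array([0.8, 0.8, 1.7]))
```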
  • Step S12: update the first detection result based on the historical optimization result of the target scene to obtain the first observation result of the target object in the current data frame.
  • the historical optimization result of the target scene may be the detection result of the target object obtained by performing optimization based on the second detection result, and the historical optimization result may more accurately indicate the location of the target object.
  • the second detection result may be obtained by performing target detection on all or part of the historical data frame of the target scene, the historical data frame may be the data frame collected before the current data frame, and the second detection result may be the historical detection result of the target object.
  • the manner of acquiring the second detection result may be similar to the manner of acquiring the above-mentioned first detection result, which will not be repeated here.
  • the second detection result may be a detection frame obtained by performing target detection on the historical data frame, and the second detection result may include position information and size information.
  • In some embodiments, one target object in the target scene may correspond to one historical optimization result; that is, from the plurality of second detection results obtained by target detection on all or part of the historical data frames, one historical optimization result can be obtained for each target object.
  • Each time a new optimization result is obtained, the stored historical optimization result can be updated, so that one target object corresponds to one historical optimization result, thereby reducing the amount of stored historical optimization results.
  • the optimization result corresponding to each data frame may also be stored, which is not limited in this embodiment of the present disclosure.
  • In this case, the historical optimization results mentioned in step S12 can be considered as the optimization results corresponding to the previous data frame of the current data frame.
  • the first detection result may be updated by using the historical optimization result of the target scene.
  • In some embodiments, the historical optimization result may be matched with the first detection result, and an association may be established between the target object corresponding to the first detection result and the known target object corresponding to the historical optimization result.
  • The first detection result may then be updated according to the association between the target object corresponding to the first detection result and the known target object corresponding to the historical optimization result; for example, the object information of the first detection result may be determined, or the historical optimization result and the first detection result of the same target object may be combined, for example, by merging the detection frame corresponding to the historical optimization result with the detection frame corresponding to the first detection result.
  • In this way, the connection between the target object of the current data frame and the target object of the historical data frames can be established, so that the obtained first observation result has more accurate object information.
  • the first observation result may also be a detection frame, and correspondingly, the first observation result may include position information and size information of the target object.
  • Step S13: correct the first observation result according to the point cloud data corresponding to the first observation result to obtain the first correction result of the target object.
  • A target object in the target scene may exist in the current data frame and may also exist in one or more historical data frames, so that a target object in the current data frame may have both a first observation result and a historical optimization result.
  • Accordingly, the first observation result can be corrected according to the point cloud data corresponding to the first observation result of the target object, to obtain the first correction result of the target object.
  • For example, the point cloud data corresponding to the first observation result of the target object and the point cloud data corresponding to the historical optimization result can be used together to correct the first observation result, obtaining the first correction result of the target object.
  • During correction, obviously abnormal data in the point cloud data corresponding to the first observation result and/or the historical optimization result may be deleted, or missing data in the point cloud data corresponding to the first observation result may be supplemented, to then obtain the first correction result of the target object. In this way, the first correction result may more accurately indicate the position, in the target scene, of the target object in the current data frame.
  • In the case where the current data frame is an image frame, the image frame may be converted into point cloud data according to the depth information of the image frame; the point cloud data corresponding to the historical optimization result and/or the first observation result can then be obtained.
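  • As an illustrative sketch of this conversion (not from the original disclosure), the following back-projects a depth map into a camera-frame point cloud using pinhole intrinsics K; treating zero depth as invalid is our assumption:

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Back-project a depth map (H, W) into an (N, 3) camera-frame point cloud
    using pinhole intrinsics K; pixels with zero depth are treated as invalid."""
    h, w = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.reshape(-1)
    x = (u.reshape(-1) - cx) * z / fx
    y = (v.reshape(-1) - cy) * z / fy
    points = np.stack([x, y, z], axis=1)
    return points[z > 0]

# Example with a synthetic 2x2 depth map and illustrative intrinsics.
K = np.array([[500.0, 0.0, 1.0], [0.0, 500.0, 1.0], [0.0, 0.0, 1.0]])
cloud = depth_to_point_cloud(np.array([[1.0, 1.2], [0.0, 1.1]]), K)
```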
  • the first detection result may be updated through the historical optimization result of the target scene, so that the association between the current data frame and the historical data frame may be established.
  • the process of obtaining the first observation result of the target object in the current data frame will be described below through an implementation manner.
  • First, the object information of the first detection result is determined based on the historical optimization result of the target scene; the object information is used to identify the target object. Then, the first detection result is updated according to the object information of the first detection result to obtain the first observation result of the target object in the current data frame.
  • the historical optimization result of the target object in the target scene may be used to determine the object information of the first detection result.
  • For example, the detection frame of a historical optimization result may overlap with the detection frame of the first detection result; in that case, the target object indicated by the historical optimization result and the target object indicated by the first detection result can be considered the same target object, so the object information of the historical optimization result can be used as the object information of the first detection result.
  • If the detection frame of no historical optimization result overlaps with the detection frame of the first detection result, the target object indicated by the first detection result can be considered a newly detected target object in the target scene, and new object information can be generated to identify it.
  • In a possible implementation, the historical optimization result of the target scene may be matched with the first detection result, and if the first detection result matches a historical optimization result, the object information of that historical optimization result may be determined as the object information of the first detection result.
  • In matching the historical optimization results of the target scene with the first detection result, the matching degree between the detection frame of each historical optimization result and the detection frame of the first detection result may be determined.
  • For a first detection result, the historical optimization result whose matching degree with the first detection result is the highest and greater than a matching-degree threshold can be determined as the historical optimization result matching that first detection result.
  • Then, the object information of the matching historical optimization result is used as the object information of the first detection result, obtaining the first observation result of the target object; the first observation result may be the first detection result after its object information is updated.
  • When determining the matching degree, the first volume of the overlapping portion of the detection frame of a first detection result and the detection frame of a historical optimization result may be determined, along with the total volume jointly occupied by the detection frame of the first detection result and the detection frame of the historical optimization result; the ratio of the first volume to the total volume can then be used as the matching degree between the historical optimization result and the first detection result. That is, the three-dimensional Intersection over Union (3D IoU) between the detection frame of a first detection result and the detection frame of a historical optimization result may be used as the matching degree between the detection result and the historical optimization result.
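  • A minimal sketch of this matching degree for axis-aligned detection frames follows, reusing the center/size box representation from the earlier sketch; the matching-degree threshold value is an illustrative assumption, not taken from the patent:

```python
import numpy as np

def iou_3d(center_a, size_a, center_b, size_b) -> float:
    """3D Intersection over Union of two axis-aligned detection frames:
    the overlap (first) volume divided by the total volume the two frames occupy."""
    min_a, max_a = center_a - size_a / 2, center_a + size_a / 2
    min_b, max_b = center_b - size_b / 2, center_b + size_b / 2
    edges = np.clip(np.minimum(max_a, max_b) - np.maximum(min_a, min_b), 0.0, None)
    first_volume = edges.prod()
    total_volume = size_a.prod() + size_b.prod() - first_volume
    return float(first_volume / total_volume) if total_volume > 0 else 0.0

def best_match(det, historical_results, threshold=0.25):
    """Return the historical optimization result whose matching degree with the
    detection is highest and above the threshold, else None.
    (threshold=0.25 is an illustrative value, not from the patent.)"""
    scored = [(iou_3d(det.center, det.size, h.center, h.size), h)
              for h in historical_results]
    score, match = max(scored, default=(0.0, None), key=lambda s: s[0])
    return match if score > threshold else None
```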
  • In the case that the first detection result does not match any historical optimization result, new object information is set for the first detection result.
  • Here, if the first detection result does not match any historical optimization result, the first detection result can be considered the detection result of a newly observed target object in the target scene, and new object information is therefore set for it. In this way, the first detection result can be made to correspond to the newly observed target object.
  • In a possible implementation, based on the historical optimization result of the target scene, when it is determined that the current data frame contains a target object not detected by the first detection result, the historical optimization result of the undetected target object is determined as the first observation result of the undetected target object in the current data frame.
  • Here, each historical optimization result may be obtained by performing target detection based on the historical data frames of the target scene, and the same target object detected in multiple historical data frames may correspond to one historical optimization result.
  • Each historical optimization result may include the position information and object information of a target object, so the target objects existing in the target scene can be determined according to the historical optimization results of the historical data frames.
  • If, according to the historical optimization results, a target object should be observable within the field of view of the current data frame, but the first detection result of the current data frame does not include that target object, it can be considered that a missed detection has occurred in the current data frame. Therefore, the historical optimization result of the undetected target object can be determined as the first observation result of that target object in the current data frame, thereby reducing missed detections and greatly increasing the reliability of target detection.
  • the first observation result may be corrected to obtain the first correction result.
  • the first correction result has more accurate position information, thereby making the target detection more accurate. The process of obtaining the first correction result will be described below through a possible implementation manner.
  • the point cloud data of the historical optimization result of the same target object and the point cloud data corresponding to the first observation result may be merged to obtain merged point cloud data. Then, based on the merged point cloud data, a first correction result obtained by correcting the first observation result is obtained.
  • In this implementation, the historical optimization result and the first observation result belonging to the same target object may be identified according to their object information. Since the object information marks the target object, if the object information is the same, the historical optimization result and the first observation result can be considered to belong to the same target object. For the same target object, the point cloud data in the detection frame of the historical optimization result and the point cloud data in the detection frame of the first observation result can be obtained, and the point cloud data corresponding to the historical optimization result can be merged with the point cloud data corresponding to the first observation result, obtaining the merged point cloud data of the target object.
  • the first observation result can be corrected according to the merged point cloud data to obtain the first correction result of the target object.
  • the merged point cloud data of a target object can be input into a neural network, and the position information of the first observation result can be corrected by using the neural network to obtain the first corrected result output by the neural network.
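  • The segmentation and merging step might look like the following sketch (illustrative only): scene points falling inside the detection frame of the historical optimization result and inside the detection frame of the first observation result are collected and concatenated; refine_net, standing in for the correction neural network, is hypothetical.

```python
import numpy as np

def points_in_box(points: np.ndarray, center: np.ndarray, size: np.ndarray) -> np.ndarray:
    """Segment the scene point cloud, keeping points inside an axis-aligned frame."""
    lo, hi = center - size / 2, center + size / 2
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

def merged_cloud(scene_points, hist_center, hist_size, obs_center, obs_size):
    """Merge the point cloud in the historical optimization result's frame
    with the point cloud in the first observation result's frame."""
    hist_pts = points_in_box(scene_points, hist_center, hist_size)
    obs_pts = points_in_box(scene_points, obs_center, obs_size)
    return np.concatenate([hist_pts, obs_pts], axis=0)

# The merged cloud is then fed to the correction network (hypothetical):
# corrected_frame = refine_net(merged_cloud(...), observation_frame)
```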
  • the merged point cloud data of the same object can be used to obtain the first correction result with more accurate position information, so that the historical information of the same target object (such as the position information of the historical optimization result) can be considered in the target detection process, improving the The accuracy of object detection.
  • In some embodiments, each data frame may correspond to an optimization result of a target object, so that, when correction and optimization are performed on the first observation result of the current data frame, for the same target object the point cloud data corresponding to the historical optimization result of the previous data frame of the current data frame can be merged with the point cloud data corresponding to the first observation result; that is, the historical optimization result of the previous data frame is used to correct the first observation result of the current data frame.
  • Since the historical optimization result of the previous data frame of the current data frame is the most recently stored, it is more accurate than the historical optimization results corresponding to other historical data frames; using it to correct the first observation result of the current data frame therefore makes the obtained first correction result more accurate.
  • In other embodiments, correction and optimization may be performed on the first observation results of only some of the collected data frames, for example, by selecting a data frame for correction and optimization every certain number of frames; in that case, not every data frame corresponds to an optimization result of a target object.
  • In this case, the most recently stored historical optimization result of the target object may be selected to correct the first observation result of the current data frame.
  • the first correction result may be optimized. The process of further optimizing the first correction result will be described below.
  • a correction result of the target object may be obtained, wherein the correction result includes a first correction result and a second correction result, and the second correction result is obtained by performing target detection based on the historical data frame of the target scene. Based on the target result in the correction result, the current optimization result of the target object can be determined.
  • the first correction result of the current data frame may be combined with the second correction result of the historical data frame to further optimize the first correction result.
  • the second correction result may be obtained based on a second detection result of target detection performed on a historical data frame of the target scene, and the second detection result may be a historical detection result.
  • the manner of determining the second correction result may be the same as the manner of determining the first correction result, which will not be repeated here.
  • Each historical data frame may correspond to a second correction result of the target object, and the same target object may correspond to a series of second correction results as data frames are continuously collected on the target scene.
  • In this way, correction results including the first correction result and the second correction results can be obtained, so that the target detection information of the historical data frames (the second correction results) can be taken into account.
  • Based on the target results among the correction results, the current optimization result of the target object can be determined. For example, one or several correction results can be selected from the correction results of a target object as the target results, and a target result can be used directly as the current optimization result, or the average or median of multiple target results can be taken as the current optimization result. Since the position change of the target object is usually small, the correction results of the same target object obtained from different data frames should be consistent, so the current optimization result of the target object can be obtained from multiple correction results, making the target detection more accurate.
  • In a possible implementation, errors between the first correction result among the correction results and a plurality of second correction results may be determined, where the first correction result is any one of the correction results and the second correction results are the correction results other than the first correction result.
  • The target results among the correction results are then determined according to the number of inliers corresponding to the first correction result.
  • an example of determining the target result in the correction result is provided.
  • any one of the correction results may be used as the first correction result, and the correction results other than the first correction result among the multiple correction results may be used as the second correction result.
  • The errors between the first correction result and the plurality of second correction results can be calculated separately, and the number of inliers corresponding to the first correction result can be counted according to these errors. For example, the error between the position information of the first correction result and the position information of a second correction result can be calculated.
  • If this error is less than the error threshold, the second correction result is an inlier of the first correction result; the number of inliers of the first correction result, that is, the number of second correction results whose error from the first correction result is smaller than the error threshold, is taken as the number of inliers corresponding to the first correction result.
  • the target result in the correction result may be determined according to the number of inliers corresponding to the first correction result. For example, the first correction result with the largest number of inliers is determined as the target result among the correction results. In this way, the current optimization result of the target object can be determined according to the relatively accurate target result in the correction result, and the correction result with lower accuracy can be removed, so that the accuracy of target detection can be further improved.
  • a first correction result with the largest number of inliers among the plurality of first correction results is determined. Then, the first correction result with the largest number of inliers and the second correction result with an error smaller than the error threshold from the first correction result with the largest number of inliers are determined as target results in the correction results.
  • Here, a second correction result whose error from the first correction result is smaller than the error threshold is an inlier of that first correction result. That a first correction result of a target object has the largest number of inliers can indicate that, in the case where the position of the target object changes little, this first correction result and its inliers are closest to the true state of the target object. Therefore, this first correction result and its inliers can be determined as the target results among the correction results of the target object.
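  • A compact sketch of this inlier counting follows, assuming each correction result is summarized by its box center and that the error is the Euclidean distance between centers (our simplification of the error between position information):

```python
import numpy as np

def select_target_results(centers: np.ndarray, err_threshold: float) -> np.ndarray:
    """centers: (M, 3) box centers of all correction results of one object.
    Treat each correction result in turn as the 'first' one, count its inliers
    (other results whose error is below the threshold), and return the result
    with the most inliers together with those inliers as the target results."""
    errors = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    inlier_mask = errors < err_threshold   # each row: inliers (incl. itself)
    counts = inlier_mask.sum(axis=1)
    best = int(np.argmax(counts))          # correction result with most inliers
    return centers[inlier_mask[best]]      # it and its inliers = target results
```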
  • In a possible implementation, the current optimization result of a target object may be determined based on the multiple target results among the correction results of that target object, so that the first correction result of the target object is further optimized and the current optimization result obtained after optimization more accurately indicates the position of the target object.
  • For example, an optimal value can be estimated from the position information of the target object in each target result such that the optimal value satisfies a specific condition, and this optimal value can be used as the current optimization result of the target object.
  • In some embodiments, the current optimization result is estimated from the position information of the target object in each target result such that the sum of the errors between the current optimization result and the multiple target results is minimized. For example, the current optimization result can be regarded as an unknown variable x, an objective formed by the sum of the squared errors between x and each target result c_i, that is, min over x of Σ_i ||x - c_i||², can be established, and the value of x that minimizes this sum can be solved for; the solved value of the unknown variable is then used as the current optimization result of the target object.
  • the sum of the distances between the obtained current optimization result and the position information of multiple target results can be minimized. In this way, the current optimization result can be used as the final detection result of the target object, thereby improving the accuracy of target detection.
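  • When the objective is the sum of squared errors between the unknown variable and the target results, the minimizer has a closed form, namely the mean of the target results; a one-function sketch (centers only, our simplification):

```python
import numpy as np

def current_optimization_result(target_centers: np.ndarray) -> np.ndarray:
    """Solve min_x sum_i ||x - c_i||^2 over the target results' centers c_i;
    setting the gradient to zero gives x = mean(c_i)."""
    return target_centers.mean(axis=0)

# Example: fuse the target results selected by the inlier test above.
# estimate = current_optimization_result(select_target_results(centers, 0.2))
```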
  • After the current optimization result of a target object is obtained, it may be saved, or the saved historical optimization result of the target object may be updated to the obtained current optimization result.
  • FIG. 2 shows a flowchart of an example of a target detection method according to an embodiment of the present disclosure.
  • Step S201: obtain the 3D detection frames (first detection results) of the current data frame of the target scene;
  • Step S202: match the historical optimal estimation frames (historical optimization results) of the known objects in the target scene with the 3D detection frames of the current data frame to obtain the current observation frame (first observation result) of each target object in the current data frame;
  • Step S203: for each target object, use the historical optimal estimation frame of the target object and the current observation frame of the current data frame to segment the point cloud data of the target scene, retaining the point cloud data within the historical optimal estimation frame and/or the current observation frame of the target object;
  • Step S204: input the point cloud data in the optimal estimation frame and/or the current observation frame of each target object, together with the current observation frame corresponding to the target object, into the neural network, and use the neural network to correct the current observation frame of each target object to obtain the current correction frame (first correction result) of each target object in the current data frame;
  • Step S205: jointly optimize the current correction frame and the historical correction frames of each target object to obtain the current optimal estimation frame (current optimization result) of each target object.
  • The target detection solution provided by the embodiments of the present disclosure can improve the accuracy of target detection; even if there is occlusion or truncation in the target scene, the obtained detection result remains robust, improving the stability of target detection.
  • It should be understood that the embodiments of the present disclosure also provide apparatuses, electronic devices, computer-readable storage media, and programs, all of which can be used to implement any target detection method provided by the embodiments of the present disclosure; for the corresponding technical solutions, refer to the records of the method embodiments, which will not be repeated.
  • Those skilled in the art can understand that the writing order of the steps in the above methods does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
  • FIG. 3 shows a block diagram of a target detection apparatus according to an embodiment of the present disclosure. As shown in FIG. 3 , the apparatus includes:
  • the obtaining module 31 is configured to obtain a first detection result obtained by performing target detection on the current data frame of the target scene;
  • a determination module 32 configured to update the first detection result based on the historical optimization result of the target scene to obtain the first observation result of the target object in the current data frame;
  • the correction module 33 is configured to correct the first observation result according to the point cloud data corresponding to the first observation result to obtain a first correction result of the target object.
  • In a possible implementation, the determining module 32 is configured to: determine object information of the first detection result based on the historical optimization result of the target scene, where the object information is used to identify the target object; and update the first detection result according to the object information of the first detection result to obtain the first observation result of the target object in the current data frame.
  • In a possible implementation, the determining module 32 is configured to: match the historical optimization result of the target scene with the first detection result; and, in the case that the first detection result matches the historical optimization result, determine the object information of the historical optimization result as the object information of the first detection result.
  • the determining module 32 is configured to set new object information for the first detection result in the case that the first detection result does not match the historical optimization result.
  • In a possible implementation, the determining module 32 is configured to: determine the first volume of the overlapping portion of the detection frame of the first detection result and the detection frame of a historical optimization result, and determine the total volume occupied by the detection frame of the first detection result and the detection frame of the historical optimization result; and determine the matching degree between the first detection result and the historical optimization result according to the ratio of the first volume to the total volume.
  • the determining module 32 is configured to, based on the historical optimization result of the target scene, in the case that it is determined that the current data frame has a target object that is not detected by the first detection result, The historical optimization result of the undetected target object is determined as the first observation result of the undetected target object in the current data frame.
  • In a possible implementation, the correction module 33 is configured to: merge the point cloud data corresponding to the historical optimization result of the same target object with the point cloud data corresponding to the first observation result to obtain merged point cloud data; and obtain, based on the merged point cloud data, a first correction result by correcting the first observation result.
  • In a possible implementation, the correction module 33 is configured to, for the same target object, merge the point cloud data corresponding to the historical optimization result of the previous data frame of the current data frame with the point cloud data corresponding to the first observation result.
  • In a possible implementation, the apparatus further includes: an optimization module configured to obtain correction results of the target object, where the correction results include the first correction result and second correction results, and the second correction results are obtained by performing target detection based on the historical data frames of the target scene; and to determine, based on the target results among the correction results, the current optimization result of the target object.
  • In a possible implementation, the optimization module is further configured to: determine errors between a first correction result among the correction results and a plurality of second correction results, where the first correction result is any one of the correction results and the second correction results are the correction results other than the first correction result; count the number of inliers corresponding to the first correction result, where the number of inliers is the number of second correction results whose error from the first correction result is smaller than the error threshold; and determine the target results among the correction results according to the number of inliers corresponding to the first correction result.
  • In a possible implementation, the optimization module is configured to: determine the first correction result with the largest number of inliers among the plurality of first correction results; and determine the first correction result with the largest number of inliers, together with the second correction results whose error from it is smaller than the error threshold, as the target results among the correction results.
  • In a possible implementation, the sum of the errors between the current optimization result and the plurality of target results is minimized.
  • the functions or modules included in the apparatuses provided in the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments.
  • FIG. 4 is a block diagram of an electronic device 800 according to an exemplary embodiment.
  • electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, and the like.
  • the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814 , and the communication component 816 .
  • the processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 802 can include one or more processors 820 to execute instructions to perform all or some of the steps of the methods described above.
  • processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components.
  • processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802.
  • Memory 804 is configured to store various types of data to support operation at electronic device 800 . Examples of such data include instructions for any application or method operating on electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like. Memory 804 may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as static random access memory (SRAM), electrically erasable programmable read only memory (EEPROM), erasable Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Magnetic Memory, Flash Memory, Magnetic or Optical Disk.
  • Power supply assembly 806 provides power to various components of electronic device 800 .
  • Power supply components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to electronic device 800 .
  • Multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front and rear cameras can be a fixed optical lens system or have focal length and optical zoom capability.
  • Audio component 810 is configured to output and/or input audio signals.
  • audio component 810 includes a microphone (MIC) that is configured to receive external audio signals when electronic device 800 is in operating modes, such as calling mode, recording mode, and voice recognition mode.
  • the received audio signal may be further stored in memory 804 or transmitted via communication component 816 .
  • audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of electronic device 800 .
  • For example, the sensor assembly 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 can also detect a change in the position of the electronic device 800 or one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 816 is configured to facilitate wired or wireless communication between electronic device 800 and other devices.
  • Electronic device 800 may access wireless networks based on communication standards, such as WiFi, 2G or 3G, or a combination thereof.
  • In exemplary embodiments, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel.
  • In exemplary embodiments, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
  • For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • In exemplary embodiments, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, to perform the above method.
  • In exemplary embodiments, a non-volatile computer-readable storage medium is also provided, such as the memory 804 comprising computer program instructions executable by the processor 820 of the electronic device 800 to perform the above method.
  • An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to perform the above method.
  • The electronic device may be provided as a terminal, a server, or a device in another form.
  • FIG. 5 is a block diagram of an electronic device 1900 according to an exemplary embodiment.
  • For example, the electronic device 1900 may be provided as a server.
  • Referring to FIG. 5, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as applications.
  • An application stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions.
  • In addition, the processing component 1922 is configured to execute instructions to perform the above-described methods.
  • The electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958.
  • The electronic device 1900 can operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system (Mac OS X™) introduced by Apple, the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
  • In exemplary embodiments, a non-volatile computer-readable storage medium is also provided, such as the memory 1932 comprising computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described method.
  • Embodiments of the present disclosure may be systems, methods and/or computer program products.
  • The computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the embodiments of the present disclosure.
  • A computer-readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device.
  • The computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of more specific examples of computer-readable storage media includes: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device, such as a punch card or a raised structure in a groove with instructions stored thereon, and any suitable combination of the foregoing.
  • Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (for example, light pulses through fiber-optic cables), or electrical signals transmitted through wires.
  • The computer-readable program instructions described herein may be downloaded to respective computing/processing devices from a computer-readable storage medium, or to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium in the respective computing/processing device.
  • The computer program instructions for carrying out the operations of the disclosed embodiments may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages, such as Smalltalk and C++, and conventional procedural programming languages, such as the "C" language or similar programming languages.
  • The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • The remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • In some embodiments, custom electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), may be personalized with state information of the computer-readable program instructions, and these electronic circuits may execute the computer-readable program instructions to implement various aspects of the embodiments of the present disclosure.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or the other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or other equipment, causing a series of operational steps to be performed on the computer, the other programmable apparatus, or the other equipment to produce a computer-implemented process, so that the instructions executing on the computer, the other programmable apparatus, or the other equipment implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two blocks in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
  • Embodiments of the present disclosure relate to a target detection method and apparatus, an electronic device, and a storage medium, where the method includes: acquiring a first detection result obtained by performing target detection on a current data frame of a target scene; updating the first detection result based on a historical optimization result of the target scene to obtain a first observation result of a target object in the current data frame; and correcting the first observation result according to point cloud data corresponding to the first observation result to obtain a first correction result of the target object.

Abstract

A target detection method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a first detection result obtained by performing target detection on a current data frame of a target scene (S11); updating the first detection result based on a historical optimization result of the target scene to obtain a first observation result of a target object in the current data frame (S12); and correcting the first observation result according to point cloud data corresponding to the first observation result to obtain a first correction result of the target object (S13). The method combines the first detection result of the target scene with the historical optimization result and takes the correlation between them into account, so that the resulting first correction result represents the position of the target object more accurately.

Description

Target detection method and apparatus, electronic device, and storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
The present disclosure is based on, and claims priority to, Chinese patent application No. 202010725039.5 filed on July 24, 2020, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
Embodiments of the present disclosure relate to the technical field of computer vision, and in particular to a target detection method and apparatus, an electronic device, and a storage medium.
BACKGROUND
Computer vision technology can simulate biological vision by means of electronic devices. With the development of computer vision, more and more work can be done with electronic devices, providing convenience for people. Target detection is an important task in computer vision; its goal is to estimate the position information of objects within the field of view. A stable target detection technique can not only estimate the position information of objects, but also help optimize the pose of the camera or support the development of other applications such as augmented reality and indoor navigation.
In the related art, occlusion or truncation may occur in a target detection scene, and some image frames may suffer from missed detections, so the accuracy of estimating the position information of objects is low.
SUMMARY
Embodiments of the present disclosure provide a technical solution for target detection.
According to an aspect of the embodiments of the present disclosure, a target detection method is provided, including: acquiring a first detection result obtained by performing target detection on a current data frame of a target scene; updating the first detection result based on a historical optimization result of the target scene to obtain a first observation result of a target object in the current data frame; and correcting the first observation result according to point cloud data corresponding to the first observation result to obtain a first correction result of the target object.
In some possible implementations, updating the first detection result based on the historical optimization result of the target scene to obtain the first observation result of the target object in the current data frame includes: determining object information of the first detection result based on the historical optimization result of the target scene, where the object information is used to identify the target object; and updating the first detection result according to the object information of the first detection result to obtain the first observation result of the target object in the current data frame. In this way, by determining the object information of the first detection result, a connection between the historical optimization result and the first detection result can be established, improving the accuracy of target detection.
In some possible implementations, determining the object information of the first detection result based on the historical optimization result of the target scene includes: matching the historical optimization result of the target scene against the first detection result; and in a case where the first detection result matches the historical optimization result, determining the object information of the historical optimization result as the object information of the first detection result. In this way, the first detection result can be further updated to obtain a first observation result with accurate object information.
In some possible implementations, determining the object information of the first detection result based on the historical optimization result of the target scene includes: in a case where the first detection result does not match the historical optimization result, setting new object information for the first detection result. In this way, the first detection result can correspond to a newly observed target object.
In some possible implementations, matching the historical optimization result of the target scene against the first detection result includes: determining a first volume of the overlap between the detection box of the first detection result and the detection box of a historical optimization result, and determining the total volume jointly occupied by the detection box of the first detection result and the detection box of that historical optimization result; and determining the degree of matching between the first detection result and that historical optimization result according to the ratio of the first volume to the total volume. In this way, the degree of matching between the detection result and the historical optimization result can be determined more precisely.
In some possible implementations, updating the first detection result based on the historical optimization result of the target scene to obtain the first observation result of the target object in the current data frame includes: based on the historical optimization result of the target scene, in a case where it is determined that the current data frame contains a target object not detected by the first detection result, determining the historical optimization result of the undetected target object as the first observation result of the undetected target object in the current data frame. In this way, missed detections can be reduced, greatly increasing the reliability of target detection.
In some possible implementations, correcting the first observation result according to the point cloud data corresponding to the first observation result to obtain the first correction result of the target object includes: merging the point cloud data corresponding to the historical optimization result of a same target object with the point cloud data corresponding to the first observation result to obtain merged point cloud data; and obtaining, based on the merged point cloud data, a first correction result that corrects the first observation result. In this way, the merged point cloud data of the same object is used to obtain a first correction result with more accurate position information, and the historical information of the same target object can be taken into account during target detection, improving detection accuracy.
In some possible implementations, merging the point cloud data corresponding to the historical optimization result of the same target object with the point cloud data corresponding to the first observation result includes: for the same target object, merging the point cloud data corresponding to the historical optimization result of the data frame immediately preceding the current data frame with the point cloud data corresponding to the first observation result. In this way, the historical optimization result of the preceding data frame is used to correct the first observation result of the current data frame, making the resulting first correction result more accurate.
In some possible implementations, the method further includes: acquiring correction results of the target object, where the correction results include the first correction result and second correction results, the second correction results being obtained by performing target detection based on historical data frames of the target scene; and determining a current optimization result of the target object based on target results among the correction results. In this way, multiple correction results can be used to obtain the current optimization result of the target object, making target detection more accurate.
In some possible implementations, the method further includes: determining errors between a first correction result among the correction results and each of a plurality of second correction results, where the first correction result is any one of the correction results, and the second correction results are the corrected observation boxes other than the first correction result; counting the number of inliers corresponding to the first correction result, where the number of inliers is the number of second correction results whose error with respect to the first correction result is smaller than an error threshold; and determining the target results among the correction results according to the number of inliers corresponding to the first correction result. In this way, the current optimization result of the target object is determined from the relatively accurate target results among the correction results, and correction results of lower accuracy are discarded, further improving the accuracy of target detection.
In some possible implementations, determining the target results among the correction results according to the number of inliers corresponding to the first correction result includes: determining, among a plurality of first correction results, the first correction result with the largest number of inliers; and determining, as the target results among the correction results, the first correction result with the largest number of inliers together with the second correction results whose errors with respect to it are smaller than the error threshold. In this way, the first correction result of the target object can be further optimized, so that the resulting current optimization result indicates the position of the target object more accurately.
In some possible implementations, the sum of the errors between the current optimization result and the plurality of target results is minimized.
According to an aspect of the embodiments of the present disclosure, a target detection apparatus is provided, including: an acquisition module configured to acquire a first detection result obtained by performing target detection on a current data frame of a target scene; a determination module configured to update the first detection result based on a historical optimization result of the target scene to obtain a first observation result of a target object in the current data frame; and a correction module configured to correct the first observation result according to point cloud data corresponding to the first observation result to obtain a first correction result of the target object.
According to an aspect of the embodiments of the present disclosure, an electronic device is provided, including:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the above target detection method.
According to an aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored, where the computer program instructions, when executed by a processor, implement the above target detection method.
Embodiments of the present disclosure provide a computer program, including computer-readable code, where when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions configured to implement any one of the above target detection methods.
In the embodiments of the present disclosure, a first detection result obtained by performing target detection on a current data frame of a target scene can be acquired; the first detection result is then updated based on a historical optimization result of the target scene to obtain a first observation result of a target object in the current data frame; and the first observation result is corrected according to the historical optimization result of the target object and the point cloud data corresponding to the first observation result to obtain a first correction result of the target object. In this way, the first detection result of the target scene can be combined with the historical optimization result, and the correlation between them is taken into account, so that the resulting first correction result represents the position of the target object more accurately.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the embodiments of the present disclosure.
Other features and aspects of the embodiments of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the embodiments of the present disclosure.
FIG. 1A is a schematic diagram of a system architecture to which the target detection method of the embodiments of the present disclosure can be applied;
FIG. 1B is a flowchart of a target detection method according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of an example of a target detection method according to an embodiment of the present disclosure;
FIG. 3 is a block diagram of a target detection apparatus according to an embodiment of the present disclosure;
FIG. 4 is a block diagram of an example of an electronic device according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of an example of an electronic device according to an embodiment of the present disclosure.
DETAILED DESCRIPTION
Various exemplary embodiments, features, and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise indicated.
The word "exemplary" as used herein means "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as superior to or better than other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate three cases: A exists alone, both A and B exist, and B exists alone. In addition, the term "at least one" herein indicates any one of multiple items or any combination of at least two of multiple items; for example, including at least one of A, B, and C may indicate including any one or more elements selected from the set consisting of A, B, and C.
In addition, numerous specific details are given in the following detailed description in order to better illustrate the embodiments of the present disclosure. Those skilled in the art should understand that the embodiments of the present disclosure can also be practiced without certain specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the embodiments of the present disclosure.
With the target detection solution provided by the embodiments of the present disclosure, a first detection result obtained by performing target detection on a current data frame of a target scene can be acquired; the first detection result is then updated based on a historical optimization result of the target scene to obtain a first observation result of a target object in the current data frame; and the first observation result is corrected according to the historical optimization result of the target object and the point cloud data corresponding to the first observation result to obtain a first correction result that corrects the first observation result. Here, the first observation result, obtained by combining the first detection result with the historical optimization result, can indicate the target object in the current data frame more accurately; furthermore, the first observation result can be adjusted using the point cloud data corresponding to the historical optimization result and the first observation result, so that the first correction result can indicate the position of the target object more accurately.
In the related art, target detection is usually performed separately on each data frame collected for a target scene. However, this manner of target detection has significant limitations: for example, the detection results for a same object may jitter across frames, and when a target object in the scene is occluded or truncated, it is difficult to estimate its position accurately, so the detection results are less accurate. The embodiments of the present disclosure combine the first detection result of the current data frame of the target scene with the historical optimization result, thereby taking into account the temporal continuity of the position of a same target object and improving the accuracy of estimating the position of the target object.
The technical solutions provided by the embodiments of the present disclosure can be applied to extensions of application scenarios such as target detection, target tracking, positioning, and navigation, which are not limited by the embodiments of the present disclosure. For example, they can be applied to augmented reality on a terminal: with the obtained first correction result of a target object in an indoor scene, indoor positioning and/or indoor navigation can be implemented.
FIG. 1A is a schematic diagram of a system architecture to which the target detection method of the embodiments of the present disclosure can be applied. As shown in FIG. 1A, the system architecture includes a data frame collection terminal 131, a network 132, and a target detection terminal 133. To support exemplary applications of the embodiments of the present disclosure, the data frame collection terminal 131 and the target detection terminal 133 can establish a communication connection through the network 132. The data frame collection terminal 131 sends the collected current data frame to the target detection terminal 133 through the network 132. The target detection terminal 133 first acquires the first detection result of target detection on the current data frame; then updates the first detection result with the historical optimization result to obtain the first observation result of the target object; and finally corrects this result with the point cloud data of the first observation result to obtain the final correction result of the target object. In this way, the first detection result of the target scene is combined with the historical optimization result, and the correlation between them is taken into account, so that the position of the target object can be represented more accurately.
As an example, the data frame collection terminal 131 may be an image collection device with a camera, and the target detection terminal 133 may include a computer device with certain computing capability, such as a terminal device, a server, or another processing device. The network 132 may use a wired or wireless connection. When the target detection terminal 133 is a server, the data frame collection terminal 131 may communicate with the server through a wired connection, for example, data communication through a bus; when the target detection terminal 133 is a terminal device, the data frame collection terminal 131 may communicate with the target detection terminal 133 through a wireless connection for data communication.
Alternatively, in some scenarios, the target detection terminal 133 may be a vision processing device with a video collection module, or a host with a camera. In this case, the target detection method of the embodiments of the present disclosure may be executed by the target detection terminal, and the above system architecture may not include the network 132 and the data frame collection terminal 131.
FIG. 1B is a flowchart of a target detection method according to an embodiment of the present disclosure. The target detection method may be executed by a terminal device, a server, or another type of electronic device, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the target detection method may be implemented by a processor invoking computer-readable instructions stored in a memory. The target detection method of the embodiments of the present disclosure is described below taking an electronic device as the execution subject.
Step S11: acquiring a first detection result obtained by performing target detection on a current data frame of a target scene.
In the embodiments of the present disclosure, the electronic device may collect data of the target scene to obtain the current data frame of the target scene, or the electronic device may acquire the current data frame of the target scene from another device. The current data frame may be an image frame; for example, the current data frame may be a depth image of the target scene, or the current data frame may be point cloud data collected for the target scene. The depth image may include an ordinary RGB image (a color image with red (R), green (G), and blue (B) channels) and a depth map. Further, target detection may be performed on the current data frame to obtain the first detection result. Here, any detection method may be used to perform target detection on the current data frame. The first detection result may be a detection box obtained by performing target detection on the current data frame; the detection box may indicate the position and size of the target object, so the first detection result may include position information and size information. The detection box may be a three-dimensional (3D) detection box, and the position and size of the target object indicated by the detection box may be the position and size of the target object in the target scene. The first detection result may be regarded as a relatively coarse detection result. In some implementations, the electronic device may also directly acquire the first detection result from another device.
Here, the position of the target object indicated by the first detection result may be its position in the world coordinate system of the target scene; for example, the first detection result may be the coordinates of the target object in the world coordinate system. The electronic device may directly acquire a first detection result that includes the position of the target object in the world coordinate system. In some implementations, the position of the target object in the coordinate system of the image collection apparatus may be acquired first, and then, according to the relative pose transformation between the image collection apparatus coordinate system and the world coordinate system, the position of the target object in the image collection apparatus coordinate system may be converted into its position in the world coordinate system. The target object may be an object, a person, or the like present in the target scene; for example, the target object may be a pedestrian, a table, or a chair. The first detection result may further include object information of the indicated target object, so that the target object indicated by the first detection result can be determined according to this object information.
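By way of illustration only (this sketch is not part of the original disclosure), the conversion from the image collection apparatus (camera) coordinate system to the world coordinate system described above is a rigid transform; the 4x4 homogeneous camera-to-world pose matrix T_world_cam and the box-center representation below are assumptions made for the example.

import numpy as np

def box_center_to_world(center_cam: np.ndarray, T_world_cam: np.ndarray) -> np.ndarray:
    # center_cam:  (3,) box center in the camera frame.
    # T_world_cam: (4, 4) homogeneous camera-to-world pose.
    p = np.append(center_cam, 1.0)   # homogeneous coordinates
    return (T_world_cam @ p)[:3]

# Example: camera 1.5 m above the world origin, no rotation.
T = np.eye(4)
T[2, 3] = 1.5
print(box_center_to_world(np.array([0.2, 0.0, 3.0]), T))  # [0.2 0.  4.5]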
Step S12: updating the first detection result based on a historical optimization result of the target scene to obtain a first observation result of a target object in the current data frame.
In the embodiments of the present disclosure, the historical optimization result of the target scene may be a detection result of a target object obtained by optimization based on second detection results, and the historical optimization result can indicate the position of the target object relatively accurately. The second detection results may be obtained by performing target detection on all or part of the historical data frames of the target scene; a historical data frame may be a data frame collected before the current data frame, and a second detection result may be a historical detection result of the target object. The second detection results may be acquired in a manner similar to that of the first detection result described above, which will not be repeated here. Correspondingly, a second detection result may be a detection box obtained by performing target detection on a historical data frame, and may include position information and size information.
It should be noted that one target object in the target scene may correspond to one historical optimization result; that is, from multiple second detection results obtained by performing target detection on all or part of the historical data frames, one historical optimization result can be obtained for each target object. After a new optimization result of a target object is obtained, the stored historical optimization result can be updated, so that each target object corresponds to one historical optimization result, reducing the number of stored historical optimization results. In some implementations, the optimization result corresponding to each data frame may also be stored, which is not limited by the embodiments of the present disclosure. In the case where the detection results are optimized for every data frame of the target scene, the historical optimization result mentioned in step S12 can be regarded as the optimization result corresponding to the data frame immediately preceding the current data frame.
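As an illustration of the bookkeeping described above (one stored historical optimization result per target object, overwritten when a newer optimization result arrives), a minimal sketch follows; the object-ID key and the box tuple are illustrative assumptions rather than structures specified by the disclosure.

from typing import Dict, Tuple

# (center_xyz, size_whl) — an illustrative box representation.
Box = Tuple[Tuple[float, float, float], Tuple[float, float, float]]
history: Dict[int, Box] = {}  # object_id -> latest historical optimization result

def update_history(object_id: int, new_optimized_box: Box) -> None:
    # Overwrite so that each target object maps to exactly one stored result.
    history[object_id] = new_optimized_box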
Here, the historical optimization result of the target scene can be used to update the first detection result. For example, the historical optimization result can be matched against the first detection result to establish an association between the target object corresponding to the first detection result and the known target object corresponding to the historical optimization result. According to this association, the first detection result can be updated; for example, the object information of the first detection result can be determined, or the historical optimization result and the first detection result of the same target object can be merged, for example, by merging the detection box corresponding to the historical optimization result with the detection box corresponding to the first detection result.
By updating the first detection result based on the historical optimization result of the target scene, a connection can be established between the target object in the current data frame and the target object in the historical data frames, so that the resulting first observation result has more accurate object information. Here, the first observation result may also be a detection box and, correspondingly, may include the position information and size information of the target object.
Step S13: correcting the first observation result according to the point cloud data corresponding to the first observation result to obtain a first correction result of the target object.
In the embodiments of the present disclosure, a target object in the target scene may exist in the current data frame and may also exist in one or more historical data frames, so that a target object in the current data frame may have a first observation result and, in some implementations, may also have a historical optimization result. In the case where a target object in the current data frame has only a first observation result, the first observation result can be corrected according to the point cloud data corresponding to the first observation result of the target object to obtain the first correction result of the target object. In the case where a target object in the current data frame has both a first observation result and a historical optimization result, the first observation result can be corrected according to the point cloud data corresponding to the first observation result of the target object and the point cloud data corresponding to the historical optimization result to obtain the first correction result of the target object. For example, obviously abnormal data in the point cloud data corresponding to the first observation result and/or the historical optimization result can be deleted, or missing data in the point cloud data corresponding to the first observation result can be supplemented, to obtain the first correction result of the target object. In this way, the first correction result can indicate the position of the target object in the target scene in the current data frame more accurately.
Here, in the case where the current data frame is an image frame, the image frame can be converted into point cloud data according to its depth information; the point cloud data corresponding to the historical optimization result and/or the first observation result can then be acquired.
In the embodiments of the present disclosure, the first detection result can be updated with the historical optimization result of the target scene, establishing an association between the current data frame and the historical data frames. The process of obtaining the first observation result of the target object in the current data frame is described below through an implementation.
In one or more possible implementations, first, the object information of the first detection result is determined based on the historical optimization result of the target scene; then, the first detection result is updated according to its object information to obtain the first observation result of the target object in the current data frame, where the object information is used to identify the target object.
In some possible implementations, the historical optimization result of a target object in the target scene can be used to determine the object information of the first detection result. For example, in the case where the detection box of a historical optimization result coincides with the detection box of the first detection result, the target object indicated by the historical optimization result and the target object indicated by the first detection result can be considered the same target object, so the object information of the historical optimization result can be used as the object information corresponding to the first detection result. As another example, in the case where no detection box of any historical optimization result coincides with the detection box of the first detection result, the target object indicated by the first detection result can be considered a newly detected target object in the target scene, and new object information can be generated to identify it. By determining the object information of the first detection result, a connection between the historical optimization result and the first detection result can be established, improving the accuracy of target detection.
In some possible implementations of the embodiments of the present disclosure, the historical optimization result of the target scene can be matched against the first detection result, and in the case where the first detection result matches the historical optimization result, the object information of the historical optimization result is determined as the object information of the first detection result.
In the embodiments of the present disclosure, the historical optimization result of the target scene can be matched against the first detection result; for example, the detection box of the historical optimization result can be matched against the detection box of the first detection result to determine their degree of matching. For a first detection result, the historical optimization result with the highest degree of matching that also exceeds a matching threshold can be determined as the historical optimization result matching that first detection result, and the object information of that historical optimization result is then used as the object information of the first detection result, yielding the first observation result of the target object; the first observation result may be the first detection result with updated object information. By matching the historical optimization result of the target scene against the first detection result, the connection between them can be determined, so that the first detection result can be further updated to obtain a first observation result with accurate object information.
Here, when matching the historical optimization result of the target scene against the first detection result, a first volume of the overlap between the detection box of a first detection result and the detection box of a historical optimization result can be determined, as well as the total volume jointly occupied by the two detection boxes; the ratio of the first volume to the total volume can then be used as the degree of matching between the historical optimization result and the first detection result. That is, the three-dimensional intersection over union (3D IoU) between the detection box of a first detection result and the detection box of a historical optimization result can be used as the degree of matching between the detection result and the historical optimization result.
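The volume ratio described here is the 3D intersection over union. A minimal sketch follows, illustrative only and limited to axis-aligned boxes (the disclosure does not restrict box orientation; the axis-aligned form is an assumption that keeps the example short).

import numpy as np

def iou_3d_axis_aligned(box_a, box_b):
    # Boxes are (min_xyz, max_xyz) pairs of length-3 arrays.
    min_a, max_a = box_a
    min_b, max_b = box_b
    # Overlap extent along each axis, clipped at zero when the boxes are disjoint.
    overlap = np.clip(np.minimum(max_a, max_b) - np.maximum(min_a, min_b), 0.0, None)
    inter = overlap.prod()                 # first volume: the overlapping part
    vol_a = (max_a - min_a).prod()
    vol_b = (max_b - min_b).prod()
    union = vol_a + vol_b - inter          # total volume jointly occupied
    return inter / union if union > 0 else 0.0

a = (np.zeros(3), np.ones(3))
b = (np.full(3, 0.5), np.full(3, 1.5))
print(iou_3d_axis_aligned(a, b))  # 0.125 / 1.875 ≈ 0.067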
In some implementations of the embodiments of the present disclosure, in the case where the first detection result does not match the historical optimization result, new object information is set for the first detection result.
In the embodiments of the present disclosure, if the degree of matching between the first detection result and every historical optimization result is below the matching threshold, the first detection result matches none of the historical optimization results, and can therefore be considered the detection result of a newly observed target object in the target scene, so new object information is set for the first detection result. In the case where the first detection result does not match any historical optimization result in the current scene, setting new object information for the first detection result allows it to correspond to a newly observed target object.
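Putting the two cases together (matched: inherit the object information; unmatched: set new object information), the assignment might be sketched as below. This is illustrative only; the greedy best-match rule and the 0.25 threshold are assumptions, the disclosure requiring only the highest matching degree above a matching threshold.

import itertools

_new_id = itertools.count(1000)  # illustrative source of fresh object IDs

def assign_object_info(detections, history, iou_fn, threshold=0.25):
    # detections: list of boxes for the current data frame.
    # history:    dict object_id -> historical optimization box.
    # Returns a list of (object_id, box) first observation results.
    observations = []
    for det in detections:
        best_id, best_iou = None, 0.0
        for obj_id, hist_box in history.items():
            iou = iou_fn(det, hist_box)
            if iou > best_iou:
                best_id, best_iou = obj_id, iou
        if best_id is not None and best_iou >= threshold:
            observations.append((best_id, det))        # matched: reuse object info
        else:
            observations.append((next(_new_id), det))  # unmatched: new object info
    return observations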
In a possible implementation, based on the historical optimization result of the target scene, in a case where it is determined that a target object not detected by the first detection result exists within the field of view of the current data frame, the historical optimization result of the undetected target object can be determined as the first observation result of the undetected target object in the current data frame.
In some possible implementations, each historical optimization result may be obtained by performing target detection based on historical data frames of the target scene; the same target object detected in multiple historical data frames may correspond to one historical optimization result, which may include the position information and object information of the target object, and the target objects present in the target scene can be determined from the historical optimization results of the historical data frames. When it is determined from the historical optimization results that a target object should be observable within the field of view of the current data frame, but the first detection result of the current data frame indicates that the target object was not detected in the current data frame, a missed detection can be considered to have occurred in the current data frame. The historical optimization result of the undetected target object can then be determined as the first observation result of that target object in the current data frame, reducing missed detections and greatly increasing the reliability of target detection.
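The carry-forward rule for missed detections can be sketched as follows (illustrative only); the visibility test in_view, which would be derived from the camera pose, is an assumption.

def add_missed_objects(observations, history, in_view):
    # observations: list of (object_id, box) built from the current detections.
    # history:      dict object_id -> historical optimization box.
    # in_view:      callable(box) -> bool, True if the box lies in the current field of view.
    seen = {obj_id for obj_id, _ in observations}
    for obj_id, hist_box in history.items():
        if obj_id not in seen and in_view(hist_box):
            # Reuse the historical optimization result as this frame's first observation.
            observations.append((obj_id, hist_box))
    return observations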
In step S13 above, the first observation result can be corrected to obtain the first correction result. Compared with the first observation result, the first correction result has more accurate position information, making target detection more accurate. The process of obtaining the first correction result is described below through a possible implementation.
In a possible implementation, the point cloud data of the historical optimization result of the same target object and the point cloud data corresponding to the first observation result can be merged to obtain merged point cloud data; then, based on the merged point cloud data, a first correction result that corrects the first observation result is obtained.
In some possible implementations, the historical optimization result and the first observation result belonging to the same target object can be determined according to the object information of the historical optimization result and the object information of the first observation result. Since the object information labels the target object, when the object information is the same, the historical optimization result and the first observation result can be considered to belong to the same target object. For the same target object, the point cloud data within the detection box of the historical optimization result and the point cloud data within the detection box of the first observation result can be acquired, and the point cloud data corresponding to the historical optimization result and the point cloud data corresponding to the first observation result can be merged, for example, by taking the union of the two, to obtain the merged point cloud data of one target object. The first observation result can then be corrected according to the merged point cloud data to obtain the first correction result of the target object. For example, the merged point cloud data of a target object can be input into a neural network, and the neural network corrects the position information of the first observation result and outputs the first correction result. In this way, the merged point cloud data of the same object is used to obtain a first correction result with more accurate position information, so that the historical information of the same target object (such as the position information of the historical optimization result) can be taken into account during target detection, improving detection accuracy.
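A minimal sketch of the merge step described above: keep the scene points falling inside each detection box and take their union. This is illustrative only; the axis-aligned in-box test is an assumption, and the correcting neural network itself is not sketched.

import numpy as np

def points_in_box(points, box):
    # points: (N, 3); box: (min_xyz, max_xyz), axis-aligned. Returns the points inside.
    min_c, max_c = box
    mask = np.all((points >= min_c) & (points <= max_c), axis=1)
    return points[mask]

def merged_cloud(scene_points, hist_box, obs_box):
    # Union of the point cloud segments of the historical optimization box and the
    # current first observation box of the same target object.
    merged = np.vstack([points_in_box(scene_points, hist_box),
                        points_in_box(scene_points, obs_box)])
    return np.unique(merged, axis=0)  # drop duplicated points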
Here, if the first observation result of every data frame is corrected and optimized, each data frame can correspond to one optimization result of a target object. Thus, when correcting and optimizing the first observation result of the current data frame, for the same target object, the point cloud data corresponding to the historical optimization result of the data frame immediately preceding the current data frame can be merged with the point cloud data corresponding to the first observation result, and the historical optimization result of the preceding data frame is used to correct the first observation result of the current data frame. Since the historical optimization result of the preceding data frame is the most recently stored one, it is more accurate than the historical optimization results corresponding to other historical data frames; using it to correct the first observation result of the current data frame therefore makes the resulting first correction result more accurate.
In some implementations, if the first observation results of only some of the collected data frames are corrected and optimized, for example, data frames selected at fixed intervals, then not every data frame corresponds to an optimization result of a target object. In this case, when correcting and optimizing the first observation result of the current data frame, for the same target object, the most recently stored historical optimization result of that target object can be selected to correct the first observation result of the current data frame.
To further improve the accuracy of target detection, the first correction result can be optimized after it is obtained. The process of further optimizing the first correction result is described below.
In a possible implementation, correction results of the target object can be acquired, where the correction results include the first correction result and second correction results, the second correction results being obtained by performing target detection based on historical data frames of the target scene. Based on target results among the correction results, the current optimization result of the target object can be determined.
In some possible implementations, the first correction result of the current data frame can be combined with the second correction results of the historical data frames to further optimize the first correction result. A second correction result may be obtained based on a second detection result of target detection performed on a historical data frame of the target scene, where the second detection result may be a historical detection result. The second correction results may be determined in the same manner as the first correction result, which will not be repeated here. Each historical data frame may correspond to one second correction result of a target object, and as data frames of the target scene continue to be collected, the same target object may correspond to a series of second correction results. To further improve the accuracy of target detection, correction results including the first correction result and the second correction results can be acquired, so that the target detection information of the historical data frames (the second correction results) can be used jointly. Then, based on the target results among the correction results, the current optimization result of the target object can be determined. For example, one or several correction results can be selected as target results from among the correction results of a target object, and a target result can be used as the current optimization result, or the average or median of multiple target results can be used as the current optimization result. Since the position of the target object may change very little, the correction results of the target object obtained from different data frames can be consistent, so that multiple correction results can be used to obtain the current optimization result of the target object, making target detection more accurate.
In some possible implementations of the embodiments of the present disclosure, errors between a first correction result among the correction results and each of multiple second correction results can be determined, where the first correction result is any one of the correction results and the second correction results are the correction results other than the first correction result. For any first correction result, the number of inliers corresponding to it is counted, where the number of inliers is the number of second correction results whose error with respect to the first correction result is smaller than an error threshold. The target results among the correction results are then determined according to the number of inliers corresponding to the first correction result.
The embodiments of the present disclosure provide an example of determining the target results among the correction results. For multiple correction results of a target object, any one correction result can be taken as the first correction result, and the correction results other than the first correction result can be taken as second correction results. For the first correction result of a target object, the errors between the first correction result and each of the multiple second correction results can be computed, and the number of inliers corresponding to the first correction result can be counted from these errors. For example, the error between the position information of the first correction result and that of a second correction result can be computed; if the error is smaller than the error threshold, the second correction result can be considered close to the first correction result and regarded as an inlier of the first correction result. The number of inliers of the first correction result, that is, the number of second correction results whose error with respect to the first correction result is smaller than the error threshold, serves as the number of inliers corresponding to the first correction result. After the number of inliers corresponding to each first correction result is determined, the target results among the correction results can be determined accordingly; for example, the first correction result with the largest number of inliers is determined as a target result. In this way, the current optimization result of the target object can be determined from the relatively accurate target results among the correction results, and correction results of lower accuracy are discarded, further improving the accuracy of target detection.
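The inlier counting described above can be computed directly. A sketch follows, illustrative only, taking the distance between box centers as the error measure (the disclosure leaves the exact error definition open).

import numpy as np

def select_target_results(corrections, err_threshold):
    # corrections: (K, 3) array of corrected box centers, one per data frame.
    # Pairwise errors between every candidate "first correction result" and the rest.
    errors = np.linalg.norm(corrections[:, None, :] - corrections[None, :, :], axis=-1)
    inlier_mask = errors < err_threshold   # row i marks the inliers of candidate i
    np.fill_diagonal(inlier_mask, False)   # a result is not its own inlier
    counts = inlier_mask.sum(axis=1)
    best = int(np.argmax(counts))          # candidate with the largest number of inliers
    keep = inlier_mask[best].copy()
    keep[best] = True                      # the best candidate is itself a target result
    return corrections[keep]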
In some possible implementations of the embodiments of the present disclosure, the first correction result with the largest number of inliers among the multiple first correction results is determined; the first correction result with the largest number of inliers, together with the second correction results whose errors with respect to it are smaller than the error threshold, are then determined as the target results among the correction results.
In the embodiments of the present disclosure, a second correction result whose error with respect to a first correction result is smaller than the error threshold can be an inlier of that first correction result, and the first correction result with the largest number of inliers is the one with the most inliers. When a first correction result of a target object has the most inliers, this can indicate that, given that the position of the target object changes little, this first correction result and its inliers are closer to the true position of the target object, so this first correction result and its inliers can be determined as the target results among the correction results of the target object.
In some possible implementations, the current optimization result of a target object can be determined based on the multiple target results among its correction results, so that the first correction result of the target object can be further optimized and the resulting current optimization result can indicate the position of the target object more accurately. For example, an optimal value can be estimated from the position information of the target object in each target result such that the optimal value satisfies a particular condition, and this optimal value can serve as the current optimization result of the target object.
In the embodiments of the present disclosure, for a target object, when estimating a current optimization result from the position information of the target object in each target result, the sum of the distances between the current optimization result and the multiple target results can be minimized. For example, the current optimization result can be treated as an unknown variable, an equation for the sum of squared errors between the unknown variable and each target result can be established, and the value of the unknown variable that minimizes this sum can be solved for; the solved value can serve as the current optimization result of the target object. The resulting current optimization result can minimize the sum of the distances to the position information of the multiple target results. In this way, the current optimization result can serve as the final detection result of the target object, improving the accuracy of target detection.
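For the sum of squared errors, the minimization has a closed form: the minimizer is the mean of the target results. A sketch follows, under the assumption that each target result is represented by its box center (minimizing a sum of unsquared distances would instead call for the geometric median).

import numpy as np

def current_optimization_result(target_results: np.ndarray) -> np.ndarray:
    # target_results: (M, 3) centers of the target results.
    # argmin_x sum_i ||x - t_i||^2 is attained at the arithmetic mean of the t_i.
    return target_results.mean(axis=0)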
In the embodiments of the present disclosure, after the current optimization result of a target object is obtained, the current optimization result of the target object can be saved, or the stored historical optimization result of the target object can be updated to the obtained current optimization result.
The target detection solution provided by the embodiments of the present disclosure is described below through an embodiment. FIG. 2 shows a flowchart of an example of a target detection method according to an embodiment of the present disclosure.
Step S201: acquiring the 3D detection boxes (first detection results) of the current data frame of the target scene;
Step S202: matching the historical best-estimate boxes (historical optimization results) of known objects in the target scene against the 3D detection boxes of the current data frame to obtain the current observation boxes (first observation results) of the target objects in the current data frame;
Step S203: for each target object, segmenting the point cloud data of the target scene using the best-estimate box of the target object and the current observation box of the current data frame, and retaining the point cloud data within the historical best-estimate box and/or the current observation box of the target object;
Step S204: inputting the point cloud data within the best-estimate box and/or the current observation box of each target object, together with the current observation box corresponding to the target object, into a neural network, and using the neural network to correct the current observation box of each target object to obtain the current corrected box (first correction result) of each target object in the current data frame;
Step S205: jointly optimizing the current corrected box and the historical corrected boxes of each target object to obtain the current best-estimate box (current optimization result) of each target object.
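Gathering steps S201 to S205 into a single per-frame loop, an illustrative sketch follows; the helper names are the ones assumed in the sketches above, and the correcting network is reduced to a placeholder callable.

def process_frame(detections, scene_points, history, refine_net, in_view):
    # S201/S202: match detections against the historical best-estimate boxes.
    observations = assign_object_info(detections, history, iou_3d_axis_aligned)
    observations = add_missed_objects(observations, history, in_view)

    corrected = {}
    for obj_id, obs_box in observations:
        # S203: segment the scene cloud with the historical and current boxes
        # (a new object has no history yet, so its own box is used twice).
        cloud = merged_cloud(scene_points, history.get(obj_id, obs_box), obs_box)
        # S204: correct the current observation box from the merged cloud.
        corrected[obj_id] = refine_net(cloud, obs_box)

    # S205: joint optimization against the historical corrected boxes would follow,
    # e.g. select_target_results(...) then current_optimization_result(...).
    return corrected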
The target detection solution provided by the embodiments of the present disclosure can improve the accuracy of target detection; even when occlusion or truncation exists in the target scene, the obtained detection results are highly robust, improving the stability of target detection.
It can be understood that the method embodiments mentioned above in the embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from their principles and logic; due to space limitations, details are not repeated in the embodiments of the present disclosure.
In addition, the embodiments of the present disclosure further provide an apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any target detection method provided by the embodiments of the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding descriptions in the method section, which are not repeated here.
Those skilled in the art can understand that, in the above methods of the detailed description, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
FIG. 3 shows a block diagram of a target detection apparatus according to an embodiment of the present disclosure. As shown in FIG. 3, the apparatus includes:
an acquisition module 31, configured to acquire a first detection result obtained by performing target detection on a current data frame of a target scene;
a determination module 32, configured to update the first detection result based on a historical optimization result of the target scene to obtain a first observation result of a target object in the current data frame;
a correction module 33, configured to correct the first observation result according to point cloud data corresponding to the first observation result to obtain a first correction result of the target object.
In some possible implementations, the determination module 32 is configured to: determine object information of the first detection result based on the historical optimization result of the target scene, where the object information is used to identify the target object; and update the first detection result according to the object information of the first detection result to obtain the first observation result of the target object in the current data frame.
In some possible implementations, the determination module 32 is configured to: match the historical optimization result of the target scene against the first detection result; and in a case where the first detection result matches the historical optimization result, determine the object information of the historical optimization result as the object information of the first detection result.
In some possible implementations, the determination module 32 is configured to set new object information for the first detection result in a case where the first detection result does not match the historical optimization result.
In some possible implementations, the determination module 32 is configured to: determine a first volume of the overlap between the detection box of the first detection result and the detection box of a historical optimization result, and determine the total volume jointly occupied by the detection box of the first detection result and the detection box of that historical optimization result; and determine the degree of matching between the first detection result and that historical optimization result according to the ratio of the first volume to the total volume.
In some possible implementations, the determination module 32 is configured to: based on the historical optimization result of the target scene, in a case where it is determined that the current data frame contains a target object not detected by the first detection result, determine the historical optimization result of the undetected target object as the first observation result of the undetected target object in the current data frame.
In some possible implementations, the correction module 33 is configured to: merge the point cloud data corresponding to the historical optimization result of a same target object with the point cloud data corresponding to the first observation result to obtain merged point cloud data; and obtain, based on the merged point cloud data, a first correction result that corrects the first observation result.
In some possible implementations, the correction module 33 is configured to, for the same target object, merge the point cloud data corresponding to the historical optimization result of the data frame immediately preceding the current data frame with the point cloud data corresponding to the first observation result.
In some possible implementations, the apparatus further includes an optimization module, configured to: acquire correction results of the target object, where the correction results include the first correction result and second correction results, the second correction results being obtained by performing target detection based on historical data frames of the target scene; and determine a current optimization result of the target object based on target results among the correction results.
In some possible implementations, the optimization module is further configured to: determine errors between a first correction result among the correction results and each of multiple second correction results, where the first correction result is any one of the correction results and the second correction results are the corrected observation boxes other than the first correction result; count the number of inliers corresponding to the first correction result, where the number of inliers is the number of second correction results whose error with respect to the first correction result is smaller than an error threshold; and determine the target results among the correction results according to the number of inliers corresponding to the first correction result.
In some possible implementations, the optimization module is configured to: determine, among multiple first correction results, the first correction result with the largest number of inliers; and determine, as the target results among the correction results, the first correction result with the largest number of inliers together with the second correction results whose errors with respect to it are smaller than the error threshold.
In some possible implementations, the sum of the errors between the current optimization result and the multiple target results is minimized.
In some embodiments, the functions of, or the modules included in, the apparatus provided by the embodiments of the present disclosure can be used to execute the methods described in the method embodiments above; for specific implementations, refer to the descriptions of the method embodiments above, which are not repeated here for brevity.
FIG. 4 is a block diagram of an electronic device 800 according to an exemplary embodiment. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to FIG. 4, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communication, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the above methods. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components; for example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support the operation of the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front and rear cameras may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing state assessments of various aspects of the electronic device 800. For example, the sensor component 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor component 814 can also detect a change in position of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment of the present disclosure, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment of the present disclosure, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In exemplary embodiments, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components to perform the above methods.
In exemplary embodiments, a non-volatile computer-readable storage medium is also provided, such as the memory 804 comprising computer program instructions executable by the processor 820 of the electronic device 800 to complete the above methods.
The embodiments of the present disclosure further provide an electronic device, including: a processor; and a memory for storing instructions executable by the processor; where the processor is configured to perform the above method.
The electronic device may be provided as a terminal, a server, or a device in another form.
FIG. 5 is a block diagram of an electronic device 1900 according to an exemplary embodiment. For example, the electronic device 1900 may be provided as a server. Referring to FIG. 5, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932, for storing instructions executable by the processing component 1922, such as applications. An application stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute instructions to perform the above methods.
The electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 can operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system (Mac OS X™) introduced by Apple, the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In exemplary embodiments, a non-volatile computer-readable storage medium is also provided, such as the memory 1932 comprising computer program instructions executable by the processing component 1922 of the electronic device 1900 to complete the above methods.
The embodiments of the present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions thereon for causing a processor to implement various aspects of the embodiments of the present disclosure.
The computer-readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove with instructions stored thereon, and any suitable combination of the foregoing. The computer-readable storage medium as used here is not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (for example, light pulses through a fiber-optic cable), or electrical signals transmitted through a wire.
The computer-readable program instructions described here can be downloaded from a computer-readable storage medium to respective computing/processing devices, or to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for performing the operations of the embodiments of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry can execute the computer-readable program instructions to implement various aspects of the embodiments of the present disclosure.
Various aspects of the embodiments of the present disclosure are described here with reference to flowcharts and/or block diagrams of methods, apparatuses (systems), and computer program products according to the embodiments of the present disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or the other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or another device to operate in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions can also be loaded onto a computer, another programmable data processing apparatus, or another device, so that a series of operational steps are performed on the computer, the other programmable data processing apparatus, or the other device to produce a computer-implemented process, such that the instructions executed on the computer, the other programmable data processing apparatus, or the other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions, and operations of systems, methods, and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two consecutive blocks may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
The embodiments of the present disclosure have been described above; the foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein are chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
INDUSTRIAL APPLICABILITY
The embodiments of the present disclosure relate to a target detection method and apparatus, an electronic device, and a storage medium, where the method includes: acquiring a first detection result obtained by performing target detection on a current data frame of a target scene; updating the first detection result based on a historical optimization result of the target scene to obtain a first observation result of a target object in the current data frame; and correcting the first observation result according to point cloud data corresponding to the first observation result to obtain a first correction result of the target object.

Claims (16)

  1. A target detection method, the method being executed by an electronic device and comprising:
    acquiring a first detection result obtained by performing target detection on a current data frame of a target scene;
    updating the first detection result based on a historical optimization result of the target scene to obtain a first observation result of a target object in the current data frame;
    correcting the first observation result according to point cloud data corresponding to the first observation result to obtain a first correction result of the target object.
  2. The method according to claim 1, wherein updating the first detection result based on the historical optimization result of the target scene to obtain the first observation result of the target object in the current data frame comprises:
    determining object information of the first detection result based on the historical optimization result of the target scene, wherein the object information is used to identify the target object;
    updating the first detection result according to the object information of the first detection result to obtain the first observation result of the target object in the current data frame.
  3. The method according to claim 2, wherein determining the object information of the first detection result based on the historical optimization result of the target scene comprises:
    matching the historical optimization result of the target scene against the first detection result;
    in a case where the first detection result matches the historical optimization result, determining the object information of the historical optimization result as the object information of the first detection result.
  4. The method according to claim 3, wherein determining the object information of the first detection result based on the historical optimization result of the target scene comprises:
    in a case where the first detection result does not match the historical optimization result, setting new object information for the first detection result.
  5. The method according to claim 3 or 4, wherein matching the historical optimization result of the target scene against the first detection result comprises:
    determining a first volume of an overlap between a detection box of the first detection result and a detection box of a historical optimization result, and determining a total volume jointly occupied by the detection box of the first detection result and the detection box of that historical optimization result;
    determining a degree of matching between the first detection result and that historical optimization result according to a ratio of the first volume to the total volume.
  6. The method according to any one of claims 1 to 5, wherein updating the first detection result based on the historical optimization result of the target scene to obtain the first observation result of the target object in the current data frame comprises:
    based on the historical optimization result of the target scene, in a case where it is determined that the current data frame contains a target object not detected by the first detection result, determining the historical optimization result of the undetected target object as the first observation result of the undetected target object in the current data frame.
  7. The method according to any one of claims 1 to 6, wherein correcting the first observation result according to the point cloud data corresponding to the first observation result to obtain the first correction result of the target object comprises:
    merging point cloud data corresponding to a historical optimization result of a same target object with the point cloud data corresponding to the first observation result to obtain merged point cloud data;
    obtaining, based on the merged point cloud data, the first correction result that corrects the first observation result.
  8. The method according to claim 7, wherein merging the point cloud data corresponding to the historical optimization result of the same target object with the point cloud data corresponding to the first observation result comprises: for the same target object, merging point cloud data corresponding to a historical optimization result of a data frame immediately preceding the current data frame with the point cloud data corresponding to the first observation result.
  9. The method according to any one of claims 1 to 8, wherein the method further comprises:
    acquiring correction results of the target object, wherein the correction results comprise the first correction result and second correction results, the second correction results being obtained by performing target detection based on historical data frames of the target scene;
    determining a current optimization result of the target object based on target results among the correction results.
  10. The method according to claim 9, wherein the method further comprises:
    determining errors between a first correction result among the correction results and each of a plurality of second correction results, wherein the first correction result is any one of the correction results, and the second correction results are corrected observation boxes other than the first correction result;
    counting a number of inliers corresponding to the first correction result, wherein the number of inliers is a number of second correction results whose errors with respect to the first correction result are smaller than an error threshold;
    determining the target results among the correction results according to the number of inliers corresponding to the first correction result.
  11. The method according to claim 10, wherein determining the target results among the correction results according to the number of inliers corresponding to the first correction result comprises:
    determining, among a plurality of first correction results, a first correction result with a largest number of inliers;
    determining, as the target results among the correction results, the first correction result with the largest number of inliers and second correction results whose errors with respect to the first correction result with the largest number of inliers are smaller than the error threshold.
  12. The method according to any one of claims 9 to 11, wherein a sum of errors between the current optimization result and a plurality of the target results is minimized.
  13. A target detection apparatus, comprising:
    an acquisition module, configured to acquire a first detection result obtained by performing target detection on a current data frame of a target scene;
    a determination module, configured to update the first detection result based on a historical optimization result of the target scene to obtain a first observation result of a target object in the current data frame;
    a correction module, configured to correct the first observation result according to point cloud data corresponding to the first observation result to obtain a first correction result of the target object.
  14. An electronic device, comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to invoke the instructions stored in the memory to perform the target detection method according to any one of claims 1 to 12.
  15. A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the target detection method according to any one of claims 1 to 12.
  16. A computer program, comprising computer-readable code, wherein when the computer-readable code runs in an electronic device,
    a processor in the electronic device executes instructions for implementing the method according to any one of claims 1 to 12.
PCT/CN2021/103286 2020-07-24 2021-06-29 Target detection method and apparatus, electronic device, and storage medium WO2022017140A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010725039.5 2020-07-24
CN202010725039.5A CN111860373B (zh) 2020-07-24 Target detection method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022017140A1 true WO2022017140A1 (zh) 2022-01-27

Family

ID=72949590

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/103286 WO2022017140A1 (zh) 2020-07-24 2021-06-29 目标检测方法及装置、电子设备和存储介质

Country Status (2)

Country Link
CN (1) CN111860373B (zh)
WO (1) WO2022017140A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860373B (zh) * 2020-07-24 2022-05-20 浙江商汤科技开发有限公司 目标检测方法及装置、电子设备和存储介质
CN112330717B (zh) * 2020-11-11 2023-03-10 北京市商汤科技开发有限公司 目标跟踪方法及装置、电子设备和存储介质
CN112528889B (zh) * 2020-12-16 2024-02-06 中国平安财产保险股份有限公司 Ocr信息检测修正方法、装置、终端及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108053427A (zh) * 2017-10-31 2018-05-18 深圳大学 一种基于KCF与Kalman的改进型多目标跟踪方法、系统及装置
CN109636829A (zh) * 2018-11-24 2019-04-16 华中科技大学 一种基于语义信息和场景信息的多目标跟踪方法
CN110827202A (zh) * 2019-11-07 2020-02-21 上海眼控科技股份有限公司 目标检测方法、装置、计算机设备和存储介质
WO2020108311A1 (zh) * 2018-11-29 2020-06-04 北京市商汤科技开发有限公司 目标对象3d检测方法、装置、介质及设备
CN111860373A (zh) * 2020-07-24 2020-10-30 浙江商汤科技开发有限公司 目标检测方法及装置、电子设备和存储介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256506B (zh) * 2018-02-14 2020-11-24 北京市商汤科技开发有限公司 一种视频中物体检测方法及装置、计算机存储介质
CN110555339A (zh) * 2018-05-31 2019-12-10 北京嘀嘀无限科技发展有限公司 一种目标检测方法、系统、装置及存储介质
CN108734360B (zh) * 2018-06-19 2021-09-07 哈尔滨工业大学 一种基于修正的elm预测模型多维遥测数据智能判读方法


Also Published As

Publication number Publication date
CN111860373B (zh) 2022-05-20
CN111860373A (zh) 2020-10-30

Similar Documents

Publication Publication Date Title
WO2022017140A1 (zh) 目标检测方法及装置、电子设备和存储介质
WO2021051857A1 (zh) 目标对象匹配方法及装置、电子设备和存储介质
US9953506B2 (en) Alarming method and device
CN111105454B (zh) 一种获取定位信息的方法、装置及介质
CN111551191B (zh) 传感器外参数标定方法及装置、电子设备和存储介质
CN107692997B (zh) 心率检测方法及装置
WO2022043741A1 (zh) 网络训练、行人重识别方法及装置、存储介质、计算机程序
WO2020181728A1 (zh) 图像处理方法及装置、电子设备和存储介质
JP2016531361A (ja) 画像分割方法、画像分割装置、画像分割デバイス、プログラム及び記録媒体
TWI718631B (zh) 人臉圖像的處理方法及裝置、電子設備和儲存介質
CN111323007B (zh) 定位方法及装置、电子设备和存储介质
WO2022021872A1 (zh) 目标检测方法及装置、电子设备和存储介质
WO2022134475A1 (zh) 点云地图构建方法及装置、电子设备、存储介质和程序
EP3147802A1 (en) Method and apparatus for processing information
WO2015196715A1 (zh) 图像重定位方法、装置及终端
WO2023273498A1 (zh) 深度检测方法及装置、电子设备和存储介质
CN111563138A (zh) 定位方法及装置、电子设备和存储介质
WO2023273499A1 (zh) 深度检测方法及装置、电子设备和存储介质
WO2022222379A1 (zh) 一种位置确定方法及装置、电子设备和存储介质
WO2022179080A1 (zh) 定位方法、装置、电子设备、存储介质、程序及产品
CN112837372A (zh) 数据生成方法及装置、电子设备和存储介质
WO2023155393A1 (zh) 特征点匹配方法、装置、电子设备、存储介质和计算机程序产品
WO2022110801A1 (zh) 数据处理方法及装置、电子设备和存储介质
WO2022237071A1 (zh) 定位方法及装置、电子设备、存储介质和计算机程序
CN112949568A (zh) 人脸和人体匹配的方法及装置、电子设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21847104

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21847104

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 21847104

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 31/07/2023)