WO2020243962A1 - Object detection method, electronic device and movable platform - Google Patents
Object detection method, electronic device and movable platform
- Publication number
- WO2020243962A1 (PCT/CN2019/090393)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- dimensional
- compensation value
- pixel
- information
- candidate
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G06V20/647—Three-dimensional objects by matching two-dimensional images to three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Definitions
- the embodiments of the present application relate to the technical field of movable platforms, and in particular, to an object detection method, electronic equipment, and a movable platform.
- Obstacle detection is one of the key technologies of an automatic driving system. It uses cameras, lidar, millimeter-wave radar, and other sensors mounted on the vehicle to detect obstacles in the road scene, such as vehicles and pedestrians.
- In an autonomous driving scene, the autonomous driving system not only needs to obtain the position of an obstacle on the image, but also needs to predict the three-dimensional positioning information of the obstacle. The accuracy of the three-dimensional positioning of obstacles directly affects the safety and reliability of autonomous vehicles.
- the embodiments of the present application provide an object detection method, electronic equipment and a movable platform, which can reduce the cost of object detection.
- an object detection method including:
- an electronic device including:
- Memory used to store computer programs
- the processor is configured to execute the computer program, specifically:
- an embodiment of the present application provides a movable platform, including: the electronic device provided in the second aspect of the embodiment of the present application.
- an embodiment of the present application provides a computer storage medium in which a computer program is stored; when executed, the computer program implements the object detection method provided in the first aspect.
- the embodiments of the present application provide an object detection method, electronic equipment and a movable platform.
- the sparse point cloud data and the image are projected into a target coordinate system to obtain the data to be processed; three-dimensional detection is then performed on the data to be processed to obtain detection results of the objects included in the scene to be detected. Since only sparse point cloud data needs to be acquired, the required point cloud density is reduced, which reduces the complexity of and requirements on the electronic equipment, and thus the cost of object detection.
- FIG. 1 is a schematic diagram of an application scenario involved in an embodiment of this application
- FIG. 2 is a flowchart of an object detection method provided by an embodiment of the application
- FIG. 3 is another flowchart of an object detection method provided by an embodiment of the application.
- FIG. 4 is another flowchart of an object detection method provided by an embodiment of the application.
- FIG. 5 is another flowchart of an object detection method provided by an embodiment of the application.
- FIG. 6 is another flowchart of an object detection method provided by an embodiment of the application.
- FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
- FIG. 8 is a schematic structural diagram of a lidar provided by an embodiment of the application.
- words such as “first” and “second” are used to distinguish identical or similar items with substantially the same function and effect. Those skilled in the art can understand that words such as “first” and “second” do not limit the quantity or order of execution, nor do they indicate that the items are necessarily different.
- the embodiments of the present application can be applied to any field that needs to detect objects.
- it is applied to the field of intelligent driving such as automatic driving and assisted driving, which can detect obstacles such as vehicles and pedestrians in road scenes.
- it can be applied to the field of drones, and can detect obstacles in the drone flight scene.
- Another example is the security field, which detects objects entering a designated area.
- the object detection method provided in the embodiments of the present application can be implemented with a low-complexity neural network and, while ensuring the accuracy of object detection, the detection scheme is portable across multiple platforms.
- FIG. 1 is a schematic diagram of an application scenario involved in an embodiment of this application.
- a smart driving vehicle includes a detection device.
- the detection equipment can identify and detect objects in the lane ahead (such as falling rocks, dropped debris, dead branches, pedestrians, vehicles, etc.), obtain detection information such as the three-dimensional position, orientation and three-dimensional size of the object, and plan the intelligent driving state based on the detection information, such as changing lanes, decelerating, or stopping.
- the detection equipment may include radar, ultrasonic detection equipment, Time Of Flight (TOF) ranging detection equipment, visual detection equipment, laser detection equipment, image sensors, etc., and combinations thereof.
- the image sensor may be a camera, video camera, etc.
- the radar may be a general-purpose lidar or a specific lidar that meets the requirements of a specific scenario, such as a rotating scanning multi-line lidar with multiple transmitters and multiple receivers, etc.
- FIG. 1 is a schematic diagram of an application scenario of this application, and the application scenario of the embodiment of this application includes but is not limited to that shown in FIG. 1.
- FIG. 2 is a flowchart of an object detection method provided by an embodiment of the application.
- the execution subject may be an electronic device.
- the object detection method provided in this embodiment includes:
- S201 Acquire sparse point cloud data and images of the scene to be detected.
- point cloud data refers to a collection of point data on the surface of an object obtained by a measuring device.
- point cloud data can be divided into sparse point cloud data and dense point cloud data. For example, it can be divided according to the distance between points and the number of points. When the distance between points is relatively large and the number of points is relatively small, it can be called sparse point cloud data. When the distance between points is relatively small and the number of points is relatively large, it can be called dense point cloud data or high-density point cloud data.
- the acquisition of dense point clouds requires a lidar with a large number of beams scanning at high frequencies.
- such a high-beam-count lidar incurs higher operating costs, and keeping the lidar at a continuously high scanning frequency shortens its service life.
- Other methods of obtaining dense point clouds, such as the point cloud stitching of multiple single-line lidars, require complex algorithms, and the system's robustness is relatively low.
- Since only sparse point cloud data needs to be obtained, compared with obtaining high-density point cloud data, the difficulty of acquiring the point cloud data is reduced, as are the requirements on and cost of the equipment. Therefore, in everyday application scenarios, sparse point clouds offer better practical value than dense point clouds.
- this embodiment does not limit the scene to be detected, and it may be different according to the type of electronic device and different application scenarios.
- the scene to be detected may be the road ahead of the vehicle.
- the scene to be detected may be the flight environment when the drone is flying.
- acquiring sparse point cloud data and images of the scene to be detected may include:
- the sparse point cloud data is acquired by a radar sensor, and the image is acquired by an image sensor.
- the image sensor can be a camera, a video camera, and so on.
- the number of image sensors can be one.
- the number of radar sensors can be one or more than one.
- acquiring sparse point cloud data through radar sensors may include:
- the first sparse point cloud data corresponding to each radar sensor is projected into the target radar coordinate system to obtain the sparse point cloud data.
- the radar coordinate systems corresponding to multiple radar sensors have a certain conversion relationship, and the conversion relationship may be determined by the external parameters of the radar sensor, which is not limited in this embodiment.
- the external parameters of the radar sensor include but are not limited to the arrangement of the radar sensor, position, orientation angle, carrier speed, acceleration, etc.
- the first sparse point cloud data collected by each radar sensor can be projected into the target radar coordinate system to obtain the sparse point cloud data in the target radar coordinate system.
- the target radar coordinate system may be a radar coordinate system corresponding to any one of the multiple radar sensors.
- the target radar coordinate system is another radar coordinate system, and the target radar coordinate system has a certain conversion relationship with the radar coordinate system corresponding to each radar sensor.
- the target radar coordinate system may be the radar coordinate system 1 corresponding to the radar sensor 1.
- the sparse point cloud data may include: the collection formed by the sparse point cloud data collected by radar sensor 2 projected into radar coordinate system 1, together with the sparse point cloud data collected by radar sensor 1.
- data deduplication processing is performed.
- the sparse point cloud data may include the three-dimensional position coordinates of each point, which may be marked as (x, y, z).
- the sparse point cloud data may also include the laser reflection intensity value of each point.
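- as an illustration only (not a limitation of this application), the multi-radar merging described above can be sketched as follows, assuming that each radar sensor's external parameters are available as a 4×4 homogeneous transform into the target radar coordinate system; the function and variable names are assumptions of the sketch.

```python
# Minimal sketch (not the claimed implementation): merge the first sparse point
# clouds of several radar sensors into one target radar coordinate system.
import numpy as np

def merge_radar_point_clouds(clouds, extrinsics):
    """clouds: list of (N_i, 4) arrays holding (x, y, z, intensity) per sensor.
    extrinsics: list of (4, 4) transforms from each sensor frame to the target
    radar coordinate system (identity for the sensor chosen as the target)."""
    merged = []
    for pts, T in zip(clouds, extrinsics):
        xyz1 = np.hstack([pts[:, :3], np.ones((pts.shape[0], 1))])  # homogeneous coords
        xyz_target = (T @ xyz1.T).T[:, :3]                          # into target frame
        merged.append(np.hstack([xyz_target, pts[:, 3:4]]))         # keep intensity
    merged = np.vstack(merged)
    # simple de-duplication of (near-)identical points, as mentioned above
    merged = np.unique(np.round(merged, 3), axis=0)
    return merged
```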
- S202 Project the sparse point cloud data and the image into the target coordinate system, and obtain the data to be processed.
- the target coordinate system may be an image coordinate system corresponding to the image sensor.
- the target coordinate system may also be other coordinate systems, which is not limited in this embodiment.
- projecting sparse point cloud data and images into the target coordinate system to obtain the data to be processed may include:
- the sparse point cloud data and image are projected into the image coordinate system to obtain the data to be processed.
- the target coordinate system is the image coordinate system corresponding to the image sensor.
- the sparse point cloud data can be accurately mapped and matched with some pixels in the image, and the sparse point cloud data outside the image coordinate system can be filtered. For example, suppose the length of the image is H and the width is W. Then, by projecting the sparse point cloud data and the image into the image coordinate system, the sparse point cloud data outside the H ⁇ W range can be filtered out to obtain the data to be processed.
- the data to be processed may include: the coordinate value and reflectance of each point in the target coordinate system where the sparse point cloud data is projected, and the coordinate value of the pixel in the image in the target coordinate system.
- not every point in the sparse point cloud data has a matching pixel in the image, and not every pixel in the image has a matching point.
- for a pixel that matches a point in the sparse point cloud data, the reflectivity of the corresponding point in the target coordinate system can be the laser reflection intensity value of that point.
- for a pixel with no matching point, the reflectivity of the corresponding point in the target coordinate system can be set to zero.
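- the projection in S202 can be sketched as follows, assuming a pinhole camera model with a known 3×3 intrinsic matrix K and a known 4×4 radar-to-camera extrinsic transform; points outside the H×W image are filtered out and pixels without a matching point keep a reflectivity of zero, as described above. This is a minimal sketch of one possible representation of the data to be processed, not the claimed implementation.

```python
import numpy as np

def build_data_to_be_processed(points, K, T_cam_lidar, H, W):
    """points: (N, 4) array of (x, y, z, reflectivity) in the target radar frame.
    Returns an (H, W, 2) per-pixel map of (depth, reflectivity)."""
    xyz1 = np.hstack([points[:, :3], np.ones((len(points), 1))])
    cam = (T_cam_lidar @ xyz1.T).T[:, :3]                 # into the camera frame
    refl = points[:, 3]
    front = cam[:, 2] > 0                                 # keep points in front of the camera
    cam, refl = cam[front], refl[front]
    uv = (K @ cam.T).T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)      # filter points outside H x W
    data = np.zeros((H, W, 2), dtype=np.float32)          # unmatched pixels stay at zero
    data[v[inside], u[inside], 0] = cam[inside, 2]        # depth channel
    data[v[inside], u[inside], 1] = refl[inside]          # reflectivity channel
    return data
```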
- S203 Perform three-dimensional detection on the data to be processed, and obtain a detection result of objects included in the scene to be detected.
- the object information may include at least one of the following: three-dimensional position information, orientation information, three-dimensional size information, and depth value of the object.
- the object detection method provided in this embodiment can obtain detection results of objects included in the scene to be detected based on the sparse point cloud data and images by acquiring sparse point cloud data and images of the scene to be detected. Since only sparse point cloud data needs to be acquired, the density of the point cloud data is reduced, thus reducing the complexity and requirements of electronic equipment, and reducing the cost of object detection.
- FIG. 3 is another flowchart of the object detection method provided by the embodiment of the application. As shown in FIG. 3, in the above S203, performing three-dimensional detection on the data to be processed and obtaining the detection result of the objects included in the scene to be detected may include:
- the basic network model may be pre-trained and used to output feature maps according to the data to be processed. It should be noted that this embodiment does not limit the implementation of the basic network model, and different neural network models may be used according to actual needs, for example, a convolutional neural network model.
- the basic network model can include several layers of convolution and pooling operations according to actual needs, and finally output a feature map.
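- since this embodiment does not limit the implementation of the basic network model, the following is only an assumed minimal convolution-and-pooling feature extractor of the kind described, written with PyTorch; the layer counts and channel widths are illustrative assumptions.

```python
import torch.nn as nn

class BasicNetwork(nn.Module):
    """Illustrative backbone: a few convolution and pooling layers that map the
    data to be processed to a feature map."""
    def __init__(self, in_channels=5):            # e.g. RGB + depth + reflectivity (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, x):                         # x: (B, C, H, W) data to be processed
        return self.features(x)                   # feature map for the candidate area network
```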
- S302 Input the feature map into the candidate area network model, and obtain a two-dimensional frame of the candidate object.
- the candidate area network model may be pre-trained and used to output two-dimensional boxes of candidate objects according to the feature map. It should be noted that this embodiment does not limit the implementation of the candidate area network model, and different neural network models may be used according to actual needs, for example, a convolutional neural network model.
- the two-dimensional frames of the candidate objects correspond to the objects included in the scene to be detected. Each object included in the scene to be detected may correspond to the two-dimensional boxes of multiple candidate objects.
- the specific types of objects are not distinguished at this stage. For example, assuming that the objects included in the scene to be detected are two vehicles and one pedestrian, 100 two-dimensional boxes of candidate objects may be obtained. The two vehicles and the pedestrian then jointly correspond to those 100 candidate boxes; the subsequent steps determine which object each of the 100 candidate boxes corresponds to.
- S303 Determine objects included in the scene to be detected according to the two-dimensional frame of the candidate object, and obtain a compensation value of the information of the object.
- the object included in the scene to be detected can be determined according to the two-dimensional frame of the candidate object, and the compensation value of the object information can be obtained.
- the compensation value of the object information may include but is not limited to at least one of the following: the compensation value of the orientation of the object, the compensation value of the three-dimensional position information of the object, the compensation value of the three-dimensional size of the object, and the compensation value of the two-dimensional frame of the object.
- the compensation value of the orientation of the object is the difference between the actual value of the orientation of the object and the preset orientation.
- the compensation value of the three-dimensional position information of the object is the difference between the actual value of the three-dimensional position of the object and the preset three-dimensional position.
- the compensation value of the three-dimensional size of the object is the difference between the actual value of the three-dimensional size of the object and the preset three-dimensional size.
- the compensation value of the two-dimensional frame of the object is the difference between the actual value of the two-dimensional frame of the object and the preset value.
- this embodiment does not limit the specific values of the preset orientation, preset three-dimensional position, preset three-dimensional size, and preset values of the two-dimensional frame of the object.
- the preset three-dimensional position may be the three-dimensional position of the center point of the chassis of the vehicle.
- the preset three-dimensional size can be different according to different vehicle models.
- S304 Acquire the information of the object according to the compensation value of the information of the object.
- the data to be processed is sequentially input into the basic network model and the candidate area network model to obtain the two-dimensional frames of the candidate objects; the objects included in the scene to be detected and the compensation values of their information are then determined according to those two-dimensional frames, and the information of each object is obtained according to its compensation values.
- obtaining the compensation value of the object information first is easy to implement and has higher accuracy, which improves the accuracy of object detection.
- FIG. 4 is another flowchart of the object detection method provided by the embodiment of the application.
- inputting the feature map into the candidate area network model to obtain the two-dimensional frame of the candidate object may include:
- S401 Acquire the probability that each pixel in the image belongs to the object according to the feature map.
- S403 Obtain a two-dimensional frame of the candidate object according to the probability that the first pixel belongs to the object and the two-dimensional frame of the object corresponding to the first pixel.
- the resolution of the image is 100*50, that is, there are 5000 pixels.
- the probability of each of the 5000 pixels belonging to the object can be obtained.
- the probability that the pixel 1 belongs to the object is P1
- the pixel 1 is determined to belong to the object according to the probability P1.
- the pixel 1 can be called the first pixel, and the two-dimensional frame 1 of the object corresponding to the pixel 1 can be obtained.
- the probability that the pixel 2 belongs to the object is P2, and it is determined that the pixel 2 does not belong to the object according to the probability P2.
- the probability that the pixel 3 belongs to the object is P3. According to the probability P3, it can be determined that the pixel 3 belongs to the object.
- the pixel 3 can be called the first pixel, and the two-dimensional frame 3 of the object corresponding to pixel 3 can be obtained. Assuming that, according to the probabilities of the 5000 pixels, 200 first pixels are determined, then 200 two-dimensional object boxes are obtained. These are then further screened according to the probability that each of the 200 first pixels belongs to an object and the two-dimensional boxes of the objects corresponding to those 200 first pixels, and the two-dimensional boxes of the candidate objects are obtained from among the 200 boxes. For example, 50 two-dimensional boxes of candidate objects may finally be obtained.
- in this way, the two-dimensional boxes of the objects corresponding to a subset of the pixels are obtained first, and these boxes are then screened again to determine the two-dimensional boxes of the candidate objects, which improves the accuracy of obtaining the candidate boxes.
- determining whether the pixel belongs to the object according to the probability that the pixel belongs to the object may include:
- if the probability that the pixel belongs to an object is greater than or equal to the preset value, it is determined that the pixel belongs to the object.
- if the probability that the pixel belongs to an object is less than the preset value, it is determined that the pixel does not belong to the object.
- This embodiment does not limit the specific value of the preset value.
- acquiring the two-dimensional frame of the candidate object according to the probability that the first pixel belongs to the object and the two-dimensional frame of the object corresponding to the first pixel may include:
- the first pixel to be processed is obtained from the first set composed of a plurality of first pixels, and the first pixel to be processed is deleted from the first set to obtain the updated first set.
- the first pixel to be processed is the first pixel with the highest probability of belonging to the object in the first set.
- the associated value between each first pixel and the first pixel to be processed is obtained.
- the associated value is used to indicate the degree of overlap between the two-dimensional frame of the object corresponding to each first pixel and the two-dimensional frame of the object corresponding to the first pixel to be processed.
- Pixels 1 to 4 form the initial first set.
- the probability P2 that pixel 2 belongs to the object in the first set is the largest.
- Pixel 2 can be called the first pixel to be processed, and update the first set to ⁇ pixel 1, pixel 3, pixel 4 ⁇ .
- suppose the two-dimensional box of the object corresponding to pixel 1 overlaps heavily with the two-dimensional box of the object corresponding to pixel 2, and the two-dimensional box of the object corresponding to pixel 4 also overlaps heavily with the box corresponding to pixel 2. Pixel 1 and pixel 4 can therefore be deleted from the first set, completing the de-duplication against the two-dimensional box of the object corresponding to pixel 2.
- the first set only includes pixel 3.
- the pixel 3 is acquired from the first set again, and the pixel 3 may be referred to as the first pixel to be processed.
- the first set does not include the first pixel.
- the two-dimensional boxes of the objects corresponding to pixel 2 and pixel 3 may then be determined as the two-dimensional boxes of the candidate objects, giving two candidate boxes in total (see the sketch below).
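- the de-duplication described above is essentially non-maximum suppression over the first pixels; a minimal sketch is given below, where the helper names and the overlap threshold are assumptions.

```python
import numpy as np

def iou(a, b):
    """a, b: boxes as (x1, y1, x2, y2); returns their intersection-over-union."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def select_candidate_boxes(probs, boxes, overlap_threshold=0.7):
    """probs: (N,) array, probability that each first pixel belongs to an object.
    boxes: (N, 4) two-dimensional boxes of the objects corresponding to those pixels."""
    order = list(np.argsort(probs)[::-1])         # highest probability first
    keep = []
    while order:
        best = order.pop(0)                        # the first pixel to be processed
        keep.append(best)
        # delete first pixels whose boxes overlap the processed box too much
        order = [i for i in order if iou(boxes[i], boxes[best]) <= overlap_threshold]
    return boxes[keep]                             # two-dimensional boxes of the candidates
```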
- FIG. 5 is another flowchart of the object detection method provided by an embodiment of the application. As shown in FIG. 5, in the foregoing S303, determining the objects included in the scene to be detected according to the two-dimensional frame of the candidate object may include:
- S501 Input the two-dimensional frame of the candidate object into the first three-dimensional detection network model, and obtain the probability that the candidate object belongs to each of the preset objects.
- the first three-dimensional detection network model may be pre-trained and used to output the probability that the candidate object belongs to each of the preset objects according to the two-dimensional frame of the candidate object. It should be noted that this embodiment does not limit the implementation of the first three-dimensional detection network model, and different neural network models may be used according to actual needs, for example, a convolutional neural network model. This embodiment does not limit the specific categories of the preset objects.
- the preset objects may include, but are not limited to, vehicles, bicycles, and pedestrians.
- the candidate objects can be labeled as candidate objects 1 to 3
- the two-dimensional boxes of candidate objects can be labeled as two-dimensional boxes 1 to 3.
- the preset objects include vehicles and pedestrians.
- the probability that candidate object 1 belongs to a vehicle is P11, and the probability that it belongs to a pedestrian is P12.
- the probability that candidate object 2 belongs to a vehicle is P21, and the probability that it belongs to a pedestrian is P22.
- the probability that candidate object 3 belongs to a vehicle is P31, and the probability that it belongs to a pedestrian is P32.
- if the probability that the candidate object belongs to a first object among the preset objects is greater than the preset threshold corresponding to that first object, it is determined that the candidate object is an object included in the scene to be detected.
- the example in S501 is also used for description.
- the preset threshold corresponding to the vehicle is Q1
- the preset threshold corresponding to the pedestrian is Q2.
- if P11 > Q1, P21 < Q1 and P31 > Q1, it can be determined that candidate object 1 and candidate object 3 are included in the scene to be detected, and both are vehicles.
- if P12 < Q2, P22 < Q2 and P32 < Q2, it means that no pedestrian is included in the scene to be detected.
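- the per-class thresholding of S501 and S502 can be illustrated with the example above; the probabilities and thresholds used in the sketch below are placeholder values, not values defined by this application.

```python
def filter_candidates(class_probs, thresholds):
    """class_probs: {candidate_id: {class_name: probability}}.
    thresholds: {class_name: preset threshold for that class}."""
    detected = []
    for cid, probs in class_probs.items():
        for cls, p in probs.items():
            if p > thresholds[cls]:               # probability exceeds the class's threshold
                detected.append((cid, cls, p))
    return detected

# Worked example matching the text: candidates 1 and 3 are detected as vehicles.
probs = {1: {"vehicle": 0.9, "pedestrian": 0.1},
         2: {"vehicle": 0.3, "pedestrian": 0.2},
         3: {"vehicle": 0.8, "pedestrian": 0.1}}
print(filter_candidates(probs, {"vehicle": 0.5, "pedestrian": 0.5}))
```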
- obtaining the compensation value of the object information may include:
- At least one of the following compensation values is also obtained: the compensation value of the orientation of the candidate object, the compensation value of the three-dimensional position information of the candidate object, the compensation value of the two-dimensional frame of the candidate object, and the compensation value of the three-dimensional size of the candidate object.
- the compensation value corresponding to the candidate object is determined as the compensation value of the object information.
- the first three-dimensional detection network model can not only output the probability that the candidate object belongs to each of the preset objects according to the two-dimensional frame of the candidate object, but also output the compensation value of the candidate object information at the same time.
- if the candidate object is determined to be an object included in the scene to be detected according to the probability that the candidate object belongs to each of the preset objects (see the description of S502 for details), the compensation value corresponding to the candidate object can be determined as the compensation value of the object information.
- the example in S501 is also used for description. Since candidate object 1 and candidate object 3 are vehicles included in the scene to be detected, the compensation values respectively corresponding to candidate object 1 and candidate object 3 may be determined as compensation values of the vehicle information included in the scene to be detected.
- [-180°, 180°] can be equally divided into multiple intervals, and the center of each interval is set as the preset orientation.
- the preset orientation may be -150°.
- the interval to which the orientation of the candidate object belongs and the compensation value of the orientation of the candidate object can be output through the first three-dimensional detection network model.
- the compensation value is the difference between the actual orientation of the candidate object and the center of the interval to which it belongs.
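- a minimal sketch of this orientation encoding is given below, assuming six equal intervals of 60° (so that the center of the first interval is -150°, matching the example above); the interval count is an assumption.

```python
NUM_BINS = 6                                      # assumed; gives 60° intervals
BIN_WIDTH = 360.0 / NUM_BINS

def bin_center(bin_index):
    """Preset orientation of an interval = its center, e.g. bin 0 -> -150°."""
    return -180.0 + (bin_index + 0.5) * BIN_WIDTH

def decode_orientation(bin_index, compensation):
    """orientation = preset orientation (interval center) + predicted compensation."""
    return bin_center(bin_index) + compensation
```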
- in this way, the two-dimensional box of the candidate object is input into the first three-dimensional detection network model, and the probability that the candidate object belongs to each of the preset objects and the compensation value of the candidate object's information are obtained; thus, when a candidate object is determined to be an object included in the scene to be detected, the compensation value of the object's information is obtained at the same time.
- FIG. 6 is another flowchart of the object detection method provided by the embodiment of the application. As shown in FIG. 6, in the foregoing S303, determining the objects included in the scene to be detected according to the two-dimensional frame of the candidate object may include:
- S602 Determine the objects included in the scene to be detected according to the probability that the candidate object belongs to each of the preset objects.
- the semantic prediction network model may be pre-trained and used to output the probability that the candidate object belongs to each of the preset objects according to the two-dimensional frame of the candidate object. It should be noted that this embodiment does not limit the implementation of the semantic prediction network model, and different neural network models can be used according to actual needs, for example, a convolutional neural network model. This embodiment does not limit the specific categories of the preset objects.
- the preset objects may include, but are not limited to, vehicles, bicycles, and pedestrians.
- obtaining the compensation value of the object information may include:
- the two-dimensional frame of the object included in the scene to be detected is input into the second three-dimensional detection network model, and the compensation value of the object information is obtained.
- the compensation value includes at least one of the following: the compensation value of the orientation of the object, the compensation value of the three-dimensional position information of the object, the compensation value of the two-dimensional frame of the object, and the compensation value of the three-dimensional size of the object.
- the second three-dimensional detection network model may be a pre-trained compensation value used to output object information according to the two-dimensional frame of the object. It should be noted that this embodiment does not limit the implementation of the second three-dimensional detection network model, and different neural network models may be used according to actual needs, for example, a convolutional neural network model.
- the difference between this embodiment and the embodiment shown in FIG. 5 lies in that: in this embodiment, two models of the semantic prediction network model and the second three-dimensional detection network model are involved.
- the output of the semantic prediction network model is the probability that the candidate object belongs to each of the preset objects.
- the compensation value of the object information is output through the second three-dimensional detection network model.
- the first three-dimensional detection network model is involved.
- the first three-dimensional detection network model can simultaneously output the probability that the candidate object belongs to each of the preset objects and the compensation value of the candidate object's information.
- the compensation value of the object information includes the compensation value of the orientation of the object.
- obtaining the information of the object according to the compensation value of the information of the object may include:
- the orientation information of the object can be obtained according to the compensation value of the orientation of the object.
- the compensation value of the object information includes the compensation value of the three-dimensional position information of the object.
- obtaining the information of the object according to the compensation value of the information of the object may include:
- the three-dimensional position information of the object is acquired according to the compensation value of the three-dimensional position information of the object and the three-dimensional position information of the reference point of the object.
- the three-dimensional position information of the object can be obtained according to the compensation value of the three-dimensional position information of the object.
- the compensation value of the object information includes the compensation value of the three-dimensional size of the object.
- obtaining the information of the object according to the compensation value of the information of the object may include:
- the three-dimensional size information of the object is obtained.
- the three-dimensional size information of the object can be obtained according to the compensation value of the three-dimensional size of the object.
- the compensation value of the object information includes the compensation value of the two-dimensional frame of the object.
- obtaining the information of the object according to the compensation value of the information of the object may include:
- the position information of the two-dimensional frame of the object is acquired.
- the depth value of the object can be obtained according to the compensation value of the two-dimensional frame of the object.
- depending on what the compensation value of the object information contains, the implementations described above can be combined with each other to obtain at least one of the following pieces of information about the object: the orientation information of the object, the three-dimensional position information of the object, the three-dimensional size information of the object, and the depth value of the object.
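- a hedged sketch of S304 follows: each piece of object information is recovered as a reference (preset) value plus the corresponding compensation value; the reference values (for example the preset three-dimensional size for a vehicle model or the chassis-center position) are assumed inputs, not defined by this sketch.

```python
import numpy as np

def recover_object_info(comp, presets):
    """comp / presets: dicts that may hold 'orientation' (degrees),
    'position' (x, y, z), 'size' (l, w, h) and 'box_2d' (x1, y1, x2, y2)."""
    info = {}
    if "orientation" in comp:
        info["orientation"] = presets["orientation"] + comp["orientation"]
    if "position" in comp:
        info["position"] = np.asarray(presets["position"]) + np.asarray(comp["position"])
    if "size" in comp:
        info["size"] = np.asarray(presets["size"]) + np.asarray(comp["size"])
    if "box_2d" in comp:
        info["box_2d"] = np.asarray(presets["box_2d"]) + np.asarray(comp["box_2d"])
    return info
```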
- obtaining the depth value of the object according to the position information of the two-dimensional frame of the object may include:
- the position information of the two-dimensional frame of the object is input into the first region segmentation network model to obtain sparse point cloud data on the surface of the object.
- the sparse point cloud data on the surface of the object is clustered and segmented to obtain the sparse point cloud data of the target point on the surface of the object.
- the first region segmentation network model may be pre-trained and used to output sparse point cloud data on the surface of the object according to the position information of the two-dimensional frame of the object. It should be noted that this embodiment does not limit the implementation of the first region segmentation network model, and different neural network models may be used according to actual needs, for example, a convolutional neural network model.
- the target point on the vehicle may be a raised point on the rear of the vehicle.
- the target point of the pedestrian can be a point on the pedestrian's head, and so on.
- obtaining the depth value of the object according to the position information of the two-dimensional frame of the object may include:
- the position information of the two-dimensional frame of the object is input into the second region segmentation network model to obtain sparse point cloud data of the target surface on the surface of the object.
- the second region segmentation network model may be pre-trained and used to output sparse point cloud data of the target surface on the surface of the object according to the position information of the two-dimensional frame of the object. It should be noted that this embodiment does not limit the implementation of the second region segmentation network model, and different neural network models can be used according to actual needs, for example, a convolutional neural network model.
- sparse point cloud data of the target surface on the surface of the object can be obtained, so that the depth value of the object can be determined.
- this embodiment does not limit the position of the target surface.
- if the vehicle travels in the same direction as the electronic device, the target surface on the vehicle may be the rear of the vehicle; if the direction of travel of the vehicle is opposite to the direction of movement of the electronic device, the target surface on the vehicle may be the front of the vehicle.
- the target surface of the pedestrian can be the head of the pedestrian, and so on.
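- once the sparse point cloud of the target point or target surface has been segmented, the depth value can be determined from those points; the sketch below uses the median depth, which is an assumption — this embodiment only requires that the depth value be determined from the segmented points.

```python
import numpy as np

def object_depth(surface_points):
    """surface_points: (N, 3) points (x, y, z) attributed to the object's target
    surface or target point, with z as depth in the target coordinate system."""
    if len(surface_points) == 0:
        return None                               # no points survived segmentation
    return float(np.median(surface_points[:, 2]))
```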
- FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
- the electronic device provided in this embodiment is used to implement the object detection method provided in any implementation manner of FIG. 2 to FIG. 6.
- the electronic device provided in this embodiment may include:
- the memory 12 is used to store computer programs
- the processor 11 is configured to execute the computer program, specifically:
- the processor 11 is specifically configured to:
- the information of the object is acquired according to the compensation value of the information of the object.
- the processor 11 is specifically configured to:
- the processor 11 is specifically configured to:
- the first pixel to be processed is the first pixel with the greatest probability of belonging to an object in the first set;
- for each first pixel in the updated first set, obtain the associated value between that first pixel and the first pixel to be processed; the associated value is used to indicate the degree of overlap between the two-dimensional frame of the object corresponding to that first pixel and the two-dimensional frame of the object corresponding to the first pixel to be processed;
- Delete the first pixel whose associated value is greater than the preset value from the updated first set and re-execute the steps of obtaining the first pixel to be processed and updating the first set until the first set does not include the first pixel.
- the two-dimensional frames of the objects corresponding to all the first pixels to be processed are determined as the two-dimensional frames of the candidate objects.
- the processor 11 is specifically configured to:
- the objects included in the scene to be detected are acquired.
- the processor 11 is specifically configured to:
- At least one of the following compensation values is also obtained: the compensation value of the orientation of the candidate object, the compensation value of the three-dimensional position information of the candidate object, the compensation value of the two-dimensional frame of the candidate object, and the compensation value of the three-dimensional size of the candidate object;
- the compensation value corresponding to the candidate object is determined as the compensation value of the information of the object.
- the processor 11 is specifically configured to:
- the objects included in the scene to be detected are determined.
- the processor 11 is specifically configured to:
- the two-dimensional frame of the object included in the scene to be detected is input into the second three-dimensional detection network model, and the compensation value of the information of the object is obtained.
- the compensation value includes at least one of the following: the compensation value of the orientation of the object, the compensation value of the three-dimensional position information of the object, the compensation value of the two-dimensional frame of the object, and the compensation value of the three-dimensional size of the object.
- the compensation value of the object information includes the compensation value of the orientation of the object
- the processor 11 is specifically configured to:
- the compensation value of the object information includes the compensation value of the three-dimensional position information of the object
- the processor 11 is specifically configured to:
- the compensation value of the object information includes the compensation value of the three-dimensional size of the object
- the processor 11 is specifically configured to:
- the three-dimensional size information of the object is acquired according to the compensation value of the three-dimensional size of the object and the reference value of the three-dimensional size of the object corresponding to the object.
- the compensation value of the information of the object includes the compensation value of the two-dimensional frame of the object
- the processor 11 is specifically configured to:
- the processor 11 is specifically configured to:
- the depth value of the object is determined according to the sparse point cloud data of the target point.
- the processor 11 is specifically configured to:
- the information of the object includes at least one of the following: three-dimensional position information, orientation information, three-dimensional size information, and a depth value of the object.
- the processor 11 is specifically configured to:
- the sparse point cloud data is acquired through at least one radar sensor, and the image is acquired through an image sensor.
- the number of the radar sensors is greater than one; the processor 11 is specifically configured to:
- the first sparse point cloud data corresponding to each radar sensor is projected into the target radar coordinate system to acquire the sparse point cloud data.
- the processor 11 is specifically configured to:
- the sparse point cloud data and the image are projected into the camera coordinate system to obtain the data to be processed.
- the data to be processed includes: the coordinate value and reflectivity of each point of the sparse point cloud data projected into the target coordinate system, and the coordinate values of the pixels of the image in the target coordinate system.
- the electronic device may further include a radar sensor 13 and an image sensor 14.
- This embodiment does not limit the number and installation positions of the radar sensor 13 and the image sensor 14.
- the electronic device provided in this embodiment is used to implement the object detection method provided in any implementation manner of FIG. 2 to FIG. 6.
- the technical solution and the technical effect are similar, and the details are not repeated here.
- the embodiment of the present application also provides a movable platform, which may include the electronic device provided in the embodiment shown in FIG. 7. It should be noted that this embodiment does not limit the type of the movable platform, and it can be any device that needs to perform object detection. For example, it can be a drone, a vehicle, or other means of transportation.
- the ranging device 200 includes a ranging module 210, which includes a transmitter 203 (for example, a transmitting circuit), a collimating element 204, a detector 205 (which may include, for example, a receiving circuit, a sampling circuit, and an arithmetic circuit), and an optical path changing element 206.
- the ranging module 210 is used to emit a light beam, receive the return light, and convert the return light into an electrical signal.
- the transmitter 203 can be used to emit a light pulse sequence.
- the transmitter 203 may emit a sequence of laser pulses.
- the laser beam emitted by the transmitter 203 is a narrow-bandwidth beam with a wavelength outside the visible light range.
- the collimating element 204 is arranged on the exit light path of the emitter 203 and is used to collimate the light beam emitted from the emitter 203 into parallel light that is output to the scanning module.
- the collimating element 204 is also used to condense at least a part of the return light reflected by the probe.
- the collimating element 204 may be a collimating lens or other elements capable of collimating a light beam.
- the transmitting light path and the receiving light path in the distance measuring device are combined before the collimating element 204 through the light path changing element 206, so that the transmitting light path and the receiving light path can share the same collimating element, making the light path more compact.
- the transmitter 203 and the detector 205 may respectively use their own collimating elements, and the optical path changing element 206 is arranged on the optical path behind the collimating element.
- the light path changing element can use a small-area mirror to combine the transmitting light path and the receiving light path.
- the light path changing element may also use a reflector with a through hole, where the through hole is used to transmit the emitted light of the emitter 203 and the reflector is used to reflect the return light to the detector 205. In this way, the blocking of the return light by the bracket of the small mirror, which occurs when a small mirror is used, can be reduced.
- the optical path changing element deviates from the optical axis of the collimating element 204.
- the optical path changing element may also be located on the optical axis of the collimating element 204.
- the distance measuring device 200 further includes a scanning module 202.
- the scanning module 202 is placed on the exit light path of the distance measuring module 210.
- the scanning module 202 is used to change the transmission direction of the collimated beam 219 emitted by the collimating element 204 and project it to the external environment, and project the return light to the collimating element 204 .
- the returned light is collected on the detector 205 via the collimating element 204.
- the scanning module 202 may include at least one optical element for changing the propagation path of the light beam, wherein the optical element may change the propagation path of the light beam by reflecting, refracting, or diffracting the light beam.
- the scanning module 202 includes a lens, a mirror, a prism, a galvanometer, a grating, a liquid crystal, an optical phased array (Optical Phased Array), or any combination of the foregoing optical elements.
- at least part of the optical elements are moving.
- a driving module is used to drive the at least part of the optical elements to move.
- the moving optical elements can reflect, refract, or diffract the light beam to different directions at different times.
- the multiple optical elements of the scanning module 202 may rotate or vibrate around a common axis 209, and each rotating or vibrating optical element is used to continuously change the propagation direction of the incident light beam.
- the multiple optical elements of the scanning module 202 may rotate at different speeds or vibrate at different speeds.
- at least part of the optical elements of the scanning module 202 may rotate at substantially the same rotation speed.
- the multiple optical elements of the scanning module may also rotate around different axes.
- the multiple optical elements of the scanning module may also rotate in the same direction or in different directions; or vibrate in the same direction, or vibrate in different directions, which is not limited herein.
- the scanning module 202 includes a first optical element 214 and a driver 216 connected to the first optical element 214.
- the driver 216 is used to drive the first optical element 214 to rotate around the rotation axis 209, so that the first optical element 214 changes the direction of the collimated beam 219.
- the first optical element 214 projects the collimated light beam 219 to different directions.
- the angle between the direction of the collimated beam 219 changed by the first optical element and the rotation axis 209 changes as the first optical element 214 rotates.
- the first optical element 214 includes a pair of opposed non-parallel surfaces through which the collimated light beam 219 passes.
- the first optical element 214 includes a prism whose thickness varies in at least one radial direction.
- the first optical element 214 includes a wedge prism, which refracts the collimated beam 219.
- the scanning module 202 further includes a second optical element 215, the second optical element 215 rotates around the rotation axis 209, and the rotation speed of the second optical element 215 is different from the rotation speed of the first optical element 214.
- the second optical element 215 is used to change the direction of the light beam projected by the first optical element 214.
- the second optical element 215 is connected to another driver 217, and the driver 217 drives the second optical element 215 to rotate.
- the first optical element 214 and the second optical element 215 can be driven by the same or different drivers, so that their rotation speeds and/or rotation directions differ, thereby projecting the collimated light beam 219 into the outside space.
- the controller 218 controls the drivers 216 and 217 to drive the first optical element 214 and the second optical element 215, respectively.
- the rotational speeds of the first optical element 214 and the second optical element 215 may be determined according to the area and pattern expected to be scanned in actual applications.
- the drivers 216 and 217 may include motors or other drivers.
- the second optical element 215 includes a pair of opposite non-parallel surfaces through which the light beam passes. In one embodiment, the second optical element 215 includes a prism whose thickness varies in at least one radial direction. In one embodiment, the second optical element 215 includes a wedge prism.
- the scanning module 202 further includes a third optical element (not shown) and a driver for driving the third optical element to move.
- the third optical element includes a pair of opposite non-parallel surfaces, and the light beam passes through the pair of surfaces.
- the third optical element includes a prism whose thickness varies in at least one radial direction.
- the third optical element includes a wedge prism. At least two of the first, second, and third optical elements rotate at different rotation speeds and/or rotation directions.
- each optical element in the scanning module 202 can project light to different directions, such as directions 211 and 213, so that the space around the distance measuring device 200 is scanned.
- the return light 212 reflected by the probe 201 is incident on the collimating element 204 after passing through the scanning module 202.
- the detector 205 and the transmitter 203 are placed on the same side of the collimating element 204, and the detector 205 is used to convert at least part of the return light passing through the collimating element 204 into an electrical signal.
- an anti-reflection film is plated on each optical element.
- the thickness of the antireflection coating is equal to or close to the wavelength of the light beam emitted by the emitter 203, which can increase the intensity of the transmitted light beam.
- a filter layer is plated on the surface of an element located on the beam propagation path in the distance measuring device, or a filter is provided on the beam propagation path, for transmitting at least the wavelength band of the beam emitted by the transmitter 203 and reflecting other bands, so as to reduce the noise caused by ambient light at the receiver.
- the transmitter 203 may include a laser diode through which nanosecond laser pulses are emitted.
- the laser pulse receiving time can be determined, for example, the laser pulse receiving time can be determined by detecting the rising edge time and/or the falling edge time of the electrical signal pulse. In this way, the distance measuring device 200 can calculate the TOF using the pulse receiving time information and the pulse sending time information, so as to determine the distance between the probe 201 and the distance measuring device 200.
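- the distance computation implied here is the standard time-of-flight relation, sketched below; the factor of two accounts for the round trip of the pulse, and the timestamps are assumed to be in seconds.

```python
C = 299_792_458.0                                  # speed of light, m/s

def tof_distance(t_emit, t_receive):
    """Distance = c * TOF / 2 (the pulse travels to the target and back)."""
    return C * (t_receive - t_emit) / 2.0
```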
- the embodiments of the present application also provide a computer storage medium.
- the computer storage medium is used to store computer software instructions for the above object detection; when run on a computer, these instructions enable the computer to perform the various possible object detection methods in the above method embodiments. When the computer-executable instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part.
- the computer instructions can be stored in a computer storage medium, or transmitted from one computer storage medium to another computer storage medium; for example, they can be transmitted wirelessly (such as by cellular communication, infrared, short-range wireless, or microwave) to another website, computer, server, or data center.
- the computer storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
- the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, an SSD).
- a person of ordinary skill in the art can understand that all or part of the steps in the above method embodiments can be implemented by a program instructing relevant hardware.
- the foregoing program can be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed. The foregoing storage medium includes any medium that can store program code, such as read-only memory (ROM), random access memory (RAM), a magnetic disk, or an optical disk.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
An object detection method, an electronic device and a movable platform are provided. The object detection method comprises: acquiring sparse point cloud data and an image of a scene to be detected (S201); projecting the sparse point cloud data and the image into a target coordinate system to acquire data to be processed (S202); and performing three-dimensional detection on the data to be processed to acquire a detection result of an object included in the scene to be detected (S203). Object detection is performed by acquiring sparse point cloud data and an image, which reduces the density of the point cloud data and thereby reduces the cost of object detection.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980012209.0A CN111712828A (zh) | 2019-06-06 | 2019-06-06 | 物体检测方法、电子设备和可移动平台 |
PCT/CN2019/090393 WO2020243962A1 (fr) | 2019-06-06 | 2019-06-06 | Procédé de détection d'objet, dispositif électronique et plateforme mobile |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/090393 WO2020243962A1 (fr) | 2019-06-06 | 2019-06-06 | Procédé de détection d'objet, dispositif électronique et plateforme mobile |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020243962A1 true WO2020243962A1 (fr) | 2020-12-10 |
Family
ID=72536815
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/090393 WO2020243962A1 (fr) | 2019-06-06 | 2019-06-06 | Procédé de détection d'objet, dispositif électronique et plateforme mobile |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111712828A (fr) |
WO (1) | WO2020243962A1 (fr) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116030423B (zh) * | 2023-03-29 | 2023-06-16 | 浪潮通用软件有限公司 | 一种区域边界侵入检测方法、设备及介质 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105719284B (zh) * | 2016-01-18 | 2018-11-06 | 腾讯科技(深圳)有限公司 | 一种数据处理方法、装置及终端 |
CN108509918B (zh) * | 2018-04-03 | 2021-01-08 | 中国人民解放军国防科技大学 | 融合激光点云与图像的目标检测与跟踪方法 |
- 2019
- 2019-06-06: CN national application CN201980012209.0A, published as CN111712828A (zh) — status: active, Pending
- 2019-06-06: PCT application PCT/CN2019/090393, published as WO2020243962A1 (fr) — status: active, Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103093191A (zh) * | 2012-12-28 | 2013-05-08 | 中电科信息产业有限公司 | 一种三维点云数据结合数字影像数据的物体识别方法 |
CN105783878A (zh) * | 2016-03-11 | 2016-07-20 | 三峡大学 | 一种基于小型无人机遥感的边坡变形检测及量算方法 |
CN106504328A (zh) * | 2016-10-27 | 2017-03-15 | 电子科技大学 | 一种基于稀疏点云曲面重构的复杂地质构造建模方法 |
CN108734728A (zh) * | 2018-04-25 | 2018-11-02 | 西北工业大学 | 一种基于高分辨序列图像的空间目标三维重构方法 |
CN109191509A (zh) * | 2018-07-25 | 2019-01-11 | 广东工业大学 | 一种基于结构光的虚拟双目三维重建方法 |
CN109685886A (zh) * | 2018-11-19 | 2019-04-26 | 国网浙江杭州市富阳区供电有限公司 | 一种基于混合现实技术的配网三维场景建模方法 |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112634439B (zh) * | 2020-12-25 | 2023-10-31 | 北京奇艺世纪科技有限公司 | 一种3d信息展示方法及装置 |
CN112634439A (zh) * | 2020-12-25 | 2021-04-09 | 北京奇艺世纪科技有限公司 | 一种3d信息展示方法及装置 |
CN112799067A (zh) * | 2020-12-30 | 2021-05-14 | 神华黄骅港务有限责任公司 | 装船机溜筒防撞预警方法、装置、系统和预警设备 |
CN112734855A (zh) * | 2020-12-31 | 2021-04-30 | 网络通信与安全紫金山实验室 | 一种空间光束指向方法、系统及存储介质 |
CN112734855B (zh) * | 2020-12-31 | 2024-04-16 | 网络通信与安全紫金山实验室 | 一种空间光束指向方法、系统及存储介质 |
CN112926461A (zh) * | 2021-02-26 | 2021-06-08 | 商汤集团有限公司 | 神经网络训练、行驶控制方法及装置 |
CN112926461B (zh) * | 2021-02-26 | 2024-04-19 | 商汤集团有限公司 | 神经网络训练、行驶控制方法及装置 |
CN113625288A (zh) * | 2021-06-15 | 2021-11-09 | 中国科学院自动化研究所 | 基于点云配准的相机与激光雷达位姿标定方法和装置 |
CN113808096A (zh) * | 2021-09-14 | 2021-12-17 | 成都主导软件技术有限公司 | 一种非接触式的螺栓松动检测方法及其系统 |
CN113808096B (zh) * | 2021-09-14 | 2024-01-30 | 成都主导软件技术有限公司 | 一种非接触式的螺栓松动检测方法及其系统 |
CN114723715B (zh) * | 2022-04-12 | 2023-09-19 | 小米汽车科技有限公司 | 车辆目标检测方法、装置、设备、车辆及介质 |
CN114723715A (zh) * | 2022-04-12 | 2022-07-08 | 小米汽车科技有限公司 | 车辆目标检测方法、装置、设备、车辆及介质 |
WO2024051025A1 (fr) * | 2022-09-07 | 2024-03-14 | 劢微机器人科技(深圳)有限公司 | Procédé, dispositif et équipement de positionnement de palette, et support d'enregistrement lisible |
CN116755441B (zh) * | 2023-06-19 | 2024-03-12 | 国广顺能(上海)能源科技有限公司 | 移动机器人的避障方法、装置、设备及介质 |
CN116755441A (zh) * | 2023-06-19 | 2023-09-15 | 国广顺能(上海)能源科技有限公司 | 移动机器人的避障方法、装置、设备及介质 |
CN116973939A (zh) * | 2023-09-25 | 2023-10-31 | 中科视语(北京)科技有限公司 | 安全监测方法及装置 |
CN116973939B (zh) * | 2023-09-25 | 2024-02-06 | 中科视语(北京)科技有限公司 | 安全监测方法及装置 |
CN117611592A (zh) * | 2024-01-24 | 2024-02-27 | 长沙隼眼软件科技有限公司 | 一种异物检测方法、装置、电子设备以及存储介质 |
CN117611592B (zh) * | 2024-01-24 | 2024-04-05 | 长沙隼眼软件科技有限公司 | 一种异物检测方法、装置、电子设备以及存储介质 |
CN118397616A (zh) * | 2024-06-24 | 2024-07-26 | 安徽大学 | 一种基于密度感知的补全和稀疏融合的3d目标检测方法 |
Also Published As
Publication number | Publication date |
---|---|
CN111712828A (zh) | 2020-09-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020243962A1 (fr) | Procédé de détection d'objet, dispositif électronique et plateforme mobile | |
Liu et al. | TOF lidar development in autonomous vehicle | |
WO2021253430A1 (fr) | Procédé de détermination de pose absolue, dispositif électronique et plateforme mobile | |
WO2021072710A1 (fr) | Procédé et système de fusion de nuage de points pour un objet mobile, et support de stockage informatique | |
WO2022126427A1 (fr) | Procédé de traitement de nuage de points, appareil de traitement de nuage de points, plateforme mobile, et support de stockage informatique | |
US10860034B1 (en) | Barrier detection | |
EP4130798A1 (fr) | Procédé et dispositif d'identification de cible | |
US20210004566A1 (en) | Method and apparatus for 3d object bounding for 2d image data | |
RU2764708C1 (ru) | Способы и системы для обработки данных лидарных датчиков | |
US20210117696A1 (en) | Method and device for generating training data for a recognition model for recognizing objects in sensor data of a sensor, in particular, of a vehicle, method for training and method for activating | |
GB2573635A (en) | Object detection system and method | |
WO2022179207A1 (fr) | Procédé et appareil de détection d'occlusion de fenêtre | |
WO2021062581A1 (fr) | Procédé et appareil de reconnaissance de marquage routier | |
US11592820B2 (en) | Obstacle detection and vehicle navigation using resolution-adaptive fusion of point clouds | |
WO2022198637A1 (fr) | Procédé et système de filtrage de bruit en nuage de points et plate-forme mobile | |
US20190187253A1 (en) | Systems and methods for improving lidar output | |
CN111999744A (zh) | 一种无人机多方位探测、多角度智能避障方法 | |
CN111819602A (zh) | 增加点云采样密度的方法、点云扫描系统、可读存储介质 | |
Steinbaeck et al. | Occupancy grid fusion of low-level radar and time-of-flight sensor data | |
WO2020215252A1 (fr) | Procédé de débruitage de nuage de points de dispositif de mesure de distance, dispositif de mesure de distance et plateforme mobile | |
US20210255289A1 (en) | Light detection method, light detection device, and mobile platform | |
US20230341558A1 (en) | Distance measurement system | |
WO2021232227A1 (fr) | Procédé de construction de trame de nuage de points, procédé de détection de cible, appareil de télémétrie, plateforme mobile et support de stockage | |
WO2020155142A1 (fr) | Procédé, dispositif et système de rééchantillonnage de nuage de points | |
WO2024060209A1 (fr) | Procédé de traitement de nuage de points et radar |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19931933; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19931933; Country of ref document: EP; Kind code of ref document: A1 |