WO2023178931A1 - Motion planning method and device, and robot - Google Patents
Motion planning method and device, and robot
- Publication number
- WO2023178931A1 (PCT/CN2022/117222)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- motion
- motion planning
- target
- models
- trajectory
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Definitions
- This application relates to the field of computer technology, and in particular to motion planning methods, devices and robots.
- Computer vision technology processes point clouds, pictures, videos and other data collected by devices to realize functions such as target recognition, scene analysis and image understanding, and is widely used in fields such as robot motion control.
- In the related art, the object recognition ability of a network model is trained on commonly used standard image data sets or collected scene image data sets, and motion planning is performed based on the objects recognized by the trained network model to realize intelligent control of the robot.
- However, objects in the actual scene differ from objects in the training data set in texture, structure, open/closed state and other attributes.
- This application aims to solve at least one of the technical problems existing in the prior art. To this end, this application proposes a motion planning method to improve the accuracy of motion planning.
- A motion planning method includes: obtaining image information of the current scene; obtaining target object information based on the image information; inputting the target object information into N motion planning models respectively, and obtaining N first motion trajectories output by the N motion planning models, where the N motion planning models have the same structure and different parameters, and N is a positive integer greater than 1; and determining the target motion trajectory based on the N first motion trajectories.
- According to the motion planning method of the embodiments of this application, by setting up N motion planning models with the same structure and different parameters to process the target object information, the N motion planning models output N accurate but differentiated first motion trajectories, which improves the accuracy of the target motion trajectory. The whole process does not require redesigning the model structure; it is plug-and-play and has wide applicability.
- Inputting the target object information into the N motion planning models respectively and obtaining the N first motion trajectories includes: inputting the target object information into the feature extraction structure of the motion planning model to obtain a first feature vector; mapping the first feature vector based on a mapping relationship to obtain a second feature vector; and inputting the second feature vector into the trajectory planning structure of the motion planning model to obtain the first motion trajectory output by the trajectory planning structure.
- Mapping the first feature vector based on the mapping relationship to obtain the second feature vector includes: projecting the first feature vector onto a target orthogonal matrix to obtain the second feature vector, where the mapping relationship includes the target orthogonal matrix.
- The target orthogonal matrix is determined through the following steps: obtaining a target symmetric matrix; obtaining orthogonal eigenvectors based on the target symmetric matrix; and determining the target orthogonal matrix based on the orthogonal eigenvectors.
- The N motion planning models are trained through the following steps: inputting sample object information into the feature extraction structures of the N motion planning models to be trained to obtain N first sample feature vectors; mapping the N first sample feature vectors respectively based on mapping relationships to obtain N second sample feature vectors, where the mapping relationships of at least two of the N motion planning models are different; and inputting the N second sample feature vectors into the trajectory planning structures of the N motion planning models in one-to-one correspondence, and updating the parameters of the N motion planning models based on the motion trajectories output by the N motion planning models and the sample motion trajectories corresponding to the sample object information.
- Determining the target motion trajectory based on the N first motion trajectories includes: summing and averaging the N first motion trajectories to obtain the target motion trajectory.
- Obtaining the image information of the current scene includes: obtaining an RGB image and a depth image of the current scene. Obtaining the target object information based on the image information includes: obtaining a target segmentation mask of the target object based on at least one of the RGB image and the depth image; and obtaining the target object information based on the RGB image, the depth image and the target segmentation mask.
- A motion planning device includes: an acquisition module for obtaining image information of the current scene; a first processing module for obtaining target object information based on the image information; a second processing module for inputting the target object information into N motion planning models respectively and obtaining N first motion trajectories output by the N motion planning models, where the N motion planning models have the same structure and different parameters, and N is a positive integer greater than 1; and a third processing module for determining the target motion trajectory based on the N first motion trajectories.
- A robot includes: a robot body equipped with an image acquisition device for collecting image information of the current scene; and a controller electrically connected to the image acquisition device, the controller being configured to control the robot to move along the target motion trajectory based on the above motion planning method.
- An electronic device includes a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, any one of the above motion planning methods is implemented.
- A non-transitory computer-readable storage medium stores a computer program that, when executed by a processor, implements any one of the above motion planning methods.
- A computer program product includes a computer program that, when executed by a processor, implements any one of the above motion planning methods.
- By setting up N motion planning models with the same structure and different parameters to process the target object information, the N motion planning models output N differentiated first motion trajectories, which improves the accuracy of the target motion trajectory without redesigning the model structure, giving the method wide applicability.
- Further, the motion planning method can be expanded using existing models: the structures of the N motion planning models are the same and only the parameters differ, so the corresponding training processes are the same. This greatly shortens the development cycle, does not limit the type of model, and flexibly adapts to existing network models.
- Furthermore, the N motion planning models can be trained using the sample object information and corresponding sample motion trajectories of the same sample training set. Since the structures of the N motion planning models are the same, training is a process of configuring the internal parameters of the N motion planning models, which eliminates the need to re-collect image data sets and effectively reduces the workload of training the models.
- Figure 1 is a schematic flowchart of the motion planning method provided by an embodiment of the present application.
- Figure 2 is a schematic algorithm flow diagram of the motion planning model provided by the embodiment of the present application.
- Figure 3 is a schematic structural diagram of a motion planning device provided by an embodiment of the present application.
- Figure 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- The terms "connected" and "connection" should be understood in a broad sense: a connection can be fixed, detachable or integrated; mechanical or electrical; direct or indirect through an intermediate medium. The specific meanings of these terms in the embodiments of this application can be understood according to the specific situation.
- Reference to the terms "one embodiment," "some embodiments," "an example," "specific examples," or "some examples" means that specific features, structures, materials or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of this application. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Furthermore, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and, provided they are not inconsistent with each other, those skilled in the art may combine different embodiments or examples described in this specification and the features thereof.
- Computer vision technology processes point clouds, pictures, videos and other data collected by devices to realize functions such as target recognition, scene analysis and image understanding, and is widely used in the motion control of intelligent service robots.
- The object recognition ability of a network model is trained on commonly used standard image data sets or collected scene image data sets, and motion planning is performed based on the objects recognized by the trained network model to realize intelligent control of an intelligent service robot. It can be understood that the intelligent service robot may be a home-service-oriented robot.
- However, objects in the actual scene differ from objects in the training data set in texture, structure, open/closed state and other attributes. For example, the cabinet in the training image data set has a white mirrored door that opens to the left, while the cabinet in the actual scene has a brown wood-textured door that opens to the right.
- Re-collecting image data sets not only increases the workload of network model training but also cannot cope with the endless variation of target objects in actual scenes. In the related art, recognition and planning performance is improved by accumulating the number of parameters of a network model, redesigning the structure of a network model, or collecting and aggregating various network models.
- The motion planning method of the embodiments of this application is described below in conjunction with Figures 1 and 2. By expanding the network model and using the motion planning results predicted by the expanded models to guide the movement of the robot, there is no need to redesign the network model, and the recognition and planning performance of the model can be improved rapidly.
- As shown in Figure 1, the motion planning method of the embodiments of this application includes steps 110 to 140. The method is applied to the motion trajectory planning of a robot, and the execution subject of the method can be the controller of the robot or other equipment, a cloud, or an edge server.
- Step 110: Obtain the image information of the current scene.
- The current scene is a scene that needs visual recognition; it includes a target object, i.e., the object to be recognized.
- The image information of the current scene is the image data or point cloud data of the current scene collected by devices such as cameras or radars.
- For example, a service robot is equipped with a camera that collects image data in front of the robot to obtain image information of the current scene where the robot is located.
- Step 120: Obtain target object information based on the image information.
- The target object information refers to the image information related to the target object in the current scene.
- In this step, obtaining the target object information corresponding to the target object removes the influence of irrelevant information in the current scene, so that the subsequent trajectory planning process focuses more on the target object.
- In actual execution, the target object information can be obtained from the image information of the current scene through processing methods such as semantic segmentation, target detection and instance segmentation.
- For example, target detection is performed on the image information of the current scene to locate the category information and location information of the target object and obtain the corresponding target object information.
- As another example, instance segmentation is performed on the image information of the current scene: all pixels are classified and different individuals of the same category are distinguished, to obtain the corresponding target object information.
- Step 130: Input the target object information into N motion planning models respectively, and obtain N first motion trajectories output by the N motion planning models, where N is a positive integer greater than 1.
- The target object can be an obstacle in the current scene. In that case, the motion trajectory output by a motion planning model based on the target object information can be an avoidance trajectory; moving along this trajectory avoids colliding with the target object.
- The target object can also be an item that needs to be operated in the current scene, in which case the output motion trajectory moves towards the target object. For example, if the target object the robot needs to operate is a refrigerator, the robot moves along the motion trajectory to the refrigerator's location and operates it.
- The motion planning implemented by a motion planning model includes two processes: path planning and trajectory optimization. Path planning plans a path from the initial position to the target position based on the target object information, considering only the geometric constraints of the current scene. Trajectory optimization constrains the path computed by path planning together with the motion state of the robot, and outputs the corresponding motion parameters.
- The first motion trajectory output by each motion planning model for the target object information therefore includes motion parameters and a path trajectory.
- The target object information is input into the N motion planning models respectively. The structures of the N motion planning models are the same, so the trajectory planning steps they apply to the target object information are the same; their parameters differ, so the N first motion trajectories they output for the same target object information differ.
- Step 140: Determine the target motion trajectory based on the N first motion trajectories.
- In this embodiment, the target object information is input into N motion planning models with different parameters for trajectory prediction, and the N motion planning models correspondingly output N first motion trajectories. From these N different first motion trajectories, the target motion trajectory used to guide the robot's movement is determined.
- By expanding the number of motion planning models so that they share the same structure but have different parameters, each motion planning model outputs an accurate but differentiated prediction, and aggregating the planning results of all N expanded motion planning models effectively improves the accuracy of the target motion trajectory.
- It should be noted that the number of motion planning models can be set as needed. Because the N motion planning models have the same structure and correspondingly the same training process, the development cycle is greatly shortened compared with redesigning a network model or aggregating different network models, and the target recognition and trajectory planning performance of the model can be improved quickly.
- It can be understood that the motion planning method provided by the embodiments of this application can be expanded from existing models: the structures of the N motion planning models are the same and only their parameters differ, the type of model is not limited, and the method flexibly adapts to existing network models.
- According to the motion planning method, by setting up N motion planning models with the same structure and different parameters to process the target object information, the N motion planning models output N accurate but differentiated first motion trajectories, which improves the accuracy of the target motion trajectory. The entire process does not require redesigning the model structure; it is plug-and-play and has wide applicability.
- In some embodiments, step 130 includes the following processing, described here for a single motion planning model.
- The motion planning model includes a feature extraction structure and a trajectory planning structure: the feature extraction structure performs feature extraction on the target object information, and the trajectory planning structure performs trajectory planning on the feature vector output by the feature extraction structure, then outputs the first motion trajectory corresponding to the target object information.
- Specifically, the feature extraction structure extracts the first feature vector corresponding to the target object information; the first feature vector is mapped based on a preset mapping relationship to obtain a new second feature vector; and the mapped second feature vector is input into the subsequent trajectory planning structure for trajectory planning.
- Mapping refers to the correspondence between the elements of two sets: mapping the first feature vector converts its values according to the preset mapping relationship to obtain the corresponding second feature vector.
- The mapping relationships of at least two of the N motion planning models are different. Different mapping relationships mean that the second feature vectors input into the trajectory planning structures differ, and consequently the first motion trajectories output by the trajectory planning structures also differ.
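- For illustration, a minimal Python sketch of the per-model data flow described above (feature extraction, projection mapping, trajectory planning); the class name and the `G`/`F` callables are illustrative stand-ins for learned networks, not structures defined by this application:

```python
class MotionPlanningModel:
    """One motion planning model: feature extraction structure G, a fixed
    target orthogonal matrix B (the mapping relationship), and trajectory
    planning structure F. G and F stand in for trained networks."""

    def __init__(self, G, B, F):
        self.G, self.B, self.F = G, B, F

    def plan(self, target_object_info):
        g = self.G(target_object_info)  # first feature vector
        f = g @ self.B                  # second feature vector (projection mapping)
        return self.F(f)                # first motion trajectory
```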
- In some embodiments, the first feature vector is projected onto a target orthogonal matrix to obtain the second feature vector. The target orthogonal matrix is an orthogonal matrix, i.e., a square matrix whose row vectors and column vectors are orthonormal unit vectors: the dot product of any two distinct rows is 0, and the dot product of any row with itself is 1.
- The target orthogonal matrix is used as the projection matrix for the first feature vector, that is, as the mapping relationship of the mapping process: the first feature vector is projected onto the space represented by the target orthogonal matrix to obtain the second feature vector.
- At least two of the N motion planning models have different target orthogonal matrices; the target orthogonal matrix embodies the mapping relationship between the first feature vector and the second feature vector. In actual execution, the N motion planning models can correspond to N different target orthogonal matrices, i.e., the mapping relationships of the N motion planning models can all be different.
- In some embodiments, the target orthogonal matrix is determined through the following steps: obtaining a target symmetric matrix; obtaining orthogonal eigenvectors based on the target symmetric matrix; and determining the target orthogonal matrix based on the orthogonal eigenvectors.
- The target symmetric matrix is a symmetric matrix, i.e., a matrix whose elements are equal when mirrored across the main diagonal. The pairwise-orthogonal eigenvectors of the target symmetric matrix are computed, and the target orthogonal matrix corresponding to the target symmetric matrix is determined from them.
- It should be noted that the target orthogonal matrices of the N motion planning models can all differ from one another, i.e., the N motion planning models perform the mapping with different mapping relationships: different target symmetric matrices are generated randomly, their orthogonal eigenvectors are computed, and different target orthogonal matrices are thereby determined.
- A specific example follows. The target symmetric matrix A_0 is a real symmetric matrix whose values are sampled from the uniform distribution U(0,1), and v_l denotes the l-th eigenvector generated from the A_0 matrix. Using the resulting target orthogonal matrix B_0 as a projection matrix, the first feature vector g_0 extracted by the feature extraction structure G_0 can be projected into a new feature representation space: f_0 = g_0 B_0, where f_0 is the second feature vector. The second feature vector f_0 obtained by the B_0 mapping is input into the subsequent trajectory planning structure F_0 to perform the corresponding trajectory planning task.
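- A minimal NumPy sketch of this construction, assuming the symmetric matrix is obtained by symmetrizing a matrix of U(0,1) samples (the application states the entries are drawn from U(0,1) but does not spell out how symmetry is enforced, so the symmetrization step is an assumption):

```python
import numpy as np

def make_target_orthogonal_matrix(dim, rng):
    """Build a target orthogonal matrix from a random real symmetric matrix:
    sample entries from U(0,1), symmetrize, and take the orthonormal
    eigenvectors returned by the symmetric eigendecomposition."""
    A = rng.uniform(0.0, 1.0, size=(dim, dim))
    A = (A + A.T) / 2.0                 # real symmetric target matrix A_0
    _, eigvecs = np.linalg.eigh(A)      # columns are pairwise-orthogonal unit vectors
    return eigvecs                      # target orthogonal matrix B_0

rng = np.random.default_rng(0)
B0 = make_target_orthogonal_matrix(128, rng)
assert np.allclose(B0 @ B0.T, np.eye(128), atol=1e-8)  # rows/columns are orthonormal

g0 = rng.normal(size=128)  # stand-in for a first feature vector from G_0
f0 = g0 @ B0               # second feature vector: f_0 = g_0 B_0
```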
- In some embodiments, the N motion planning models are trained through the following steps: inputting sample object information into the feature extraction structures of the N motion planning models to be trained to obtain N first sample feature vectors; mapping the N first sample feature vectors respectively according to preset mapping relationships to obtain N second sample feature vectors; inputting the N second sample feature vectors into the trajectory planning structures of the N motion planning models in one-to-one correspondence; and training the N motion planning models individually, updating their parameters.
- The N motion planning models can be trained using the sample object information and corresponding sample motion trajectories of the same sample training set. Since the structures of the N motion planning models are the same, training is a process of configuring the internal parameters of the N motion planning models.
- The mapping process is introduced between the feature extraction structure and the trajectory planning structure of each motion planning model to be trained: the N first sample feature vectors output by the feature extraction structures of the N models to be trained are mapped respectively, and at least two of the N motion planning models use different mapping relationships.
- In actual execution, a model with a feature extraction structure and a trajectory planning structure can be selected in advance as the original model; the feature-vector mapping process is introduced at the connection between the feature extraction structure and the trajectory planning structure of the original model, which is then expanded in number.
- As shown in Figure 2, the original model 2_0 includes a feature extraction structure G_0 and a trajectory planning structure F_0: G_0 extracts a feature vector from the input object information, which is then sent to F_0 to predict the corresponding motion trajectory.
- The mapping process is introduced between the feature extraction structure G_0 and the trajectory planning structure F_0 to expand the original model 2_0, obtaining new models 2_0, 2_1, 2_2, ..., 2_N.
- Based on the sample object information and sample motion trajectories of the same sample training set, the new models 2_0, 2_1, 2_2, ..., 2_N are trained separately, updating the internal parameters of their feature extraction structures and trajectory planning structures. Among these N+1 motion planning models, the feature extraction structures {G_0, G_1, G_2, ..., G_N} have the same structure, and the trajectory planning structures {F_0, F_1, F_2, ..., F_N} also have the same structure.
- N+1 target symmetric matrices {A_0, A_1, A_2, ..., A_N} are constructed, yielding N+1 target orthogonal matrices. Because of the mapping process based on the target orthogonal matrices, the parameters of the feature extraction structures {G_0, G_1, G_2, ..., G_N} end up different, and correspondingly the parameters of the trajectory planning structures {F_0, F_1, F_2, ..., F_N} also end up different.
- The target object information is input into the N+1 feature extraction structures {G_0, G_1, G_2, ..., G_N} of the trained new models 2_0, 2_1, 2_2, ..., 2_N, which correspondingly output N+1 extracted first feature vectors {g_0, g_1, g_2, ..., g_N}. According to the N+1 target orthogonal matrices, the first feature vectors {g_0, g_1, g_2, ..., g_N} are mapped to output N+1 new second feature vectors, which are finally input into the N+1 trajectory planning structures {F_0, F_1, F_2, ..., F_N} to predict the corresponding N+1 first motion trajectories {t_0, t_1, t_2, ..., t_N}.
- Note that the expanded new models 2_0, 2_1, 2_2, ..., 2_N are all trained separately using the training method of the original model 2_0; the training process continuously adjusts the internal parameters of each model's feature extraction structure and trajectory planning structure.
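- A hedged PyTorch-style sketch of this expansion-and-training loop, assuming simple network factories for G and F and a mean-squared trajectory loss (the application does not specify the loss function); all names here are illustrative:

```python
import torch
from torch import nn

def expand_and_train(make_G, make_F, B_list, dataset, epochs=10, lr=1e-3):
    """Train one expanded model per fixed target orthogonal matrix B_i.
    All models share the same architecture (same make_G / make_F factories)
    and the same training set; only the fixed matrix B_i and the learned
    parameters of G_i and F_i end up different."""
    models = []
    for B in B_list:                       # B: fixed torch.Tensor, never trained
        G, F = make_G(), make_F()
        opt = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=lr)
        for _ in range(epochs):
            for obj_info, sample_traj in dataset:
                pred = F(G(obj_info) @ B)  # map the first feature vector with B_i
                loss = nn.functional.mse_loss(pred, sample_traj)
                opt.zero_grad()
                loss.backward()
                opt.step()
        models.append((G, B, F))
    return models
```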
- In some embodiments, step 140 includes: summing and averaging the N first motion trajectories to obtain the target motion trajectory.
- The N motion planning models output accurate but differentiated first motion trajectories; the first motion trajectories predicted by the individual models are fused by sum-and-average, and the mean of the N first motion trajectories is taken as the final predicted target motion trajectory.
- For example, the first motion trajectories output by new models 2_0, 2_1, 2_2, ..., 2_N are {t_0, t_1, t_2, ..., t_N}, and the target motion trajectory is calculated as t* = (1/(N+1)) · Σ_{i=0}^{N} t_i, where t* is the target motion trajectory and t_i is the i-th first motion trajectory.
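- The fusion itself is a one-liner; a minimal sketch, assuming all first motion trajectories share the same shape and parameterization (e.g., the same number of waypoints):

```python
import numpy as np

def fuse_trajectories(first_trajectories):
    """Sum-and-average fusion: t* = (1/(N+1)) * sum_i t_i, element-wise."""
    return np.mean(np.stack(first_trajectories, axis=0), axis=0)

# e.g. each trajectory of shape (num_waypoints, dof) from one expanded model:
# t_star = fuse_trajectories([t0, t1, t2])
```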
- In some embodiments, step 110 includes: obtaining a depth image and an RGB image of the current scene; and step 120 includes: performing instance segmentation based on at least one of the depth image and the RGB image to obtain a target segmentation mask of the target object, and obtaining the target object information based on the depth image, the RGB image and the target segmentation mask.
- The RGB image carries the color and texture information of the current scene captured by devices such as RGB cameras, and the depth image carries the geometric position information of the current scene captured by devices such as radar and depth cameras.
- The mask map corresponding to the target object, i.e., the target segmentation mask, can be obtained by applying semantic segmentation, target detection, instance segmentation or similar processing to the depth image or RGB image of the current scene.
- It can be understood that the target segmentation mask represents the extent of the target object in the current scene. Deriving it from the depth image or RGB image removes irrelevant information from the current scene and helps the processing of the target object information focus more on the target object.
- For example, a target segmentation mask obtained from an RGB image through instance segmentation not only delineates the extent of the target object in the current scene but also distinguishes different instances of the same category in the current scene.
- In actual execution, an RGB-based DeepLab-series model or a point-cloud-based PointNet++-series model can be used to perform instance segmentation on the depth image or RGB image of the current scene.
- A specific example follows. As shown in Figure 2, the RGB image S1 is obtained through a camera, the depth image is obtained through a radar or depth camera, and the corresponding point cloud information S2 is derived from the depth image.
- Model 1 recognizes the RGB image S1 or the point cloud information S2 of the current scene to obtain the target mask of the target object. The target mask corresponds to the target segmentation mask and can be obtained by segmenting either the RGB image S1 or the point cloud information S2.
- The target object information input into the motion planning models includes the target mask, target RGB and target position, where the target mask corresponds to the target segmentation mask, the target RGB corresponds to the RGB image S1, and the target position corresponds to the point cloud information S2.
- Model 1 can be any model that can perform the segmentation task, for example an RGB-based DeepLab-series model or a point-cloud-based PointNet++-series model.
- Taking a robot as an example, when performing motion planning with the motion planning models, the robot's current motion state can also be input. Each motion planning model performs trajectory planning based on the target object information, and the predicted motion trajectory includes motion parameters and a path trajectory; the robot's motion state can then be adjusted according to the predicted motion parameters and its current motion state.
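- Putting the pieces together, a hedged end-to-end inference sketch; `segmenter` stands in for Model 1, `planners` for the expanded models 2_0, ..., 2_N (each exposing a `.plan()` method as in the earlier sketch), and the dictionary keys are illustrative, not field names from the patent:

```python
import numpy as np

def plan_motion(rgb, depth, segmenter, planners):
    """Segment the target object, assemble the target object information,
    query every expanded planner, and fuse the predictions by averaging."""
    mask = segmenter(rgb, depth).astype(np.float32)  # target segmentation mask (H, W)
    target_info = {
        "target_mask": mask,                         # extent of the target object
        "target_rgb": rgb * mask[..., None],         # color/texture restricted to the target
        "target_position": depth * mask,             # geometric cue from the depth image
    }
    first_trajectories = [p.plan(target_info) for p in planners]
    return np.mean(np.stack(first_trajectories, axis=0), axis=0)  # target motion trajectory
```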
- The motion planning device provided by the embodiments of this application is described below; the motion planning device described below and the motion planning method described above may be referred to correspondingly.
- As shown in Figure 3, the motion planning device provided by the embodiments of this application includes: an acquisition module 310 for obtaining image information of the current scene; a first processing module 320 for obtaining target object information based on the image information; a second processing module 330 for inputting the target object information into N motion planning models respectively and obtaining N first motion trajectories output by the N motion planning models, where the N motion planning models have the same structure and different parameters, and N is a positive integer greater than 1; and a third processing module 340 for determining the target motion trajectory based on the N first motion trajectories.
- According to the motion planning device, by setting up N motion planning models with the same structure and different parameters to process the target object information, the N differentiated first motion trajectories output by the N motion planning models improve the accuracy of the target motion trajectory. The entire process does not require redesigning the model structure; it is plug-and-play and has wide applicability.
- In some embodiments, the second processing module 330 is configured to input the target object information into the feature extraction structure of the motion planning model to obtain the first feature vector; map the first feature vector based on the mapping relationship to obtain the second feature vector, where at least two of the N motion planning models have different mapping relationships; and input the second feature vector into the trajectory planning structure of the motion planning model to obtain the first motion trajectory.
- In some embodiments, the second processing module 330 is configured to project the first feature vector onto the target orthogonal matrix to obtain the second feature vector, the mapping relationship including the target orthogonal matrix. The target orthogonal matrix is determined by: obtaining a target symmetric matrix; obtaining orthogonal eigenvectors based on the target symmetric matrix; and determining the target orthogonal matrix based on the orthogonal eigenvectors.
- In some embodiments, the N motion planning models can be trained through the steps described above: inputting sample object information into the feature extraction structures of the N models to be trained, mapping the resulting first sample feature vectors with the respective mapping relationships, feeding the second sample feature vectors into the trajectory planning structures in one-to-one correspondence, and updating the parameters of the N models based on the output motion trajectories and the corresponding sample motion trajectories.
- In some embodiments, the third processing module 340 is configured to sum and average the N first motion trajectories to obtain the target motion trajectory.
- In some embodiments, the acquisition module 310 is configured to obtain the depth image and RGB image of the current scene, and the first processing module 320 is configured to obtain the target segmentation mask of the target object based on at least one of the depth image and the RGB image, and to obtain the target object information based on the depth image, the RGB image and the target segmentation mask.
- An embodiment of the present application also provides a robot. The robot may be a mechanical device such as an intelligent robot, a general service robot, a cleaning robot, a drone or a robotic arm.
- The robot body is provided with an image acquisition device for collecting image information of the current scene, and the controller of the robot is electrically connected to the image acquisition device. Based on the above motion planning method and the image information of the current scene collected by the image acquisition device, the controller can determine the target motion trajectory and control the robot to move along it.
- In actual execution, the image acquisition device may be an RGB camera, an infrared camera, an RGB-D camera, a lidar, or another image acquisition device capable of imaging and ranging.
- Figure 4 illustrates the physical structure of an electronic device. As shown in Figure 4, the electronic device may include a processor 410, a communications interface 420, a memory 430 and a communication bus 440, where the processor 410, the communications interface 420 and the memory 430 communicate with each other through the communication bus 440. The processor 410 can call logical instructions in the memory 430 to execute the motion planning method, which includes: obtaining image information of the current scene; obtaining target object information based on the image information; inputting the target object information into N motion planning models respectively and obtaining N first motion trajectories output by the N motion planning models, where the N motion planning models have the same structure and different parameters, and N is a positive integer greater than 1; and determining the target motion trajectory based on the N first motion trajectories.
- In addition, the above logical instructions in the memory 430 can be implemented in the form of software functional units and, when sold or used as an independent product, can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application. The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
- Further, this application also provides a computer program product including a computer program, which can be stored on a non-transitory computer-readable storage medium. When the computer program is executed by a processor, the computer can execute the motion planning method provided by the above method embodiments, which includes: obtaining image information of the current scene; obtaining target object information based on the image information; inputting the target object information into N motion planning models respectively and obtaining N first motion trajectories output by the N motion planning models, where the N motion planning models have the same structure and different parameters, and N is a positive integer greater than 1; and determining the target motion trajectory based on the N first motion trajectories.
- Embodiments of this application also provide a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the motion planning method provided by the above embodiments, which includes: obtaining image information of the current scene; obtaining target object information based on the image information; inputting the target object information into N motion planning models respectively and obtaining N first motion trajectories output by the N motion planning models, where the N motion planning models have the same structure and different parameters, and N is a positive integer greater than 1; and determining the target motion trajectory based on the N first motion trajectories.
- The device embodiments described above are only illustrative. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of this embodiment's solution, which persons of ordinary skill in the art can understand and implement without creative effort.
- Through the description of the above embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, or of course by hardware. Based on this understanding, the above technical solution, in essence or in the part that contributes to the prior art, can be embodied as a software product stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk or an optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the various embodiments or parts thereof.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
This application relates to the field of computer technology and provides a motion planning method and device, and a robot. The method includes: obtaining image information of the current scene; obtaining target object information based on the image information; inputting the target object information into N motion planning models respectively, and obtaining N first motion trajectories output by the N motion planning models, where the N motion planning models have the same structure and different parameters, and N is a positive integer greater than 1; and determining the target motion trajectory based on the N first motion trajectories. By setting up N motion planning models with the same structure and different parameters to process the target object information, the N motion planning models output N differentiated first motion trajectories, which improves the accuracy of the target motion trajectory without redesigning the model structure, giving the method wide applicability; based on this motion planning method, a service robot can be intelligently controlled to move along the target motion trajectory.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This disclosure claims priority to Chinese patent application No. 202210303708.9, titled "Motion planning method and device, and robot" and filed with the China National Intellectual Property Administration on March 24, 2022, the entire disclosure of which is incorporated herein by reference.
This application relates to the field of computer technology, and in particular to motion planning methods, devices and robots.
Computer vision technology processes point clouds, pictures, videos and other data collected by devices to realize functions such as target recognition, scene analysis and image understanding, and is widely used in fields such as robot motion control.
The object recognition ability of a network model is trained on commonly used standard image data sets or collected scene image data sets, and motion planning is performed based on the objects recognized by the trained network model to realize intelligent control of the robot.
However, objects in the actual scene differ from objects in the training data set in texture, structure, open/closed state and other attributes. When a network model is used for recognition and planning, objects are easily misrecognized, which biases the motion trajectory planned for the service robot and impairs its intelligent control.
SUMMARY OF THE INVENTION
This application aims to solve at least one of the technical problems existing in the prior art. To this end, this application proposes a motion planning method to improve the accuracy of motion planning.
According to a first aspect of this application, a motion planning method is provided, including:
obtaining image information of the current scene;
obtaining target object information based on the image information;
inputting the target object information into N motion planning models respectively, and obtaining N first motion trajectories output by the N motion planning models, where the N motion planning models have the same structure and different parameters, and N is a positive integer greater than 1;
determining the target motion trajectory based on the N first motion trajectories.
According to the motion planning method of the embodiments of this application, by setting up N motion planning models with the same structure and different parameters to process the target object information, the N motion planning models output N accurate but differentiated first motion trajectories, which improves the accuracy of the target motion trajectory. The whole process does not require redesigning the model structure; it is plug-and-play and has wide applicability.
According to an embodiment of this application, inputting the target object information into N motion planning models respectively and obtaining N first motion trajectories output by the N motion planning models includes:
inputting the target object information into the feature extraction structure of the motion planning model to obtain a first feature vector output by the feature extraction structure;
mapping the first feature vector based on a mapping relationship to obtain a second feature vector, where the mapping relationships of at least two of the N motion planning models are different;
inputting the second feature vector into the trajectory planning structure of the motion planning model to obtain the first motion trajectory output by the trajectory planning structure.
According to an embodiment of this application, mapping the first feature vector based on the mapping relationship to obtain the second feature vector includes:
projecting the first feature vector onto a target orthogonal matrix to obtain the second feature vector, where the mapping relationship includes the target orthogonal matrix.
According to an embodiment of this application, the target orthogonal matrix is determined through the following steps:
obtaining a target symmetric matrix;
obtaining orthogonal eigenvectors based on the target symmetric matrix;
determining the target orthogonal matrix based on the orthogonal eigenvectors.
According to an embodiment of this application, the N motion planning models are trained through the following steps:
inputting sample object information into the feature extraction structures of the N motion planning models to be trained to obtain N first sample feature vectors;
mapping the N first sample feature vectors respectively based on mapping relationships to obtain N second sample feature vectors, where the mapping relationships of at least two of the N motion planning models are different;
inputting the N second sample feature vectors into the trajectory planning structures of the N motion planning models in one-to-one correspondence, and updating the parameters of the N motion planning models based on the motion trajectories output by the N motion planning models and the sample motion trajectories corresponding to the sample object information.
According to an embodiment of this application, determining the target motion trajectory based on the N first motion trajectories includes:
summing and averaging the N first motion trajectories to obtain the target motion trajectory.
According to an embodiment of this application, obtaining the image information of the current scene includes:
obtaining an RGB image and a depth image of the current scene;
and obtaining the target object information based on the image information includes:
obtaining a target segmentation mask of the target object based on at least one of the RGB image and the depth image;
obtaining the target object information based on the RGB image, the depth image and the target segmentation mask.
According to a second aspect of this application, a motion planning device is provided, including:
an acquisition module for obtaining image information of the current scene;
a first processing module for obtaining target object information based on the image information;
a second processing module for inputting the target object information into N motion planning models respectively and obtaining N first motion trajectories output by the N motion planning models, where the N motion planning models have the same structure and different parameters, and N is a positive integer greater than 1;
a third processing module for determining the target motion trajectory based on the N first motion trajectories.
A robot according to an embodiment of a third aspect of this application includes:
a robot body provided with an image acquisition device for collecting image information of the current scene;
a controller electrically connected to the image acquisition device, the controller being configured to control the robot to move along the target motion trajectory based on the above motion planning method.
An electronic device according to an embodiment of a fourth aspect of this application includes a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, any one of the above motion planning methods is implemented.
A non-transitory computer-readable storage medium according to an embodiment of a fifth aspect of this application stores a computer program which, when executed by a processor, implements any one of the above motion planning methods.
A computer program product according to an embodiment of a sixth aspect of this application includes a computer program which, when executed by a processor, implements any one of the above motion planning methods.
The above technical solutions in the embodiments of this application have at least one of the following technical effects:
By setting up N motion planning models with the same structure and different parameters to process the target object information, the N motion planning models output N differentiated first motion trajectories, which improves the accuracy of the target motion trajectory without redesigning the model structure, giving the method wide applicability.
Further, the motion planning method can be expanded using existing models. The N motion planning models have the same structure and only differ in parameters, so their training processes are the same, which greatly shortens the development cycle, does not limit the type of model, and flexibly adapts to existing network models.
Furthermore, the N motion planning models can be trained using the sample object information and corresponding sample motion trajectories of the same sample training set. Since the structures of the N motion planning models are the same, training is a process of configuring the internal parameters of the N motion planning models, which eliminates the need to re-collect image data sets and effectively reduces the workload of training the models.
Additional aspects and advantages of this application will be set forth in part in the following description, will in part become apparent from the following description, or will be learned through practice of this application.
To illustrate the technical solutions of the embodiments of this application or the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application; persons of ordinary skill in the art can obtain other drawings from them without creative effort.
Figure 1 is a schematic flowchart of the motion planning method provided by an embodiment of this application;
Figure 2 is a schematic algorithm flow diagram of the motion planning model provided by an embodiment of this application;
Figure 3 is a schematic structural diagram of the motion planning device provided by an embodiment of this application;
Figure 4 is a schematic structural diagram of the electronic device provided by an embodiment of this application.
The implementation of this application is described in further detail below in conjunction with the drawings and embodiments. The following embodiments illustrate this application but do not limit its scope.
In the description of the embodiments of this application, it should be noted that the terms "first," "second" and "third" are used only for descriptive purposes and should not be understood as indicating or implying relative importance.
In the description of the embodiments of this application, it should be noted that, unless otherwise expressly specified and limited, the terms "connected" and "connection" should be understood in a broad sense: a connection can be fixed, detachable or integrated; mechanical or electrical; direct or indirect through an intermediate medium. For persons of ordinary skill in the art, the specific meanings of these terms in the embodiments of this application can be understood according to the specific situation.
In the description of this specification, reference to the terms "one embodiment," "some embodiments," "an example," "specific examples," or "some examples" means that specific features, structures, materials or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of this application. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they are not inconsistent with each other, those skilled in the art may combine different embodiments or examples described in this specification and the features thereof.
Computer vision technology processes point clouds, pictures, videos and other data collected by devices to realize functions such as target recognition, scene analysis and image understanding, and is widely used in the motion control of intelligent service robots.
The object recognition ability of a network model is trained on commonly used standard image data sets or collected scene image data sets, and motion planning is performed based on the objects recognized by the trained network model to realize intelligent control of an intelligent service robot. It can be understood that the intelligent service robot may be a home-service-oriented robot.
However, objects in the actual scene differ from objects in the training data set in texture, structure, open/closed state and other attributes. For example, the cabinet in the training image data set has a white mirrored door that opens to the left, while the cabinet in the actual scene has a brown wood-textured door that opens to the right.
When a network model trained on such image data sets performs target recognition and motion planning, objects are easily misrecognized, which biases the robot's motion trajectory and impairs the robot's intelligent control.
Re-collecting image data sets not only increases the workload of network model training but also cannot cope with the endless variation of target objects in actual scenes. In the related art, recognition and planning performance is improved by accumulating the number of parameters of a network model, redesigning the structure of a network model, or collecting and aggregating various network models.
Among these, while keeping the network model structure, improving the model by simply accumulating parameters brings only a limited gain in target recognition and trajectory planning, and a network model with a very large number of parameters is difficult to train and tune.
Redesigning a network model is uncertain: it involves adjusting a large number of parameters and structures, places high demands on developers, has a long development and iteration cycle, and carries a high risk of design failure.
Widely collecting network models for aggregation requires strictly ensuring that each collected model contributes positively to the current task; different network models also differ in how hard they are to train, requiring continuous manual debugging and intervention, so the collection cycle is long and consumes excessive time, computing resources and labor.
The motion planning method of the embodiments of this application is described below in conjunction with Figures 1 and 2. By expanding the network model and using the motion planning results predicted by the expanded models to guide the robot's movement, there is no need to redesign the network model, and the recognition and planning performance of the model can be improved rapidly.
As shown in Figure 1, the motion planning method of the embodiments of this application includes steps 110 to 140. The method is applied to the motion trajectory planning of a robot, and the execution subject of the method can be the controller of the robot or other equipment, a cloud, or an edge server.
Step 110: Obtain the image information of the current scene.
The current scene is a scene that needs visual recognition; it includes a target object, i.e., the object to be recognized.
In this embodiment, the image information of the current scene is the image data or point cloud data of the current scene collected by devices such as cameras or radars.
For example, a service robot is equipped with a camera that collects image data in front of the robot to obtain image information of the current scene where the robot is located.
Step 120: Obtain target object information based on the image information.
The target object information refers to the image information related to the target object in the current scene.
In this step, obtaining the target object information corresponding to the target object from the image information of the current scene removes the influence of irrelevant information, so that the subsequent trajectory planning process focuses more on the target object.
In actual execution, the target object information can be obtained from the image information of the current scene through processing methods such as semantic segmentation, target detection and instance segmentation.
For example, target detection is performed on the image information of the current scene to locate the category information and location information of the target object and obtain the corresponding target object information.
As another example, instance segmentation is performed on the image information of the current scene: all pixels are classified and different individuals of the same category are distinguished, to obtain the corresponding target object information.
Step 130: Input the target object information into N motion planning models respectively, and obtain N first motion trajectories output by the N motion planning models, where N is a positive integer greater than 1.
The target object can be an obstacle in the current scene. In that case, the motion trajectory output by a motion planning model based on the target object information can be an avoidance trajectory; moving along this trajectory avoids colliding with the target object.
The target object can also be an item that needs to be operated in the current scene, in which case the output motion trajectory moves towards the target object. For example, if the target object the robot needs to operate is a refrigerator, the robot moves along the motion trajectory to the refrigerator's location and operates it.
In this embodiment, the motion planning implemented by the motion planning model includes two processes: path planning and trajectory optimization.
Path planning plans a path from the initial position to the target position based on the target object information; this process considers only the geometric constraints of the current scene.
Trajectory optimization constrains the path computed by path planning together with the robot's motion state, and outputs the corresponding motion parameters.
The first motion trajectory output by each motion planning model for the target object information includes motion parameters and a path trajectory.
In this embodiment, the target object information is input into the N motion planning models respectively. The structures of the N motion planning models are the same, so the trajectory planning steps they apply to the target object information are the same; their parameters differ, so the N first motion trajectories they output for the target object information differ.
Step 140: Determine the target motion trajectory based on the N first motion trajectories.
In this embodiment, the target object information is input into N motion planning models with different parameters for trajectory prediction, and the N motion planning models correspondingly output N first motion trajectories; from these N different first motion trajectories, the target motion trajectory used to guide the robot's movement is determined.
By expanding the number of motion planning models so that they share the same structure but have different parameters, each motion planning model outputs an accurate but differentiated prediction, and aggregating the planning results predicted by all N expanded motion planning models effectively improves the accuracy of the target motion trajectory.
It should be noted that the number of motion planning models can be set as needed. The N motion planning models have the same structure and correspondingly the same training process, which greatly shortens the development cycle; compared with redesigning a network model or aggregating different network models, the target recognition and trajectory planning performance of the model can be improved quickly.
It can be understood that the motion planning method provided by the embodiments of this application can be expanded from existing models: the N motion planning models have the same structure and only differ in parameters, the type of model is not limited, and the method flexibly adapts to existing network models.
According to the motion planning method provided by the embodiments of this application, by setting up N motion planning models with the same structure and different parameters to process the target object information, the N motion planning models output N accurate but differentiated first motion trajectories, which improves the accuracy of the target motion trajectory. The entire process does not require redesigning the model structure; it is plug-and-play and has wide applicability.
The motion planning method provided by the embodiments of this application is a general-purpose method; besides intelligent service robots, it can also be applied to other machine motion planning fields, including but not limited to autonomous driving and robotic arm machining. In some embodiments, step 130 includes:
inputting the target object information into the feature extraction structure of the motion planning model for feature extraction, to obtain the first feature vector output by the feature extraction structure;
mapping the first feature vector based on the mapping relationship to obtain the second feature vector;
inputting the second feature vector into the trajectory planning structure of the motion planning model for trajectory planning, to obtain the first motion trajectory output by the trajectory planning structure.
In this embodiment, taking a single motion planning model as an example, the process by which the motion planning model processes the target object information and outputs the corresponding first motion trajectory is described.
The motion planning model includes a feature extraction structure and a trajectory planning structure: the feature extraction structure performs feature extraction on the target object information, and the trajectory planning structure performs trajectory planning on the feature vector output by the feature extraction structure, then outputs the first motion trajectory corresponding to the target object information.
The feature extraction structure extracts the first feature vector corresponding to the target object information; the first feature vector is mapped based on the preset mapping relationship to obtain a new second feature vector; and the mapped second feature vector is input into the subsequent trajectory planning structure for trajectory planning.
Mapping refers to the correspondence between the elements of two sets. Mapping the first feature vector converts the values in the first feature vector according to the preset mapping relationship to obtain the corresponding second feature vector.
In this embodiment, the mapping relationships of at least two of the N motion planning models are different. Different mapping relationships mean that the second feature vectors input into the trajectory planning structures differ, and consequently the first motion trajectories output by the trajectory planning structures also differ.
In some embodiments, the first feature vector is projected onto the target orthogonal matrix to obtain the second feature vector. The target orthogonal matrix is an orthogonal matrix, i.e., a square matrix whose row vectors and column vectors are orthonormal unit vectors: the dot product of any two distinct rows is 0, and the dot product of any row with itself is 1.
In this embodiment, the target orthogonal matrix is used as the projection matrix for the first feature vector, that is, as the mapping relationship of the mapping process: the first feature vector is projected onto the space represented by the target orthogonal matrix to obtain the second feature vector.
At least two of the N motion planning models have different target orthogonal matrices; the target orthogonal matrix embodies the mapping relationship between the first feature vector and the second feature vector.
In actual execution, the N motion planning models can correspond to N different target orthogonal matrices, i.e., the mapping relationships of the N motion planning models can all be different.
In some embodiments, the target orthogonal matrix is determined through the following steps: obtaining a target symmetric matrix; obtaining orthogonal eigenvectors based on the target symmetric matrix; and determining the target orthogonal matrix based on the orthogonal eigenvectors.
The target symmetric matrix is a symmetric matrix, i.e., a matrix whose elements are equal when mirrored across the main diagonal.
In this embodiment, the pairwise-orthogonal eigenvectors of the target symmetric matrix are computed, i.e., the orthogonal eigenvectors of the target symmetric matrix are obtained, and the target orthogonal matrix corresponding to the target symmetric matrix is determined from them.
It should be noted that the target orthogonal matrices of the N motion planning models can all differ from one another, i.e., the N motion planning models perform the mapping with different mapping relationships: different target symmetric matrices are generated randomly, their orthogonal eigenvectors are computed, and different target orthogonal matrices are thereby determined.
A specific example follows. Using the target orthogonal matrix B_0 as the projection matrix, the first feature vector g_0 extracted by the feature extraction structure G_0 can be projected into a new feature representation space: f_0 = g_0 B_0.
In some embodiments, the N motion planning models are trained through the following steps:
inputting sample object information into the feature extraction structures of the N motion planning models to be trained for feature extraction, the feature extraction structures outputting N first sample feature vectors;
mapping the N first sample feature vectors respectively according to the preset mapping relationships to obtain N second sample feature vectors;
inputting the N second sample feature vectors into the trajectory planning structures of the N motion planning models in one-to-one correspondence;
training the N motion planning models individually, in one-to-one correspondence, based on the motion trajectories output by the trajectory planning structures and the sample motion trajectories corresponding to the sample object information, and updating the parameters of the N motion planning models.
In this embodiment, the N motion planning models can be trained using the sample object information and corresponding sample motion trajectories of the same sample training set. The structures of the N motion planning models are the same, and training is a process of configuring the internal parameters of the N motion planning models.
The mapping process is introduced between the feature extraction structure and the trajectory planning structure of each motion planning model to be trained: the N first sample feature vectors output by the feature extraction structures of the N models to be trained are mapped respectively, and at least two of the N motion planning models use different mapping relationships.
In actual execution, a model with a feature extraction structure and a trajectory planning structure can be selected in advance as the original model; the feature-vector mapping process is introduced at the connection between the feature extraction structure and the trajectory planning structure of the original model, which is then expanded in number.
A specific example follows.
The mapping process is introduced between the feature extraction structure G_0 and the trajectory planning structure F_0 to expand the original model 2_0, obtaining new models 2_0, 2_1, 2_2, ..., 2_N.
Based on the sample object information and sample motion trajectories of the same sample training set, the new models 2_0, 2_1, 2_2, ..., 2_N are trained individually, updating the internal parameters of their feature extraction structures and trajectory planning structures. Among these N+1 motion planning models, the feature extraction structures {G_0, G_1, G_2, ..., G_N} have the same structure, and the trajectory planning structures {F_0, F_1, F_2, ..., F_N} also have the same structure.
N+1 target symmetric matrices {A_0, A_1, A_2, ..., A_N} are constructed, yielding N+1 target orthogonal matrices. Because of the mapping process based on the target orthogonal matrices, the parameters of the feature extraction structures {G_0, G_1, G_2, ..., G_N} end up different, and correspondingly the parameters of the trajectory planning structures {F_0, F_1, F_2, ..., F_N} also end up different.
The target object information is input into the N+1 feature extraction structures {G_0, G_1, G_2, ..., G_N} of the trained new models 2_0, 2_1, 2_2, ..., 2_N, which correspondingly output N+1 extracted first sample feature vectors {g_0, g_1, g_2, ..., g_N}. According to the N+1 target orthogonal matrices, the first feature vectors {g_0, g_1, g_2, ..., g_N} are mapped to output N+1 new second feature vectors, which are finally input into the N+1 trajectory planning structures {F_0, F_1, F_2, ..., F_N} to predict the corresponding N+1 first motion trajectories {t_0, t_1, t_2, ..., t_N}.
Note that the expanded new models 2_0, 2_1, 2_2, ..., 2_N are all trained individually using the training method of the original model 2_0; the training process continuously adjusts the internal parameters of each model's feature extraction structure and trajectory planning structure.
In some embodiments, step 140 includes:
summing and averaging the N first motion trajectories to obtain the target motion trajectory.
In this embodiment, the N motion planning models output accurate but differentiated first motion trajectories; the first motion trajectories predicted by the individual motion planning models are fused by sum-and-average, and the mean of the N first motion trajectories is taken as the final predicted target motion trajectory.
For example, the first motion trajectories output by the new models 2_0, 2_1, 2_2, ..., 2_N are {t_0, t_1, t_2, ..., t_N}, and the target motion trajectory is calculated as t* = (1/(N+1)) · Σ_{i=0}^{N} t_i, where t* is the target motion trajectory and t_i is the i-th first motion trajectory.
In this embodiment, by expanding the number of motion planning models and using motion planning models with different parameters to output accurate but differentiated predictions, summing and averaging all predicted planning results effectively improves the accuracy of the predicted target motion trajectory.
In some embodiments, step 110 includes: obtaining a depth image and an RGB image of the current scene;
step 120 includes: performing instance segmentation based on at least one of the depth image and the RGB image to obtain a target segmentation mask of the target object;
and obtaining the target object information based on the depth image, the RGB image and the target segmentation mask.
The RGB image carries the color and texture information of the current scene captured by devices such as RGB cameras, and the depth image carries the geometric position information of the current scene captured by devices such as radar and depth cameras.
In this embodiment, the mask map corresponding to the target object, i.e., the target segmentation mask, can be obtained by applying semantic segmentation, target detection, instance segmentation or similar processing to the depth image or RGB image of the current scene.
It can be understood that the target segmentation mask represents the extent of the target object in the current scene. Deriving the target segmentation mask from the depth image or RGB image removes irrelevant information from the current scene and helps the processing of the target object information focus more on the target object.
For example, a target segmentation mask obtained from an RGB image through instance segmentation not only delineates the extent of the target object in the current scene but also distinguishes different instances of the same category in the current scene.
In actual execution, an RGB-based DeepLab-series model or a point-cloud-based PointNet++-series model can be used to perform instance segmentation on the depth image or RGB image of the current scene.
A specific example follows.
As shown in Figure 2, the RGB image S1 is obtained through a camera, the depth image is obtained through a radar or depth camera, and the corresponding point cloud information S2 is derived from the depth image.
Model 1 recognizes the RGB image S1 or the point cloud information S2 of the current scene to obtain the target mask of the target object. The target mask corresponds to the target segmentation mask and can be obtained by segmenting either the RGB image S1 or the point cloud information S2.
The target object information input into the motion planning models includes the target mask, target RGB and target position, where the target mask corresponds to the target segmentation mask, the target RGB corresponds to the RGB image S1, and the target position corresponds to the point cloud information S2.
Model 1 can be any model that can perform the segmentation task, for example an RGB-based DeepLab-series model or a point-cloud-based PointNet++-series model.
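The derivation of the point cloud information S2 from the depth image mentioned above is a standard pinhole back-projection; a minimal NumPy sketch, assuming known camera intrinsics (fx, fy, cx, cy) and depth in meters — the function name and parameters are illustrative, not from this application:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an (M, 3) camera-frame
    point cloud using the pinhole model: x = (u - cx) * z / fx, etc."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels without a valid depth reading
```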
Taking a robot as an example, when performing motion planning with the motion planning models, the robot's current motion state can also be input. Each motion planning model performs trajectory planning based on the target object information; the predicted motion trajectory includes motion parameters and a path trajectory, and the robot's motion state can be adjusted according to the predicted motion parameters and its current motion state.
The motion planning device provided by the embodiments of this application is described below; the motion planning device described below and the motion planning method described above may be referred to correspondingly.
As shown in Figure 3, the motion planning device provided by the embodiments of this application includes:
an acquisition module 310 for obtaining image information of the current scene;
a first processing module 320 for obtaining target object information based on the image information;
a second processing module 330 for inputting the target object information into N motion planning models respectively and obtaining N first motion trajectories output by the N motion planning models, where the N motion planning models have the same structure and different parameters, and N is a positive integer greater than 1;
a third processing module 340 for determining the target motion trajectory based on the N first motion trajectories.
According to the motion planning device provided by the embodiments of this application, by setting up N motion planning models with the same structure and different parameters to process the target object information, the N differentiated first motion trajectories output by the N motion planning models improve the accuracy of the target motion trajectory. The entire process does not require redesigning the model structure; it is plug-and-play and has wide applicability.
In some embodiments, the second processing module 330 is configured to input the target object information into the feature extraction structure of the motion planning model to obtain the first feature vector output by the feature extraction structure;
map the first feature vector based on the mapping relationship to obtain the second feature vector, where the mapping relationships of at least two of the N motion planning models are different;
and input the second feature vector into the trajectory planning structure of the motion planning model to obtain the first motion trajectory output by the trajectory planning structure.
In some embodiments, the second processing module 330 is configured to project the first feature vector onto the target orthogonal matrix to obtain the second feature vector, the mapping relationship including the target orthogonal matrix.
In some embodiments, the target orthogonal matrix is determined through the following steps:
obtaining a target symmetric matrix;
obtaining orthogonal eigenvectors based on the target symmetric matrix;
determining the target orthogonal matrix based on the orthogonal eigenvectors.
In some embodiments, the N motion planning models can be trained through the following steps:
inputting sample object information into the feature extraction structures of the N motion planning models to be trained to obtain N first sample feature vectors;
mapping the N first sample feature vectors respectively based on the mapping relationships to obtain N second sample feature vectors, where the mapping relationships of at least two of the N motion planning models are different;
inputting the N second sample feature vectors into the trajectory planning structures of the N motion planning models in one-to-one correspondence, and updating the parameters of the N motion planning models based on the motion trajectories output by the N motion planning models and the sample motion trajectories corresponding to the sample object information.
In some embodiments, the third processing module 340 is configured to sum and average the N first motion trajectories to obtain the target motion trajectory.
In some embodiments, the acquisition module 310 is configured to obtain the depth image and RGB image of the current scene;
and the first processing module 320 is configured to obtain the target segmentation mask of the target object based on at least one of the depth image and the RGB image, and to obtain the target object information based on the depth image, the RGB image and the target segmentation mask.
An embodiment of this application also provides a robot.
In the embodiments of this application, the robot may be a mechanical device such as an intelligent robot, a general service robot, a cleaning robot, a drone or a robotic arm.
The robot body is provided with an image acquisition device for collecting image information of the current scene, and the robot's controller is electrically connected to the image acquisition device. Based on the above motion planning method and the image information of the current scene collected by the image acquisition device, the controller can determine the target motion trajectory and control the robot to move along it.
In actual execution, the image acquisition device may be an RGB camera, an infrared camera, an RGB-D camera, a lidar, or another image acquisition device capable of imaging and ranging.
Figure 4 illustrates the physical structure of an electronic device. As shown in Figure 4, the electronic device may include a processor 410, a communications interface 420, a memory 430 and a communication bus 440, where the processor 410, the communications interface 420 and the memory 430 communicate with each other through the communication bus 440. The processor 410 can call logical instructions in the memory 430 to execute the motion planning method, which includes: obtaining image information of the current scene; obtaining target object information based on the image information; inputting the target object information into N motion planning models respectively and obtaining N first motion trajectories output by the N motion planning models, where the N motion planning models have the same structure and different parameters, and N is a positive integer greater than 1; and determining the target motion trajectory based on the N first motion trajectories.
In addition, the above logical instructions in the memory 430 can be implemented in the form of software functional units and, when sold or used as an independent product, can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application. The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
Further, this application also provides a computer program product including a computer program, which can be stored on a non-transitory computer-readable storage medium. When the computer program is executed by a processor, the computer can execute the motion planning method provided by the above method embodiments, which includes: obtaining image information of the current scene; obtaining target object information based on the image information; inputting the target object information into N motion planning models respectively and obtaining N first motion trajectories output by the N motion planning models, where the N motion planning models have the same structure and different parameters, and N is a positive integer greater than 1; and determining the target motion trajectory based on the N first motion trajectories.
In another aspect, embodiments of this application also provide a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the motion planning method provided by the above embodiments, which includes: obtaining image information of the current scene; obtaining target object information based on the image information; inputting the target object information into N motion planning models respectively and obtaining N first motion trajectories output by the N motion planning models, where the N motion planning models have the same structure and different parameters, and N is a positive integer greater than 1; and determining the target motion trajectory based on the N first motion trajectories.
The device embodiments described above are only illustrative. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of this embodiment's solution, which persons of ordinary skill in the art can understand and implement without creative effort.
Through the description of the above embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, or of course by hardware. Based on this understanding, the above technical solution, in essence or in the part that contributes to the prior art, can be embodied as a software product stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk or an optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the various embodiments or parts thereof.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; these modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.
The above embodiments are only used to illustrate this application and do not limit it. Although this application has been described in detail with reference to the embodiments, persons of ordinary skill in the art should understand that various combinations, modifications or equivalent substitutions of the technical solutions of this application, made without departing from the spirit and scope of the technical solutions of this application, should all be covered by the scope of the claims of this application.
Claims (12)
- A motion planning method, comprising: obtaining image information of the current scene; obtaining target object information based on the image information; inputting the target object information into N motion planning models respectively, and obtaining N first motion trajectories output by the N motion planning models, wherein the N motion planning models have the same structure and different parameters, and N is a positive integer greater than 1; and determining the target motion trajectory based on the N first motion trajectories.
- The motion planning method according to claim 1, wherein inputting the target object information into N motion planning models respectively and obtaining N first motion trajectories output by the N motion planning models comprises: inputting the target object information into the feature extraction structure of the motion planning model to obtain a first feature vector output by the feature extraction structure; mapping the first feature vector based on a mapping relationship to obtain a second feature vector, wherein the mapping relationships of at least two of the N motion planning models are different; and inputting the second feature vector into the trajectory planning structure of the motion planning model to obtain the first motion trajectory output by the trajectory planning structure.
- The motion planning method according to claim 2, wherein mapping the first feature vector based on the mapping relationship to obtain the second feature vector comprises: projecting the first feature vector onto a target orthogonal matrix to obtain the second feature vector, the mapping relationship comprising the target orthogonal matrix.
- The motion planning method according to claim 3, wherein the target orthogonal matrix is determined through the following steps: obtaining a target symmetric matrix; obtaining orthogonal eigenvectors based on the target symmetric matrix; and determining the target orthogonal matrix based on the orthogonal eigenvectors.
- The motion planning method according to any one of claims 1-4, wherein the N motion planning models are trained through the following steps: inputting sample object information into the feature extraction structures of the N motion planning models to be trained to obtain N first sample feature vectors; mapping the N first sample feature vectors respectively based on mapping relationships to obtain N second sample feature vectors, wherein the mapping relationships of at least two of the N motion planning models are different; and inputting the N second sample feature vectors into the trajectory planning structures of the N motion planning models in one-to-one correspondence, and updating the parameters of the N motion planning models based on the motion trajectories output by the N motion planning models and the sample motion trajectories corresponding to the sample object information.
- The motion planning method according to any one of claims 1-5, wherein determining the target motion trajectory based on the N first motion trajectories comprises: summing and averaging the N first motion trajectories to obtain the target motion trajectory.
- The motion planning method according to any one of claims 1-6, wherein obtaining the image information of the current scene comprises: obtaining an RGB image and a depth image of the current scene; and obtaining the target object information based on the image information comprises: obtaining a target segmentation mask of the target object based on at least one of the RGB image and the depth image; and obtaining the target object information based on the RGB image, the depth image and the target segmentation mask.
- A motion planning device, comprising: an acquisition module for obtaining image information of the current scene; a first processing module for obtaining target object information based on the image information; a second processing module for inputting the target object information into N motion planning models respectively and obtaining N first motion trajectories output by the N motion planning models, wherein the N motion planning models have the same structure and different parameters, and N is a positive integer greater than 1; and a third processing module for determining the target motion trajectory based on the N first motion trajectories.
- A robot, comprising: a robot body provided with an image acquisition device for collecting image information of the current scene; and a controller electrically connected to the image acquisition device, the controller being configured to control the robot to move along the target motion trajectory based on the motion planning method according to any one of claims 1 to 7.
- An electronic device, comprising: a processor; and a memory storing a computer program executable on the processor, wherein the processor, when executing the program, implements the motion planning method according to any one of claims 1 to 7.
- A non-transitory computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the motion planning method according to any one of claims 1 to 7.
- A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the motion planning method according to any one of claims 1 to 7.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210303708.9A CN114693721B (zh) | 2022-03-24 | 2022-03-24 | Motion planning method and device, and robot |
CN202210303708.9 | 2022-03-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023178931A1 true WO2023178931A1 (zh) | 2023-09-28 |
Family
ID=82139775
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/117222 WO2023178931A1 (zh) | Motion planning method and device, and robot | 2022-03-24 | 2022-09-06 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114693721B (zh) |
WO (1) | WO2023178931A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114693721B (zh) * | 2022-03-24 | 2023-09-01 | 美的集团(上海)有限公司 | Motion planning method and device, and robot |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104820905B (zh) * | 2015-05-19 | 2018-11-20 | 威海北洋电气集团股份有限公司 | Personnel management and control method and system based on big-data analysis of spatial trajectories |
CN112140101A (zh) * | 2019-06-28 | 2020-12-29 | 鲁班嫡系机器人(深圳)有限公司 | Trajectory planning method, apparatus and system |
EP4074563A4 (en) * | 2019-12-30 | 2022-12-28 | Huawei Technologies Co., Ltd. | TRAJECTORY PREDICTION METHOD AND ASSOCIATED DEVICE |
CN111409072B (zh) * | 2020-04-02 | 2023-03-07 | 北京航空航天大学杭州创新研究院 | Motion trajectory planning processing method and apparatus |
CN111583715B (zh) * | 2020-04-29 | 2022-06-03 | 宁波吉利汽车研究开发有限公司 | Vehicle trajectory prediction method, vehicle collision early-warning method, apparatus and storage medium |
CN113506317B (zh) * | 2021-06-07 | 2022-04-22 | 北京百卓网络技术有限公司 | Multi-target tracking method based on Mask R-CNN and appearance feature fusion |
CN113553909B (zh) * | 2021-06-23 | 2023-08-04 | 北京百度网讯科技有限公司 | Model training method for skin detection and skin detection method |
- 2022-03-24: CN application CN202210303708.9A, granted as CN114693721B (active)
- 2022-09-06: PCT application PCT/CN2022/117222, published as WO2023178931A1
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112287728A (zh) * | 2019-07-24 | 2021-01-29 | Agent trajectory planning method, apparatus, system, storage medium and device |
CN111126396A (zh) * | 2019-12-25 | 2020-05-08 | Image recognition method and apparatus, computer device and storage medium |
CN111367318A (zh) * | 2020-03-31 | 2020-07-03 | Navigation method and apparatus for dynamic-obstacle environments based on visual semantic information |
CN112927260A (zh) * | 2021-02-26 | 2021-06-08 | Pose generation method and apparatus, computer device and storage medium |
CN113392359A (zh) * | 2021-08-18 | 2021-09-14 | Multi-target prediction method, apparatus, device and storage medium |
CN114693721A (zh) * | 2022-03-24 | 2022-07-01 | Motion planning method and device, and robot |
Non-Patent Citations (1)
Title |
---|
Bingyu Liu, Zhen Zhao, Zhenpeng Li, Jianan Jiang, Yuhong Guo, Haifeng Shen, Jieping Ye: "Feature Transformation Ensemble Model with Batch Spectral Regularization for Cross-Domain Few-Shot Classification", arXiv.org, Cornell University Library, XP081675207 * |
Also Published As
Publication number | Publication date |
---|---|
CN114693721A (zh) | 2022-07-01 |
CN114693721B (zh) | 2023-09-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022217840A1 (zh) | High-precision multi-target tracking method in complex backgrounds | |
US11954870B2 (en) | Dynamic scene three-dimensional reconstruction method, apparatus and system, server, and medium | |
KR102175491B1 (ko) | 상관 필터 기반 객체 추적 방법 | |
Barranco et al. | Real-time clustering and multi-target tracking using event-based sensors | |
CN103149940B (zh) | 一种结合均值移动与粒子滤波的无人机目标跟踪方法 | |
CN108229416B (zh) | 基于语义分割技术的机器人slam方法 | |
WO2022042304A1 (zh) | Method and apparatus for recognizing scene contours, computer-readable medium and electronic device | |
CN110287907B (zh) | 一种对象检测方法和装置 | |
CN106886748B (zh) | 一种基于tld的适用于无人机的变尺度目标跟踪方法 | |
CN106991408A (zh) | 一种候选框生成网络的生成方法及人脸检测方法 | |
US11741615B2 (en) | Map segmentation method and device, motion estimation method, and device terminal | |
WO2019041660A1 (zh) | Face deblurring method and apparatus | |
CN113052907B (zh) | 一种动态环境移动机器人的定位方法 | |
CN112200157A (zh) | 一种降低图像背景干扰的人体3d姿态识别方法及其系统 | |
WO2023178931A1 (zh) | Motion planning method and device, and robot | |
WO2022246605A1 (zh) | Key point calibration method and apparatus | |
US20230281862A1 (en) | Sampling based self-supervised depth and pose estimation | |
Liu et al. | A simplified swarm optimization for object tracking | |
Suzui et al. | Toward 6 dof object pose estimation with minimum dataset | |
CN112561995A (zh) | 一种实时高效的6d姿态估计网络、构建方法及估计方法 | |
Lee et al. | LCCRAFT: LiDAR and Camera Calibration Using Recurrent All-Pairs Field Transforms Without Precise Initial Guess | |
Su et al. | Multiplicative gaussian particle filter | |
CN114155281B (zh) | 一种无人机目标跟踪自动初始化方法 | |
Pandit et al. | Generalized method to validate social distancing using median angle proximity methodology | |
Qiu et al. | MAC-VO: Metrics-aware Covariance for Learning-based Stereo Visual Odometry |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22932983; Country of ref document: EP; Kind code of ref document: A1 |