WO2022205841A1 - Robot navigation method and apparatus, terminal device and computer-readable storage medium - Google Patents

Robot navigation method and apparatus, terminal device and computer-readable storage medium

Info

Publication number
WO2022205841A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
information
area
robot
processed
Prior art date
Application number
PCT/CN2021/125040
Other languages
English (en)
Chinese (zh)
Inventor
程骏
顾在旺
庞建新
谭欢
Original Assignee
深圳市优必选科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市优必选科技股份有限公司
Publication of WO2022205841A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/56: Extraction of image or video features relating to colour

Definitions

  • the present application belongs to the technical field of image processing, and in particular, relates to a robot navigation method, device, terminal device and computer-readable storage medium.
  • Robot navigation and mapping is a key technology for robot applications. It means that a robot starts moving from an unknown position in an unknown environment, localizes itself during the movement according to its estimated position, and at the same time builds a map on the basis of that localization, so as to realize the autonomous positioning and navigation of the robot.
  • the embodiments of the present application provide a robot navigation method, device, terminal device, and computer-readable storage medium, which can improve navigation efficiency while ensuring robot navigation accuracy, thereby ensuring real-time and effective control of robot motion.
  • an embodiment of the present application provides a robot navigation method, including:
  • the robot movement is controlled according to the target passage area.
  • In the above method, the initial passing area is first segmented from the image to be processed by an image segmentation method, and the first position information of the target obstacle is then detected from the image to be processed by a target detection method; this is equivalent to using target detection for smaller objects and using image segmentation, instead of target detection, for larger objects.
  • The target passage area is then determined according to the initial passage area and the first position information; that is, the segmented initial passage area is adjusted using the detected first position information of the target obstacle, so as to ensure the accuracy of navigation.
  • In this way, the navigation efficiency of the robot can be improved while ensuring its navigation accuracy, thereby ensuring real-time and effective control of the robot's motion.
  • the segmenting the initial pass area from the to-be-processed image includes:
  • An initial pass area is segmented from the to-be-processed image according to the optical three primary color information and the image depth information.
  • segmenting the initial pass area from the to-be-processed image according to the optical three primary color information and the image depth information includes:
  • the optical three primary color information and the image depth information are input into the pass area identification model, and the initial pass area is output.
  • the passing area identification model includes a first feature extraction network, a second feature extraction network, and a segmentation network;
  • the first feature information and the second feature information are input into the segmentation network, and the initial pass area is output.
  • the passing area identification model further includes a detection network
  • the detecting the first position information of the target obstacle in the to-be-processed image includes:
  • the first feature information and the second feature information are input into the detection network, and the first position information is output.
  • the determining a target passage area according to the initial passage area and the first location information includes:
  • the target passage area is determined according to the third location information.
  • an embodiment of the present application provides a robot navigation device, including:
  • the image acquisition unit is used to acquire the to-be-processed image of the road ahead of the robot;
  • an image segmentation unit used for segmenting an initial pass area from the to-be-processed image
  • a target detection unit configured to detect the first position information of the target obstacle in the to-be-processed image
  • a passage area determination unit configured to determine a target passage area according to the initial passage area and the first position information
  • a motion control unit configured to control the motion of the robot according to the target passing area.
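  • As an illustration only and not part of the application, the cooperation of the above units can be sketched in Python; the class, method, and attribute names below are hypothetical, and the camera, model, and motion-planner interfaces are assumed:

```python
class RobotNavigationDevice:
    """Illustrative wiring of the five units described above (hypothetical names)."""

    def __init__(self, camera, pass_area_model, motion_planner):
        self.camera = camera           # image acquisition unit (assumed interface)
        self.model = pass_area_model   # image segmentation + target detection units
        self.planner = motion_planner  # motion control unit (assumed interface)

    def step(self):
        # Image acquisition unit: obtain the image to be processed of the road ahead.
        image = self.camera.capture()

        # Image segmentation unit and target detection unit: a single shared model call
        # returns the initial pass-area mask and the obstacle boxes (first position information).
        pass_mask, obstacle_boxes = self.model.predict(image)

        # Passage area determination unit: remove the obstacle pixels from the pass area.
        for x1, y1, x2, y2 in obstacle_boxes:
            pass_mask[y1:y2, x1:x2] = False

        # Motion control unit: plan and execute motion on the target passage area.
        self.planner.move(pass_mask)
```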
  • an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the robot navigation method according to any one of the above first aspects.
  • an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, wherein the computer program, when executed by a processor, implements the robot navigation method according to any one of the above first aspects.
  • an embodiment of the present application provides a computer program product that, when the computer program product runs on a terminal device, enables the terminal device to execute the robot navigation method described in any one of the first aspects above.
  • FIG. 1 is a schematic flowchart of a robot navigation method provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a pass area identification model provided by an embodiment of the present application.
  • FIG. 3 is a structural block diagram of a robot navigation device provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • references in this specification to "one embodiment” or “some embodiments” and the like mean that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases “in one embodiment,” “in some embodiments,” “in other embodiments,” etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean “one or more but not all embodiments” unless specifically emphasized otherwise.
  • Referring to FIG. 1, a schematic flowchart of a robot navigation method provided by an embodiment of the present application is shown.
  • the method may include the following steps:
  • A camera device can be installed on the robot. While the robot travels, captured images of the road ahead are obtained in real time through the camera device, and an acquired captured image may be used as the image to be processed in the embodiments of the present application.
  • each captured image may be processed separately as an image to be processed.
  • Alternatively, part of the captured images may be extracted, according to a certain sampling frequency, as the images to be processed.
  • For example, if the frequency of obtaining an image to be processed is one image per second, the navigation control is correspondingly performed once per second. It is also possible to extract one captured image every 5 seconds and use it as the image to be processed; the frequency of obtaining an image to be processed is then one image every 5 seconds and, correspondingly, the robot's navigation control is performed once every 5 seconds.
  • In this way, the frequency of the robot's navigation control can be adjusted.
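  • As a rough sketch only, the sampling logic described above might be implemented as follows; the function names and the default interval are assumptions, not part of the application:

```python
import time

def navigation_loop(camera, navigate, sample_interval_s=1.0):
    """Process one captured image every sample_interval_s seconds, so that
    navigation control runs at the same frequency (illustrative sketch)."""
    last_processed = 0.0
    while True:
        frame = camera.capture()              # images are captured continuously while the robot travels
        now = time.time()
        if now - last_processed >= sample_interval_s:
            navigate(frame)                   # treat this frame as the image to be processed
            last_processed = now
```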
  • Detecting every object in the image to be processed would require pixel-by-pixel processing of the image, a large number of pixel-level annotations, and a large amount of data processing.
  • Therefore, in the embodiments of the present application, the image segmentation method is used to process larger objects and the target detection method is used to process smaller objects; for example, the image segmentation method is used to process walls and floors, and the target detection method is used to process tables and water cups. The details are as described in the following S102 and S103.
  • In step S102, the ground area can be segmented from the image to be processed and used as the initial pass area.
  • The image to be processed includes optical three primary color information (i.e., RGB information), and an image is usually segmented using this RGB information.
  • the depth information in the image can reflect the distance of the object from the camera device, and the depth information also contains many image features.
  • Therefore, the acquisition of depth information can be added to the image segmentation process.
  • S102 may include:
  • the optical three primary color information and the image depth information of the image to be processed are acquired; the initial pass area is segmented from the to-be-processed image according to the optical three primary color information and the image depth information.
  • the camera device on the robot may be a camera device with both a depth information acquisition function and an RGB information acquisition function.
  • The image to be processed captured in this way contains both RGB information and depth information, so it is sufficient to extract the RGB information and the depth information from the image to be processed respectively.
  • the step of dividing the initial pass area may include:
  • the first feature information can reflect the RGB information in the image to be processed, and the second feature information can reflect the distance of each object in the image to be processed relative to the camera device.
  • In step S103, the position of a target obstacle that affects passage, such as a chair or table on the ground, can be detected from the image to be processed.
  • a process of acquiring depth information may be added in the process of target detection.
  • S103 may include:
  • Target detection processing is performed on the image to be processed according to the first feature information and the second feature information, so as to obtain the first position information of the target obstacle.
  • In the above manner, image segmentation processing is used for large objects and target detection processing is used for small objects, which greatly reduces the amount of data processing.
  • image segmentation and target detection are performed by combining the depth information and RGB information of the image, which increases the depth feature of the image and can effectively improve the accuracy of image segmentation and target detection.
  • In addition, the image feature information required by the image segmentation process and that required by the target detection process are shared, which further improves the accuracy of image segmentation and target detection; and only one feature extraction pass is required for the image to be processed, which improves the efficiency of feature extraction and thereby the processing speed of image segmentation and target detection.
  • In some embodiments, the methods in the above steps S102 and S103 can be implemented by a trained passing area identification model.
  • An implementation of S102 and S103 may then be: inputting the image to be processed into the passing area identification model, and outputting the initial passing area and the first position information.
  • Another implementation of S102 and S103 may be: inputting the image to be processed captured by a photographing device with a depth information acquisition function and the image to be processed captured by a photographing device with an RGB information acquisition function into the passage area identification model, and outputting the initial passing area and the first position information.
  • The superimposed image is input into the passing area identification model, and the initial passing area and the first position information are output.
  • the passing area identification model has the function of extracting optical three primary color information and image depth information.
  • In another embodiment, an implementation of S102 and S103 is: obtaining the optical three primary color information and the image depth information of the image to be processed; obtaining the trained passing area identification model; inputting the optical three primary color information and the image depth information into the passing area identification model; and outputting the initial passing area and the first position information.
  • the passing area identification model does not have the function of extracting optical three primary color information and image depth information.
  • Referring to FIG. 2, a schematic structural diagram of a passing area identification model provided by an embodiment of the present application is shown.
  • the passing area identification model in this embodiment of the present application may include a first feature extraction network, a second feature extraction network, a segmentation network, and a detection network.
  • the optical three primary color information and the image depth information are input into the pass area identification model, and the initial pass area and the first position information are output, which may include the following steps:
  • the optical three primary color information is input into the first feature extraction network, and the first feature information is output; the image depth information is input into the second feature extraction network, and the second feature information is output; the first feature information and the second feature information are input into In the segmentation network, the initial pass area is output; the first feature information and the second feature information are input into the detection network, and the first position information is output.
  • The passing area identification model in the embodiments of the present application is essentially a multi-task learning model. Because the two tasks of image segmentation and target detection are strongly correlated, they are made to share image feature information and complement each other, which effectively ensures the accuracy of both image segmentation and target detection.
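  • For illustration only, a minimal PyTorch-style sketch of such a two-branch, two-head multi-task model is given below; the layer sizes, module structure, and output format are assumptions made for the example and are not the networks defined in the application:

```python
import torch
import torch.nn as nn

class PassAreaModel(nn.Module):
    """Illustrative multi-task model: an RGB branch and a depth branch feed a shared
    feature map used by both a segmentation head and a detection head."""

    def __init__(self):
        super().__init__()
        # First feature extraction network: optical three-primary-colour (RGB) input.
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Second feature extraction network: single-channel image depth input.
        self.depth_branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Segmentation network: fused features -> per-pixel pass-area logits.
        self.seg_head = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 1),
        )
        # Detection network: fused features -> one bounding box (x1, y1, x2, y2).
        self.det_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 4),
        )

    def forward(self, rgb, depth):
        f1 = self.rgb_branch(rgb)            # first feature information
        f2 = self.depth_branch(depth)        # second feature information
        fused = torch.cat([f1, f2], dim=1)   # both heads share the extracted features
        return self.seg_head(fused), self.det_head(fused)
```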
  • S104 Determine a target passage area according to the initial passage area and the first location information.
  • the initial pass area may include coordinate information of pixel points in the area, and the first position information may include coordinate information of pixel points corresponding to the target obstacle.
  • Specifically, the step of determining the target passage area may include:
  • For example, if the initial pass area includes the coordinates corresponding to pixel 0 to pixel 100, and the first position information includes the coordinates corresponding to pixel 50 to pixel 60, then the second position information includes the coordinates corresponding to pixel 0 to pixel 49 and pixel 61 to pixel 100.
  • The coordinates of the pixels contained in the target passage area can be mapped to the physical coordinate system to obtain the physical coordinates corresponding to the target passage area; a motion route can then be planned according to these physical coordinates, and the robot's motion controlled accordingly.
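  • A minimal sketch of these two steps (removing the obstacle pixels from the initial pass area and mapping the remaining pixels to physical coordinates) is shown below; the bounding-box format and the precalibrated ground-plane homography are assumptions for the example:

```python
import numpy as np
import cv2

def target_pass_area(pass_mask, obstacle_boxes):
    """Remove the detected obstacle pixels (first position information)
    from the segmented initial pass area to obtain the target passage area."""
    target = pass_mask.astype(bool).copy()
    for x1, y1, x2, y2 in obstacle_boxes:
        target[y1:y2, x1:x2] = False
    return target

def to_physical_coords(target_mask, ground_homography):
    """Map the pixel coordinates of the target passage area onto the ground plane
    using a precalibrated 3x3 homography (assumed to come from camera calibration)."""
    ys, xs = np.nonzero(target_mask)
    if xs.size == 0:
        return np.empty((0, 2), dtype=np.float32)
    pixels = np.stack([xs, ys], axis=1).astype(np.float32).reshape(-1, 1, 2)
    ground = cv2.perspectiveTransform(pixels, ground_homography)
    return ground.reshape(-1, 2)   # physical (x, y) coordinates for route planning
```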
  • FIG. 3 is a structural block diagram of the robot navigation device provided by the embodiments of the present application. For convenience of description, only the parts related to the embodiments of the present application are shown.
  • the device includes:
  • the image acquisition unit 31 is used to acquire the to-be-processed image of the road ahead of the robot.
  • the image segmentation unit 32 is configured to segment the initial traffic area from the to-be-processed image.
  • the target detection unit 33 is configured to detect the first position information of the target obstacle in the to-be-processed image.
  • the passage area determination unit 34 is configured to determine a target passage area according to the initial passage area and the first position information.
  • the motion control unit 35 is configured to control the motion of the robot according to the target passage area.
  • the image segmentation unit 32 includes:
  • the information acquisition module is used to acquire the optical three primary color information and image depth information of the image to be processed.
  • An image segmentation module configured to segment an initial pass area from the to-be-processed image according to the optical three primary color information and the image depth information.
  • the image segmentation module is also used to:
  • the passing area identification model includes a first feature extraction network, a second feature extraction network and a segmentation network.
  • the image segmentation module is also used to:
  • the optical three primary color information is input into the first feature extraction network, and the first feature information is output; the image depth information is input into the second feature extraction network, and the second feature information is output; the first feature information and the second feature information are input into In the segmentation network, the initial pass area is output.
  • the passing area identification model further includes a detection network.
  • the target detection unit 33 is also used for:
  • the first feature information and the second feature information are input into the detection network, and the first position information is output.
  • the passing area determination unit 34 is further configured to:
  • The device shown in FIG. 3 may be a software unit, a hardware unit, or a unit combining software and hardware built into an existing terminal device, may be integrated into the terminal device as an independent component, or may exist as a device independent of the terminal device.
  • FIG. 4 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • The terminal device 4 in this embodiment includes: at least one processor 40 (only one is shown in FIG. 4), a memory 41, and a computer program 42 stored in the memory 41 and executable on the at least one processor 40; when the processor 40 executes the computer program 42, the steps in any of the robot navigation method embodiments described above are implemented.
  • the terminal device may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the terminal device may include, but is not limited to, a processor and a memory.
  • FIG. 4 is only an example of the terminal device 4 and does not constitute a limitation on the terminal device 4; it may include more or fewer components than shown, or combine some components, or have different components, and may, for example, also include input/output devices, network access devices, and the like.
  • The so-called processor 40 may be a central processing unit (Central Processing Unit, CPU), and the processor 40 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 41 may be an internal storage unit of the terminal device 4 in some embodiments, such as a hard disk or a memory of the terminal device 4 .
  • the memory 41 may also be an external storage device of the terminal device 4 in other embodiments, such as a plug-in hard disk equipped on the terminal device 4, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, flash memory card (Flash Card), etc.
  • the memory 41 may also include both an internal storage unit of the terminal device 4 and an external storage device.
  • the memory 41 is used to store an operating system, an application program, a boot loader (Boot Loader), data, and other programs, such as program codes of the computer program, and the like.
  • the memory 41 can also be used to temporarily store data that has been output or will be output.
  • Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the foregoing method embodiments can be implemented.
  • the embodiments of the present application provide a computer program product, when the computer program product runs on a terminal device, so that the terminal device can implement the steps in the foregoing method embodiments when executed.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • The present application may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program, and the computer program may be stored in a computer-readable storage medium.
  • the computer program includes computer program code
  • the computer program code may be in the form of source code, object code, executable file or some intermediate form, and the like.
  • The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc.
  • In some jurisdictions, computer-readable media may not include electrical carrier signals and telecommunication signals.
  • the disclosed apparatus/terminal device and method may be implemented in other manners.
  • the apparatus/terminal device embodiments described above are only illustrative.
  • The division of the modules or units is only a logical function division; in actual implementation, there may be other division manners, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, or through indirect coupling or communication connection of devices or units, and may be in electrical, mechanical, or other forms.
  • The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present application is applicable to the technical field of image processing and relates to a robot navigation method and apparatus, a terminal device, and a computer-readable storage medium. The method includes: acquiring an image to be processed of the road ahead of a robot; segmenting an initial passage area from said image; detecting first position information of a target obstacle in said image; determining a target passage area according to the initial passage area and the first position information; and controlling the movement of the robot according to the target passage area. By means of the method, navigation efficiency can be improved while ensuring navigation accuracy for the robot, thereby ensuring that the robot's motion is effectively controlled in real time.
PCT/CN2021/125040 2021-03-30 2021-10-20 Robot navigation method and apparatus, terminal device and computer-readable storage medium WO2022205841A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110340441.6 2021-03-30
CN202110340441.6A CN112966658B (zh) 2021-03-30 2021-03-30 机器人导航方法、装置、终端设备及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2022205841A1 true WO2022205841A1 (fr) 2022-10-06

Family

ID=76279689

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/125040 WO2022205841A1 (fr) 2021-03-30 2021-10-20 Robot navigation method and apparatus, terminal device and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN112966658B (fr)
WO (1) WO2022205841A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112966658B (zh) * 2021-03-30 2024-06-18 深圳市优必选科技股份有限公司 机器人导航方法、装置、终端设备及计算机可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107636680A (zh) * 2016-12-30 2018-01-26 深圳前海达闼云端智能科技有限公司 一种障碍物检测方法及装置
CN110246142A (zh) * 2019-06-14 2019-09-17 深圳前海达闼云端智能科技有限公司 一种检测障碍物的方法、终端和可读存储介质
US20200209880A1 (en) * 2018-12-28 2020-07-02 Ubtech Robotics Corp Ltd Obstacle detection method and apparatus and robot using the same
CN112183476A (zh) * 2020-10-28 2021-01-05 深圳市商汤科技有限公司 一种障碍检测方法、装置、电子设备以及存储介质
CN112966658A (zh) * 2021-03-30 2021-06-15 深圳市优必选科技股份有限公司 机器人导航方法、装置、终端设备及计算机可读存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107636680A (zh) * 2016-12-30 2018-01-26 深圳前海达闼云端智能科技有限公司 一种障碍物检测方法及装置
US20200209880A1 (en) * 2018-12-28 2020-07-02 Ubtech Robotics Corp Ltd Obstacle detection method and apparatus and robot using the same
CN110246142A (zh) * 2019-06-14 2019-09-17 深圳前海达闼云端智能科技有限公司 一种检测障碍物的方法、终端和可读存储介质
CN112183476A (zh) * 2020-10-28 2021-01-05 深圳市商汤科技有限公司 一种障碍检测方法、装置、电子设备以及存储介质
CN112966658A (zh) * 2021-03-30 2021-06-15 深圳市优必选科技股份有限公司 机器人导航方法、装置、终端设备及计算机可读存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU XIN, WANG GUIYING, CONG YANG: "Object recognition method by combining color and depth information", TRANSACTIONS OF THE CHINESE SOCIETY OF AGRICULTURAL ENGINEERING, CHINESE SOCIETY OF AGRICULTURAL ENGINEERING, ZHONGGUO NONGYE GONGCHENG XUEHUI, CN, vol. 29, no. S1, 30 April 2013 (2013-04-30), CN , pages 96 - 100, XP055972156, ISSN: 1002-6819 *

Also Published As

Publication number Publication date
CN112966658A (zh) 2021-06-15
CN112966658B (zh) 2024-06-18

Similar Documents

Publication Publication Date Title
CN110874594B (zh) 基于语义分割网络的人体外表损伤检测方法及相关设备
CN110322500B (zh) 即时定位与地图构建的优化方法及装置、介质和电子设备
CN112528831B (zh) 多目标姿态估计方法、多目标姿态估计装置及终端设备
CN109934065B (zh) 一种用于手势识别的方法和装置
WO2018120038A1 (fr) Procédé et dispositif de détection de cible
CN108564082B (zh) 图像处理方法、装置、服务器和介质
US9639943B1 (en) Scanning of a handheld object for 3-dimensional reconstruction
CN111612841A (zh) 目标定位方法及装置、移动机器人及可读存储介质
WO2019128504A1 (fr) Procédé et appareil de traitement d'images dans un jeu de billard, et dispositif terminal
CN114565863B (zh) 无人机图像的正射影像实时生成方法、装置、介质及设备
CN110673607B (zh) 动态场景下的特征点提取方法、装置、及终端设备
CN111199198B (zh) 一种图像目标定位方法、图像目标定位装置及移动机器人
CN114187333A (zh) 一种图像对齐方法、图像对齐装置及终端设备
WO2022205841A1 (fr) Procédé et appareil de navigation de robot, et dispositif terminal et support de stockage lisible par ordinateur
CN110599520B (zh) 一种旷场实验数据分析方法、系统及终端设备
CN113228105A (zh) 一种图像处理方法、装置和电子设备
WO2023273056A1 (fr) Procédé de navigation de robot, robot et support de stockage lisible par ordinateur
CN113902932A (zh) 特征提取方法、视觉定位方法及装置、介质和电子设备
CN111242084B (zh) 机器人控制方法、装置、机器人及计算机可读存储介质
CN113191189A (zh) 人脸活体检测方法、终端设备及计算机可读存储介质
JPH11167455A (ja) 手形状認識装置及び単色物体形状認識装置
JP6365117B2 (ja) 情報処理装置、画像判定方法、及びプログラム
CN114419564B (zh) 车辆位姿检测方法、装置、设备、介质及自动驾驶车辆
CN114219831A (zh) 目标跟踪方法、装置、终端设备及计算机可读存储介质
CN114359915A (zh) 图像处理方法、装置和可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21934477

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21934477

Country of ref document: EP

Kind code of ref document: A1