CN116363693A - Automatic following method and device based on depth camera and vision algorithm - Google Patents

Automatic following method and device based on depth camera and vision algorithm Download PDF

Info

Publication number
CN116363693A
CN116363693A (publication); CN202310112092.1A (application)
Authority
CN
China
Prior art keywords
depth camera
algorithm
following
personnel
equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310112092.1A
Other languages
Chinese (zh)
Inventor
洪健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Rigo Robot Co ltd
Original Assignee
Suzhou Rigo Robot Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Rigo Robot Co ltd
Priority to CN202310112092.1A
Publication of CN116363693A
Legal status: Pending (Current)

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/225 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on a marking or identifier characterising the area
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an automatic following method based on a depth camera and a vision algorithm, which solves the problems that conventional positioning-and-following schemes such as Bluetooth, ultrasonic and radar require the person to wear signal transceiving equipment, are cumbersome to operate, follow unstably and are costly. The pure-vision scheme comprises the following steps: S1, capturing a current environment image frame I_t with a depth camera; S2, detecting the current frame I_t with an SSD-MobileNet object detection model, confirming the pixel coordinates and spatial coordinate positions of all persons in the image, and outputting data P_t; S3, feeding the current environment image frame I_t and the detection-model output data P_t into a following algorithm, which determines its execution flow according to its current stage; and S4, converting the following-algorithm output into a device movement command, issuing it to the device's internal motor and driving the device toward the person.

Description

Automatic following method and device based on depth camera and vision algorithm
Technical Field
The invention relates to the technical field of computer vision, in particular to an automatic following method and device based on a depth camera and a vision algorithm.
Background
With the continuous development of science and technology, the demand for portability and intelligence in all kinds of devices keeps growing, including intelligent following functions. Various products on the market already support automatic following, such as automatic-following luggage, baseball bats, forklifts and the like. Their following modes differ; current device-following approaches mainly adopt pure vision, Bluetooth/UWB/ultrasonic positioning, laser radar and similar schemes.
Schemes that realize the following function through Bluetooth/UWB/ultrasonic positioning require the person to wear signal transceiving equipment and are cumbersome to operate; radar-based schemes suffer from instability and higher cost; pure-vision schemes often fail to achieve the desired following effect because an ordinary camera lacks accurate distance perception.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an automatic following method and device, based on a depth camera and a vision algorithm, capable of acquiring the spatial orientation of the tracked person in real time.
In order to solve the technical problems, the invention adopts the following technical scheme: an automatic following method based on a depth camera and a vision algorithm comprises the following steps:
S1, capturing a current environment image frame I_t with a depth camera;
S2, detecting the current frame I_t with an SSD-MobileNet object detection model, confirming the pixel coordinates and spatial coordinate positions of all persons in the image, and outputting data P_t;
S3, feeding the current environment image frame I_t and the detection-model output data P_t into the following algorithm, which determines its execution flow according to its current stage; the stages include
an initialization stage, in which the person closest to the device is identified and the detection-model output is confirmed a second time, and
a following stage, in which the features of all detected persons are extracted, learned and judged by an Online-Boosting online learning model, and following-algorithm output is generated according to the spatial position of the person relative to the device;
S4, converting the following-algorithm output into a device movement command, issuing it to the device's internal motor and driving the device toward the person.
Further, the current environment image frame in step S1 includes an RGB image of the current frame, used to identify the pixel coordinates of persons in the picture, and a depth image, used to obtain the spatial position of each person in the environment.
Further, the SSD-MobileNet object detection model in step S2 is trained on the MS COCO dataset and is used to identify 91 classes of objects, including persons.
Further, step S2 outputs data P_t = { (u_1, v_1, u_2, v_2), (x, y, z), f } for each detected object, where (u_1, v_1) and (u_2, v_2) are the upper-left and lower-right pixel coordinates of the bounding box of the identified object in the image, (x, y, z) are the three-dimensional coordinates of the identified object in the camera coordinate system (x the horizontal coordinate, y the vertical coordinate, z the coordinate along the camera optical axis), and f is an identification bit indicating whether the identified target is the tracked person.
Further, in step S3 the device is a robot, and the corresponding following-algorithm output includes the robot's movement speed V_t = {v_x, w}, where v_x is the robot's forward speed and w is the robot's steering angular speed.
Further, when the following algorithm in step S3 is in the following stage, person features are extracted by a shallow convolutional neural network with two convolutional layers, and 10 feature maps are extracted as training data for the Online-Boosting online learning model.
Further, the online learning model consists of 30 Bayesian classifiers.
The automatic following device based on a depth camera and a vision algorithm comprises a robot and a depth camera; the robot is driven to move by an internal motor, the depth camera is fixedly mounted on one side of the robot to capture external environment image frames, and the steps of the automatic following method based on a depth camera and a vision algorithm described above are embedded in the robot's internal control chip.
An electronic device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, it implements the steps of the automatic following method based on a depth camera and a vision algorithm described above.
A non-transitory computer readable storage medium having stored thereon a computer program for implementing the steps of an automatic following method based on a depth camera and a vision algorithm as described above when executed by a processor.
Compared with the prior art, the invention has the following beneficial effects: person following is realized by combining a depth camera with a vision algorithm; persons in the picture are detected by deep learning, the spatial orientation of the tracked person is obtained from the depth camera, and this information is continuously provided to the device. In addition, the scheme uses online learning to continuously learn the features of the followed person, improving the device's following stability and anti-interference capability.
Drawings
The disclosure of the present invention is described with reference to the accompanying drawings. It is to be understood that the drawings are designed solely for the purposes of illustration and not as a definition of the limits of the invention. In the drawings, like reference numerals are used to refer to like parts. Wherein:
FIG. 1 schematically shows an overall process flow diagram according to one embodiment of the present invention;
FIG. 2 schematically shows a flow chart of a proposed follow-up algorithm according to one embodiment of the invention;
fig. 3 schematically shows a pseudo-code table diagram of a proposed robot following algorithm according to one embodiment of the invention.
Reference numerals in the drawings: 1. a depth camera; 2. and (3) a robot.
Detailed Description
It is to be understood that, according to the technical solution of the present invention, those skilled in the art may propose various alternative structural modes and implementation modes without changing the true spirit of the present invention. Accordingly, the following detailed description and drawings are merely illustrative of the invention and are not intended to be exhaustive or to limit the invention to the precise form disclosed.
An embodiment according to the invention is shown in connection with fig. 1-2.
In general, as shown in fig. 1, an automatic following method based on a depth camera and a vision algorithm in the present solution includes the following steps:
S1, capturing a current environment image frame I_t with the depth camera 1;
S2, detecting the current frame I_t with an SSD-MobileNet object detection model, identifying the pixel coordinates and spatial coordinate positions of all persons in the image, and outputting data P_t;
S3, feeding the current environment image frame I_t and the detection-model output data P_t into the following algorithm, which determines its execution flow according to its current stage; the stages include
an initialization stage, in which the person closest to the device is identified and the detection-model output is confirmed a second time, and
a following stage, in which the features of all detected persons are extracted, learned and judged by an Online-Boosting online learning model, and following-algorithm output is generated according to the spatial position of the person relative to the device;
S4, converting the following-algorithm output into a device movement command, issuing it to the device's internal motor and driving the device toward the person.
The steps are described in detail below in conjunction with a specific implementation; the main flow is as follows:
acquisition of RGB and depth images
A current ambient image frame is captured by the depth camera 1, comprising an RGB image of the current frame for identifying pixel coordinates of persons in the picture, and a depth image for acquiring spatial positions of persons in the environment.
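For illustration only, a minimal Python sketch of this acquisition step is given here. The patent does not name a specific camera or SDK; an Intel RealSense sensor accessed through the pyrealsense2 library is assumed, and the stream resolution, frame rate and depth scale are likewise assumptions.

    import numpy as np
    import pyrealsense2 as rs  # assumed SDK; any depth camera exposing aligned RGB-D frames would do

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)  # RGB stream (assumed resolution)
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)   # depth stream
    pipeline.start(config)
    align = rs.align(rs.stream.color)  # align depth pixels to the RGB image

    def capture_frame():
        """Return the current environment image frame I_t as (rgb, depth_in_meters)."""
        frames = align.process(pipeline.wait_for_frames())
        rgb = np.asanyarray(frames.get_color_frame().get_data())
        depth = np.asanyarray(frames.get_depth_frame().get_data()) * 0.001  # z16 is millimeters (assumed scale)
        return rgb, depth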
2. Personnel detection
The objects contained in the RGB image of the current frame are detected with an SSD-MobileNet-based object detection model, which confirms the pixel coordinates of all persons in the image. The spatial position coordinates of each person are then calculated from the depth map and the camera's intrinsic parameters. The SSD-MobileNet object detection model is trained on the MS COCO dataset and identifies 91 classes of objects, including persons.
Meanwhile, the SSD-MobileNet object detection model outputs data P_t = { (u_1, v_1, u_2, v_2), (x, y, z), f } for each detected object, where (u_1, v_1) and (u_2, v_2) are the upper-left and lower-right pixel coordinates of the bounding box of the identified object in the image, (x, y, z) are the three-dimensional coordinates of the identified object in the camera coordinate system (x the horizontal coordinate, y the vertical coordinate, z the coordinate along the camera optical axis), and f is an identification bit indicating whether the identified target is the tracked person.
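As a hedged illustration of how the spatial coordinates (x, y, z) can be recovered from a detection box and the depth map with the camera intrinsic parameters, a standard pinhole back-projection sketch follows; the intrinsic values and the use of the box-center pixel are assumptions, not details taken from the patent.

    def pixel_to_camera_coords(u, v, depth_m, fx, fy, cx, cy):
        """Back-project pixel (u, v) with depth depth_m (meters) into the camera coordinate system."""
        z = depth_m                  # coordinate along the camera optical axis
        x = (u - cx) * z / fx        # horizontal coordinate
        y = (v - cy) * z / fy        # vertical coordinate
        return x, y, z

    def person_position(box, depth, fx, fy, cx, cy):
        """Estimate a person's 3D position from the center pixel of the detection box (u1, v1, u2, v2)."""
        u1, v1, u2, v2 = box
        u, v = (u1 + u2) // 2, (v1 + v2) // 2
        return pixel_to_camera_coords(u, v, float(depth[v, u]), fx, fy, cx, cy)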
3. Following person confirmation and feature extraction
This step decides the execution flow according to the stage the following algorithm is currently in. The following algorithm has two stages:
an initialization stage: the method is used for identifying the person closest to the equipment, and meanwhile, carrying out secondary confirmation on the output data of the detection model, and because the appointed person based on vision needs to be followed, the following algorithm needs an initialization stage before the following, and the characteristics of the appointed person are learned. The following initialization stage requires the person to be specified to follow, stands in the specified area in front of the camera, and the algorithm utilizes an online learning model to learn the characteristics of the specified person to follow for the first time. The device can then start following the designated person, and the algorithm can follow it at any location, without the person being in a fixed position in front of the camera at any time during the subsequent phase.
Following stage: the algorithm extracts the features of all detected persons, judges them with the Online-Boosting online learning model to find the position of the designated followed person, and generates following-algorithm output according to that person's spatial position relative to the device. In this stage the online learning model also updates itself with these features, so that the stability of the algorithm is not affected by changes in environment and illumination.
The person feature extraction mentioned above also uses deep learning. A pre-trained shallow convolutional neural network with two convolutional layers, f_conv(), extracts 10 feature maps from each detected person's image as training data for the online learning model. A two-layer convolutional network is used for two reasons: 1. a neural network extracts richer image features than traditional image operators, because the features are learned rather than hand-crafted; 2. two convolutional layers balance computational efficiency against feature richness.
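For illustration only — the patent does not disclose the exact structure of f_conv() — a minimal PyTorch sketch of a two-layer convolutional extractor producing 10 feature maps might look as follows; the kernel sizes, the channel width of the first layer and the pooling step are assumptions.

    import torch
    import torch.nn as nn

    class FConv(nn.Module):
        """Shallow two-layer convolutional feature extractor (assumed form of f_conv())."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: RGB patch -> 16 maps (assumed width)
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 10, kernel_size=3, padding=1),  # layer 2: 10 feature maps, as described
                nn.ReLU(),
            )

        def forward(self, patch):
            # patch: (N, 3, H, W) crop of a detected person, values in [0, 1]
            return self.net(patch)

    # Example: ten 32x32 feature maps from a 64x64 person crop
    f_conv = FConv()
    features = f_conv(torch.rand(1, 3, 64, 64))  # shape (1, 10, 32, 32)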
The online learning model consists of 30 Bayesian classifiers: the features of a detected person are fed to all 30 classifiers, which jointly decide whether that person is the designated followed person. Updating the online model means updating all of the Bayesian classifiers. To keep the various features balanced during learning, the tracked person's features are fed to the online model as positive examples and the features of all other detected persons as negative examples.
The essence of online-boosting is that classification decisions are made jointly by many weak classifiers; it is divided into a prediction phase and an update phase. Updating continues throughout the life of the algorithm, and both phases take as input the person features extracted by f_conv().
Online-boosting is used because the surrounding environment changes greatly while the robot moves, so a model that can learn and update in real time is needed to adapt to those changes; it is also computationally efficient and does not affect the smoothness of the robot's operation.
Bayesian classifiers are used as the online-boosting weak classifiers; through repeated testing and adjustment to balance computational efficiency and recognition accuracy, 30 Bayesian classifiers were finally chosen. The final operating results also show that this combination preserves the robot's running efficiency while maintaining accurate identification of the person's identity.
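For illustration, a simplified numpy sketch of an online-boosted ensemble of 30 Gaussian naive-Bayes weak classifiers follows. The patent does not give the actual update rule or the internals of its classifiers, so the running mean/variance update and the multiplicative re-weighting used here are assumptions.

    import numpy as np

    class OnlineNaiveBayes:
        """One weak classifier: Gaussian naive Bayes with running per-class mean/variance (assumed form)."""
        def __init__(self, n_features):
            self.count = np.ones(2)                  # samples seen per class (0 = other person, 1 = tracked)
            self.mean = np.zeros((2, n_features))
            self.var = np.ones((2, n_features))

        def update(self, x, y):
            c = self.count[y]
            delta = x - self.mean[y]
            self.mean[y] += delta / (c + 1)
            self.var[y] += (delta * (x - self.mean[y]) - self.var[y]) / (c + 1)
            self.count[y] += 1

        def score(self, x):
            # log-likelihood of "tracked person" minus log-likelihood of "other"
            ll = -0.5 * np.sum((x - self.mean) ** 2 / (self.var + 1e-6) + np.log(self.var + 1e-6), axis=1)
            return ll[1] - ll[0]

    class OnlineBoosting:
        """Ensemble of weak classifiers that jointly decide whether a detection is the tracked person."""
        def __init__(self, n_features, n_weak=30):
            self.weak = [OnlineNaiveBayes(n_features) for _ in range(n_weak)]
            self.weights = np.ones(n_weak) / n_weak

        def predict(self, x):
            votes = np.sign([w.score(x) for w in self.weak])
            return float(np.dot(self.weights, votes))          # > 0 means "tracked person"

        def update(self, x, y):
            for i, w in enumerate(self.weak):
                w.update(x, y)
                correct = (w.score(x) > 0) == (y == 1)
                self.weights[i] *= 1.05 if correct else 0.95   # assumed multiplicative re-weighting
            self.weights /= self.weights.sum()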
4. Following personnel spatial position confirmation
To prevent the online learning model from misidentifying the person, the following algorithm performs a secondary confirmation of the followed-person information output by the online learning model. Since the person does not make large, rapid spatial movements while the device is following, the recognition result of the current frame is invalidated if the person's position shows a large displacement from the previous frame.
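Illustratively, this check reduces to a simple displacement gate between consecutive frames; the 0.5 m threshold below is an assumption, not a value from the patent.

    import math

    def confirm_position(current_xyz, previous_xyz, max_jump_m=0.5):
        """Reject the frame's recognition result if the person appears to have jumped too far."""
        if previous_xyz is None:                     # nothing to compare against yet
            return True
        dx, dy, dz = (c - p for c, p in zip(current_xyz, previous_xyz))
        return math.sqrt(dx * dx + dy * dy + dz * dz) <= max_jump_m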
5. Drive device following
Once the person position information passed by the secondary confirmation is valid, it is converted into a movement command for the device, which is then sent to the motor to drive the device toward the person.
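For illustration, the conversion from a person's camera-frame position to the robot movement speed V_t = {v_x, w} described earlier can be sketched as a simple proportional controller; the gains, the 1 m standoff distance, the speed limits and the sign convention of w are assumptions.

    import math

    def position_to_command(x, z, target_distance=1.0, k_v=0.8, k_w=1.5, max_v=1.0, max_w=1.0):
        """Map person position (x lateral, z along the optical axis, meters) to (v_x, w)."""
        v_x = k_v * (z - target_distance)      # drive forward until the standoff distance is reached
        w = k_w * math.atan2(x, z)             # steer toward the person's bearing
        v_x = max(-max_v, min(max_v, v_x))     # clamp to assumed motor limits
        w = max(-max_w, min(max_w, w))
        return v_x, w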
Similarly, as shown in fig. 2, an automatic following device based on a depth camera and a vision algorithm, built on the above automatic following method, includes a robot 2 and a depth camera 1. The robot 2 is the device terminal in the above method steps and is driven to move by an internal motor; the depth camera 1 is fixedly mounted on one side of the robot 2 to capture external environment image frames; and the control chip in the robot 2 is embedded with the steps of the above automatic following method based on a depth camera and a vision algorithm, whose pseudo code is shown in fig. 3. The device likewise acquires the spatial orientation of the tracked person in real time and keeps learning and upgrading online.
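Since the pseudo code of fig. 3 is not reproduced here, the following is only an illustrative reconstruction of the overall loop from steps S1-S4 as described above. It reuses the sketches given earlier (capture_frame, FConv, OnlineBoosting, confirm_position, position_to_command); detect_persons() and send_command() are stand-ins for the SSD-MobileNet detector and the motor interface, not actual APIs.

    import torch
    import torch.nn.functional as F

    def extract_features(f_conv, rgb, box):
        """Assumed helper: crop the detection box, resize, run f_conv and flatten to a feature vector."""
        u1, v1, u2, v2 = box
        patch = torch.from_numpy(rgb[v1:v2, u1:u2]).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        patch = F.interpolate(patch, size=(64, 64))              # fixed size so the feature length is constant
        return f_conv(patch).detach().numpy().ravel()

    def following_loop(detect_persons, send_command, f_conv, booster):
        """Illustrative main loop: learn the nearest person once, then follow that person."""
        initialized, prev_xyz = False, None
        while True:
            rgb, depth = capture_frame()                          # S1: current environment frame I_t
            detections = detect_persons(rgb, depth)               # S2: list of (box, (x, y, z)) = P_t
            if not detections:
                continue
            if not initialized:                                    # initialization stage
                box, xyz = min(detections, key=lambda d: d[1][2])  # person nearest to the device
                booster.update(extract_features(f_conv, rgb, box), 1)
                initialized, prev_xyz = True, xyz
                continue
            scored = [(booster.predict(extract_features(f_conv, rgb, b)), b, p)
                      for b, p in detections]                      # following stage
            score, box, xyz = max(scored, key=lambda s: s[0])
            if score > 0 and confirm_position(xyz, prev_xyz):      # secondary confirmation of the position
                for _, b, p in scored:                             # tracked person positive, others negative
                    booster.update(extract_features(f_conv, rgb, b), 1 if b is box else 0)
                v_x, w = position_to_command(xyz[0], xyz[2])       # convert position to movement command
                send_command(v_x, w)                               # S4: drive the device toward the person
                prev_xyz = xyz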
The technical scope of the present invention is not limited to the above description, and those skilled in the art may make various changes and modifications to the above-described embodiments without departing from the technical spirit of the present invention, and these changes and modifications should be included in the scope of the present invention.

Claims (10)

1. An automatic following method based on a depth camera and a vision algorithm is characterized by comprising the following steps:
S1, capturing a current environment image frame I_t with a depth camera;
S2, detecting the current frame I_t with an SSD-MobileNet object detection model, confirming the pixel coordinates and spatial coordinate positions of all persons in the image, and outputting data P_t;
S3, feeding the current environment image frame I_t and the detection-model output data P_t into the following algorithm, which determines its execution flow according to its current stage, the stages comprising
an initialization stage, in which the person closest to the device is identified and the detection-model output is confirmed a second time, and
a following stage, in which the features of all detected persons are extracted, learned and judged by an Online-Boosting online learning model, and following-algorithm output is generated according to the spatial position of the person relative to the device;
S4, converting the following-algorithm output into a device movement command, issuing it to the device's internal motor and driving the device toward the person.
2. An automatic following method based on a depth camera and a vision algorithm according to claim 1, characterized in that: the current environmental image frame in step S1 includes an RGB image of the current frame and a depth image, where the RGB image is used to identify pixel coordinates of people in the image, and the depth image is used to obtain spatial positions of people in the environment.
3. An automatic following method based on a depth camera and a vision algorithm according to claim 1, characterized in that: the SSD-MobileNet object detection model in step S2 is trained on the MS COCO dataset to identify 91 classes of objects, including persons.
4. An automatic following method based on a depth camera and a vision algorithm according to claim 1, characterized in that: the output data in step S2 is P_t = { (u_1, v_1, u_2, v_2), (x, y, z), f } for each detected object, where (u_1, v_1) and (u_2, v_2) are the upper-left and lower-right pixel coordinates of the bounding box of the identified object in the image, (x, y, z) are the three-dimensional coordinates of the identified object in the camera coordinate system (x the horizontal coordinate, y the vertical coordinate, z the coordinate along the camera optical axis), and f is an identification bit indicating whether the identified target is the tracked person.
5. An automatic following method based on a depth camera and a vision algorithm according to claim 1, characterized in that: the device in step S3 is a robot, and the corresponding following-algorithm output includes the robot's movement speed V_t = {v_x, w}, where v_x is the robot's forward speed and w is the robot's steering angular speed.
6. An automatic following method based on a depth camera and a vision algorithm according to claim 1, characterized in that: when the following algorithm in step S3 is in the following stage, person features are extracted by a shallow convolutional neural network with two convolutional layers, and 10 feature maps are extracted as training data for the Online-Boosting online learning model.
7. An automatic following method based on a depth camera and a vision algorithm according to claim 1, characterized in that: the online learning model consists of 30 Bayesian classifiers.
8. An automatic following device based on a depth camera and a vision algorithm, characterized in that: the device comprises a robot and a depth camera, the robot is driven to move by an internal motor, the depth camera is fixedly mounted on one side of the robot to capture external environment image frames, and the robot's internal control chip is embedded with the steps of an automatic following method based on a depth camera and a vision algorithm according to any one of claims 1 to 7.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of an automatic following method based on a depth camera and a vision algorithm as claimed in any one of claims 1 to 7.
10. A non-transitory computer readable storage medium having a computer program stored thereon, characterized in that: the computer program, when executed by a processor, implements the steps of an automatic following method based on a depth camera and a vision algorithm as claimed in any one of claims 1 to 7.
CN202310112092.1A 2023-02-14 2023-02-14 Automatic following method and device based on depth camera and vision algorithm Pending CN116363693A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310112092.1A CN116363693A (en) 2023-02-14 2023-02-14 Automatic following method and device based on depth camera and vision algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310112092.1A CN116363693A (en) 2023-02-14 2023-02-14 Automatic following method and device based on depth camera and vision algorithm

Publications (1)

Publication Number Publication Date
CN116363693A 2023-06-30

Family

ID=86926438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310112092.1A Pending CN116363693A (en) 2023-02-14 2023-02-14 Automatic following method and device based on depth camera and vision algorithm

Country Status (1)

Country Link
CN (1) CN116363693A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116901085A (en) * 2023-09-01 2023-10-20 苏州立构机器人有限公司 Intelligent robot obstacle avoidance method and device, intelligent robot and readable storage medium
CN116901085B (en) * 2023-09-01 2023-12-22 苏州立构机器人有限公司 Intelligent robot obstacle avoidance method and device, intelligent robot and readable storage medium
CN118068318A (en) * 2024-04-17 2024-05-24 德心智能科技(常州)有限公司 Multimode sensing method and system based on millimeter wave radar and environment sensor

Similar Documents

Publication Publication Date Title
US11645765B2 (en) Real-time visual object tracking for unmanned aerial vehicles (UAVs)
CN110543867B (en) Crowd density estimation system and method under condition of multiple cameras
US9990736B2 (en) Robust anytime tracking combining 3D shape, color, and motion with annealed dynamic histograms
WO2021139484A1 (en) Target tracking method and apparatus, electronic device, and storage medium
US7308112B2 (en) Sign based human-machine interaction
CN109741369B (en) Method and system for robot to track target pedestrian
WO2018028361A1 (en) Charging method, apparatus, and device for robot
CN109800689A (en) A kind of method for tracking target based on space-time characteristic fusion study
CN113158833A (en) Unmanned vehicle control command method based on human body posture
CN116363693A (en) Automatic following method and device based on depth camera and vision algorithm
CN114445853A (en) Visual gesture recognition system recognition method
Shi et al. Fuzzy dynamic obstacle avoidance algorithm for basketball robot based on multi-sensor data fusion technology
CN116665097A (en) Self-adaptive target tracking method combining context awareness
CN115307622A (en) Autonomous mapping method and system based on deep learning in dynamic environment
Zhou et al. Visual tracking using improved multiple instance learning with co-training framework for moving robot
CN109934155B (en) Depth vision-based collaborative robot gesture recognition method and device
CN111724438B (en) Data processing method and device
CN112862865A (en) Detection and identification method and device for underwater robot and computer storage medium
Kasebi et al. Hybrid navigation based on GPS data and SIFT-based place recognition using Biologically-inspired SLAM
JP2022019339A (en) Information processing apparatus, information processing method, and program
Wang et al. Research and Design of Human Behavior Recognition Method in Industrial Production Based on Depth Image
CN115797397B (en) Method and system for all-weather autonomous following of robot by target personnel
Huang et al. Face Detection and Tracking Using Raspberry Pi based on Haar Cascade Classifier
Wang et al. Semantic Segmentation based network for 6D pose estimation
Liu et al. Vision and Laser-Based Mobile Robot Following and Mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination