CN113192107A - Target identification tracking method and robot - Google Patents


Info

Publication number
CN113192107A
Authority
CN
China
Prior art keywords
tracking, target object, detected, learning model, deep learning
Prior art date
2021-05-06
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110492441.8A
Other languages
Chinese (zh)
Inventor
赵立恒 (Zhao Liheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qiangmei Artificial Intelligence Technology Co., Ltd.
Original Assignee
Shanghai Qiangmei Artificial Intelligence Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2021-05-06
Filing date
2021-05-06
Publication date
2021-07-30
Application filed by Shanghai Qiangmei Artificial Intelligence Technology Co., Ltd.
2021-05-06: Priority to CN202110492441.8A
2021-07-30: Publication of CN113192107A
Status: Pending

Classifications

    • G06T 7/246: Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • B25J 9/1605: Programme-controlled manipulators; programme controls characterised by the control system; simulation of manipulator lay-out, design, modelling of manipulator
    • B25J 9/161: Programme-controlled manipulators; programme controls characterised by the control system; hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J 9/1697: Programme-controlled manipulators; programme controls using sensors other than normal servo feedback; vision controlled systems
    • G06F 18/214: Pattern recognition; design or setup of recognition systems; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06T 7/277: Image analysis; analysis of motion involving stochastic approaches, e.g. using Kalman filters


Abstract

The invention provides a target identification and tracking method and a robot, relating to the technical field of intelligent robots. The method is applied to a control chip of a robot and comprises the following steps: acquiring a specific target object and the current image information to be detected; inputting the image information to be detected into a pre-trained deep learning model; identifying, according to the deep learning model, a tracking target object from the current image information to be detected, wherein the tracking target object matches the specific target object; and tracking, with a tracking algorithm, the position of the tracking target object in the next frame of the current image to be detected. After the control chip of the robot acquires the specific target object and the current image information to be detected, the tracking target object matching the specific target object can be accurately identified from the image information to be detected by the deep learning model, and the position of the tracking target object in the next frame of the current image to be detected is then tracked with the tracking algorithm, so that the target object is intelligently identified and tracked.

Description

Target identification tracking method and robot
Technical Field
The invention relates to the technical field of intelligent robots, and in particular to a target identification and tracking method and a robot.
Background
As living standards improve, users expect more from intelligent robots and want them to perform more intelligent operations, for example, intelligently identifying surrounding target objects according to the user's needs and tracking those objects according to the identification results.
However, existing robots can only perform simple target recognition with limited accuracy, and after recognition they walk under the control of a user at a back-end monitoring terminal to achieve tracking; they cannot both recognise a target and autonomously walk to track it.
Disclosure of Invention
The invention aims to provide a target identification and tracking method capable of both identifying a target and tracking it.
Another object of the present invention is to provide a robot capable of identifying a target and tracking it.
The embodiments of the invention are realised as follows:
In a first aspect, an embodiment of the present application provides a target identification and tracking method applied to a control chip of a robot. The method includes: acquiring a specific target object and the current image information to be detected; inputting the image information to be detected into a pre-trained deep learning model; identifying, according to the deep learning model, a tracking target object from the current image information to be detected, wherein the tracking target object matches the specific target object; and tracking, with a tracking algorithm, the position of the tracking target object in the next frame of the current image to be detected. In this implementation, after the control chip of the robot acquires the specific target object and the current image information to be detected, the tracking target object matching the specific target object can be accurately identified from the image information to be detected by the deep learning model, and the position of the tracking target object in the next frame of the current image to be detected is then tracked with the tracking algorithm, so that the target object is intelligently identified and tracked.
In some embodiments of the present invention, before the step of inputting the image information to be detected into the pre-trained deep learning model, the method further includes: obtaining a training sample set and a test sample set; constructing a deep learning model and training it with the training sample set to obtain a trained deep learning model; and testing and correcting the trained deep learning model with the test sample set to obtain the final pre-trained deep learning model. A deep learning model can learn the intrinsic rules and representation levels of data and imitate human thinking to identify targets accurately. Training the model on the training sample set gives it the ability to analyse data and ensures the accuracy of target identification; testing and correcting the trained model with the test sample set further ensures the accuracy of its recognition results.
In some embodiments of the present invention, the step of identifying the tracking target object from the current image information to be detected according to the deep learning model includes: identifying an initial target object from the current image information to be detected according to the deep learning model; obtaining the matching degree between the initial target object and the specific target object; and, if the matching degree is greater than a preset threshold, taking the initial target object as the tracking target object. To avoid an excessive difference between the identified target and the specific target, the matching degree is used to judge whether the identified initial target is the tracking target, which improves the accuracy of target identification.
In some embodiments of the present invention, after the step of tracking the position of the tracking target object in the next frame of the current image to be detected with the tracking algorithm, the method further includes: obtaining map information of the scene and preset tracking parameters of the robot; and generating a tracking instruction according to the map information, the position of the tracking target object in the next frame of the current image to be detected, and the preset tracking parameters, so as to control the robot to track the tracking target object according to the tracking instruction. In this way, once the tracking target object has been located in the next frame, the map information and the preset tracking parameters can be used to generate the instruction that controls the robot's walking while tracking.
In some embodiments of the present invention, the step of generating the tracking instruction according to the map information, the position of the tracking target object in the next frame of the current image to be detected, and the preset tracking parameters includes: determining tracking distance data according to the map information and the position of the tracking target object in the next frame of the current image to be detected; and generating the tracking instruction according to the tracking distance data and the tracking parameters.
In a second aspect, an embodiment of the present application provides a robot that includes a depth camera and a control chip, the depth camera being connected with the control chip. The depth camera is used for acquiring the current image information to be detected. The control chip is used for acquiring a specific target object and the current image information to be detected; it is further used for inputting the image information to be detected into a pre-trained deep learning model; for identifying, according to the deep learning model, a tracking target object from the current image information to be detected, wherein the tracking target object matches the specific target object; and for tracking, with a tracking algorithm, the position of the tracking target object in the next frame of the current image to be detected.
In some embodiments of the present invention, the control chip is further configured to obtain a training sample set and a test sample set; to construct a deep learning model and train it with the training sample set to obtain a trained deep learning model; and to test and correct the trained deep learning model with the test sample set to obtain the final pre-trained deep learning model.
In some embodiments of the present invention, the control chip is further configured to identify an initial target object from the current image information to be detected according to the deep learning model; to obtain the matching degree between the initial target object and the specific target object; and to take the initial target object as the tracking target object if the matching degree is greater than a preset threshold.
In some embodiments of the present invention, the depth camera is further configured to obtain map information of a scene, and the control chip is further configured to obtain the map information and preset tracking parameters of the robot. The control chip is further configured to generate a tracking instruction according to the map information, the position of the tracking target object in the next frame of the current image to be detected, and the preset tracking parameters, so as to control the robot to track the tracking target object according to the tracking instruction.
In some embodiments of the present invention, when generating the tracking instruction according to the map information, the position of the tracking target object in the next frame of the current image to be detected, and the preset tracking parameters, the control chip is further configured to determine tracking distance data according to the map information and that position, and to generate the tracking instruction according to the tracking distance data and the tracking parameters.
In a third aspect, an embodiment of the present application provides an electronic device that includes a memory for storing one or more programs and a processor. When the one or more programs are executed by the processor, the method of any one of the above first aspect is implemented.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method of any one of the first aspect.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
Fig. 1 is a flow chart of a target identification and tracking method according to an embodiment of the present invention;
FIG. 2 is a flowchart of deep learning model training according to an embodiment of the present invention;
fig. 3 is a schematic diagram of the internal structural connections of a robot according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a robot according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Reference numerals: 100 - robot; 110 - depth camera; 120 - control chip; 130 - walking device; 101 - memory; 102 - processor; 103 - communication interface.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In the description of the present application, it should also be noted that, unless otherwise explicitly specified or limited, the terms "disposed" and "connected" are to be interpreted broadly: a connection may, for example, be fixed, detachable, or integral, and may also be an internal communication between two elements. The specific meanings of the above terms in the present application can be understood by those of ordinary skill in the art on a case-by-case basis.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the individual features of the embodiments can be combined with one another without conflict.
Referring to fig. 1, fig. 1 is a flowchart illustrating a target identification and tracking method according to an embodiment of the present invention. The target identification tracking method is applied to a control chip of a robot and comprises the following steps:
step S110: and acquiring the specific target object and the current image information to be detected.
After the robot is started, a visual perception device arranged on the robot, such as a depth camera, captures the robot's surroundings and acquires the current image information to be detected. If the robot is provided only with a depth camera, the current image information to be detected can be a depth image; if the robot is also provided with a thermal imaging camera, the current image information to be detected can also include a thermal image.
The specific target object can be acquired through manual input: the user enters it into the control chip through an input device according to the application scene. For example, if the robot is used to measure the temperature of a crowd, the specific target object can be set to a person; if the robot is used to track tennis balls, it can be set to a tennis ball; and if the robot is used to detect and track a fire scene, it can be set to flames. The specific target object may also be a feature of a person, such as a face or gender. In addition, the specific target object can be determined automatically from the surrounding environment; for example, if the robot identifies, from a depth image captured by the depth camera at start-up, that its current scene is an airport, it can load the specific target object that the user has preset for the airport scene.
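Purely as an illustrative sketch of this selection logic (the scene names, preset table, and function below are assumptions for illustration, not part of the patent):

```python
from typing import Optional

# Hypothetical mapping from an automatically recognised scene to the
# specific target object preset by the user for that scene.
PRESET_TARGETS = {
    "airport": "person",
    "tennis_court": "tennis ball",
    "fire_scene": "flame",
}

def specific_target_for(scene: str, manual_input: Optional[str] = None) -> str:
    """Manual input takes priority; otherwise fall back to the scene preset."""
    if manual_input is not None:
        return manual_input
    return PRESET_TARGETS.get(scene, "person")

print(specific_target_for("airport"))          # preset for the recognised scene
print(specific_target_for("airport", "face"))  # user override via input device
```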
Step S120: and inputting the image information to be detected into a pre-trained deep learning model.
The deep learning model can learn the intrinsic rules and representation levels of data and imitate human thinking, so the image information to be detected can be identified by the pre-trained deep learning model.
Step S130: and identifying the tracking target object from the current image information to be detected according to the deep learning model.
The tracking target object is the object that matches the specific target object. To avoid an excessive difference between the identified target and the specific target, a matching degree can be used to judge whether the identified initial target is the tracking target, which improves the accuracy of target identification.
In detail, when the tracking target object is identified from the current image information to be detected according to the deep learning model, an initial target object may first be identified from the current image information to be detected; the matching degree between the initial target object and the specific target object is then obtained, and if the matching degree is greater than a preset threshold, the initial target object is taken as the tracking target object. A matching degree that is too small indicates that the initial target object identified by the deep learning model differs too much from the specific target object. For example, if the specific target object is a flame but the deep learning model identifies a simulated-flame lamp as the initial target object, computing the matching degree between the two allows this detection to be discarded and the search for the specific target object to continue. In this way, the knock-on effects of misjudgement can be effectively avoided.
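As a hedged sketch of this matching-degree check (the patent does not specify how the degree is computed; cosine similarity over feature embeddings, the 128-dimensional vectors, and the 0.8 threshold below are all assumptions):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity in [-1, 1], used here as the 'matching degree'."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_tracking_target(initial_feat: np.ndarray,
                           specific_feat: np.ndarray,
                           threshold: float = 0.8):
    """Accept the initial target object as the tracking target object only if
    the matching degree exceeds the preset threshold; otherwise discard the
    detection and keep searching."""
    matching_degree = cosine_similarity(initial_feat, specific_feat)
    if matching_degree > threshold:
        return initial_feat, matching_degree
    return None, matching_degree

# Hypothetical usage with feature vectors produced by a deep learning model:
rng = np.random.default_rng(0)
specific = rng.normal(size=128)                       # specific target embedding
initial = specific + rng.normal(scale=0.1, size=128)  # detected candidate
target, degree = select_tracking_target(initial, specific)
print(f"matching degree = {degree:.3f}, accepted = {target is not None}")
```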
As another embodiment, whether the initial target object identified by the deep learning model is the tracking target object may be determined by manual review. For example, when an initial target object is identified from the current image information to be detected according to the deep learning model, it may be sent to an output device such as a display screen or a remote display terminal. After viewing the initial target object on the output device, the user confirms through the input device whether it is the tracking target object.
Step S140: and tracking the position of the tracking target object in the next frame image of the current image to be detected by adopting a tracking algorithm.
The tracking algorithm can be a correlation-filtering method, a mean-shift algorithm, a moving-object modelling method, or the like; for example, the KCF (Kernelized Correlation Filters) algorithm, a correlation-filtering method, may be employed.
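For illustration, a minimal KCF tracking loop with OpenCV might look as follows. It assumes the opencv-contrib-python package (on some OpenCV builds the factory is cv2.legacy.TrackerKCF_create), and the video source and initial bounding box are placeholders; in the method itself the box would come from the deep learning model.

```python
import cv2

# KCF tracker from OpenCV's tracking module (opencv-contrib-python);
# on some builds use cv2.legacy.TrackerKCF_create() instead.
tracker = cv2.TrackerKCF_create()

cap = cv2.VideoCapture(0)  # placeholder source, e.g. the robot's camera
ok, frame = cap.read()
if not ok:
    raise RuntimeError("could not read the current image to be detected")

# Bounding box (x, y, w, h) of the tracking target object; hard-coded here,
# but produced by the recognition step in the actual method.
bbox = (200, 150, 80, 80)
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()                # next frame of the image stream
    if not ok:
        break
    found, bbox = tracker.update(frame)   # position in the next frame
    if found:
        x, y, w, h = (int(v) for v in bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```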
In this implementation, after the control chip of the robot acquires the specific target object and the current image information to be detected, the tracking target object matching the specific target object can be accurately identified from the image information to be detected by the deep learning model, and the position of the tracking target object in the next frame of the current image to be detected is then tracked with the tracking algorithm, so that the target object is intelligently identified and tracked.
Referring to fig. 2, fig. 2 is a flowchart of deep learning model training according to an embodiment of the present invention. In some embodiments of the present invention, before the step of inputting the image information to be detected into the pre-trained deep learning model, the pre-trained model may be obtained through the following steps:
step S210: a training sample set and a testing sample set are obtained.
Step S220: constructing a deep learning model and training it with the training sample set to obtain a trained deep learning model.
Step S230: testing and correcting the trained deep learning model with the test sample set to obtain the final pre-trained deep learning model.
The deep learning model can learn the intrinsic rules and representation levels of data, imitate human thinking, and identify targets accurately. Training the model on the training sample set gives it the ability to analyse data and ensures the accuracy of target identification; testing and correcting the trained model with the test sample set further ensures the accuracy of its recognition results.
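A minimal sketch of steps S210 to S230 in PyTorch, under the assumption of a small classifier and synthetic stand-in samples (the patent fixes neither the network architecture nor any hyper-parameters; real sample sets would be labelled images of target objects):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Step S210: training and test sample sets (synthetic stand-ins here).
x_train, y_train = torch.randn(800, 64), torch.randint(0, 2, (800,))
x_test, y_test = torch.randn(200, 64), torch.randint(0, 2, (200,))
train_loader = DataLoader(TensorDataset(x_train, y_train), batch_size=32, shuffle=True)
test_loader = DataLoader(TensorDataset(x_test, y_test), batch_size=32)

# Step S220: construct a deep learning model and train it.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()

# Step S230: test the trained model; in practice the test results would
# guide correction (more data, tuning) before deployment on the robot.
model.eval()
with torch.no_grad():
    correct = sum((model(xb).argmax(1) == yb).sum().item() for xb, yb in test_loader)
print(f"test accuracy: {correct / len(x_test):.2%}")
```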
In some embodiments of the present invention, after the step of tracking the position of the tracking target object in the next frame of the current image to be detected with the tracking algorithm, the map information of the scene and the preset tracking parameters of the robot may also be obtained. A tracking instruction is then generated according to the map information, the position of the tracking target object in the next frame, and the preset tracking parameters, so as to control the robot to track the target object accordingly. In other words, once the target has been located in the next frame, the map information and the preset tracking parameters are used to generate the instruction that controls the robot's walking while tracking.
As one embodiment, when the tracking instruction is generated according to the map information, the position of the tracking target object in the next frame of the current image to be detected, and the preset tracking parameters, the tracking distance data may first be determined from the map information and that position, and the tracking instruction is then generated from the tracking distance data and the tracking parameters. The tracking distance data can include the distance between the robot and the target object and the direction of the target object relative to the robot's straight-ahead heading, so that a tracking instruction generated from these data and the tracking parameters ensures that the robot tracks the target object accurately.
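As one hedged illustration of turning map positions into tracking distance data and then a tracking instruction (the planar-pose representation and all function names are assumptions; the numeric defaults reuse the distance and speed values from the ROS example later in this description):

```python
import math

def tracking_instruction(robot_xy, robot_heading, target_xy,
                         min_dist=1.5, max_dist=5.0,
                         min_speed=0.4, max_speed=0.6, max_turn=0.75):
    """Turn map positions into (linear speed, turn rate) tracking data.

    robot_heading is in radians in the map frame; the linear speed grows
    linearly with distance between the preset min/max tracking parameters.
    """
    dx, dy = target_xy[0] - robot_xy[0], target_xy[1] - robot_xy[1]
    distance = math.hypot(dx, dy)                 # tracking distance
    bearing = math.atan2(dy, dx) - robot_heading  # direction of the target
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]

    if distance <= min_dist:
        speed = 0.0                               # close enough: stop
    else:
        frac = min(1.0, (distance - min_dist) / (max_dist - min_dist))
        speed = min_speed + frac * (max_speed - min_speed)
    turn = max(-max_turn, min(max_turn, bearing))  # clamp the rotation rate
    return speed, turn

print(tracking_instruction((0.0, 0.0), 0.0, (3.0, 1.0)))
```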
In detail, the robot can include a walking device, and the walking device can include a chassis control system, an encoder and a motor; the chassis control system is connected with the encoder, and the encoder is connected with the motor. After the chassis control system receives the tracking instruction, it sends a control code to the encoder, so that the encoder drives the motor, which in turn drives the robot to complete the tracking walk.
As one of the embodiments, the tracking action of the robot may be implemented with a KCF tracking algorithm applied to a depth camera under ROS. After the ROS environment is started, the depth-camera node is opened and topics publishing the depth image and the RGB image are advertised; one checks that these topics appear in the ROS topic list, then compiles the workspace and starts the tracking program. After the tracking program starts, the user can select the target to be tracked in the image window through the input device and configure the speed plan; the parameters that can be set include, for example, the minimum and maximum distance to the target, the minimum and maximum linear speed, and the minimum and maximum rotation speed. For example, when tracking starts with the target 1.5 m from the robot, the initial speed is 0.4 m/s; the speed increases with distance, and when the distance between the target and the robot reaches the set maximum of 5 m, the speed can rise to the maximum linear speed of 0.6 m/s. If the initial rotation speed of the robot is 0, the rotation speed increases as the angle between the target and the camera's centre point grows, up to a maximum rotation speed of 0.75 rad/s.
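Purely as a sketch of the ROS side (a ROS 1 / rospy environment is assumed; the /tracked_target topic, its message type, and the node name are assumptions rather than the patent's actual interfaces), a follower node reproducing the speed plan above might look like:

```python
#!/usr/bin/env python
import math
import rospy
from geometry_msgs.msg import PointStamped, Twist

MIN_DIST, MAX_DIST = 1.5, 5.0     # tracking distance window from the example
MIN_SPEED, MAX_SPEED = 0.4, 0.6   # linear speed window (m/s)
MAX_TURN = 0.75                   # maximum rotation speed (rad/s)

class FollowerNode:
    """Publish velocity commands that keep the robot tracking the target."""

    def __init__(self):
        rospy.init_node("kcf_follower")
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        # Assumed topic carrying the target position in the robot frame
        # (x forward, y left), e.g. from the KCF box plus the depth image.
        rospy.Subscriber("/tracked_target", PointStamped, self.on_target)

    def on_target(self, msg):
        distance = math.hypot(msg.point.x, msg.point.y)
        bearing = math.atan2(msg.point.y, msg.point.x)
        cmd = Twist()
        if distance > MIN_DIST:   # speed grows linearly with distance
            frac = min(1.0, (distance - MIN_DIST) / (MAX_DIST - MIN_DIST))
            cmd.linear.x = MIN_SPEED + frac * (MAX_SPEED - MIN_SPEED)
        cmd.angular.z = max(-MAX_TURN, min(MAX_TURN, bearing))
        self.cmd_pub.publish(cmd)

if __name__ == "__main__":
    FollowerNode()
    rospy.spin()
```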
The target identification and tracking method provided by the application can be used in different scenes, such as flame identification and tracking, identification and tracking in real-time video, and lane-line identification and tracking.
Referring to fig. 3, fig. 3 is a schematic diagram of the internal structural connections of a robot according to an embodiment of the present invention. The robot 100 includes a depth camera 110 and a control chip 120, which are connected. The depth camera 110 is used for acquiring the current image information to be detected. The control chip 120 is used for acquiring a specific target object and the current image information to be detected; it is further configured to input the image information to be detected into a pre-trained deep learning model; to identify, according to the deep learning model, a tracking target object from the current image information to be detected, wherein the tracking target object matches the specific target object; and to track, with a tracking algorithm, the position of the tracking target object in the next frame of the current image to be detected.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a robot 100 provided in the present application. The robot 100 further includes a walking device 130; the walking device 130 includes a chassis control system, an encoder and a motor, the chassis control system being connected with the encoder and the encoder with the motor.
In some embodiments of the present invention, the control chip 120 is further configured to obtain a training sample set and a test sample set; to construct a deep learning model and train it with the training sample set to obtain a trained deep learning model; and to test and correct the trained deep learning model with the test sample set to obtain the final pre-trained deep learning model.
In some embodiments of the present invention, the control chip 120 is further configured to identify an initial target object from the current image information to be detected according to the deep learning model; to obtain the matching degree between the initial target object and the specific target object; and to take the initial target object as the tracking target object if the matching degree is greater than a preset threshold.
In some embodiments of the present invention, the depth camera 110 is further configured to obtain map information of a scene, and the control chip 120 is further configured to obtain the map information and preset tracking parameters of the robot 100. The control chip 120 is further configured to generate a tracking instruction according to the map information, the position of the tracking target object in the next frame of image of the current image to be detected, and a preset tracking parameter, so as to control the robot 100 to track the tracking target object according to the tracking instruction.
In some embodiments of the present invention, when generating the tracking instruction according to the map information, the position of the tracking target object in the next frame of the current image to be detected, and the preset tracking parameters, the control chip 120 is further configured to determine tracking distance data according to the map information and that position, and to generate the tracking instruction according to the tracking distance data and the tracking parameters.
Referring to fig. 5, fig. 5 is a schematic structural block diagram of an electronic device according to an embodiment of the present disclosure. The electronic device comprises a memory 101, a processor 102 and a communication interface 103, wherein the memory 101, the processor 102 and the communication interface 103 are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The memory 101 may be used to store software programs and modules, and the processor 102 executes the software programs and modules stored in the memory 101 to thereby execute various functional applications and data processing. The communication interface 103 may be used for communicating signaling or data with other node devices.
The Memory 101 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 102 may be an integrated circuit chip having signal processing capabilities. The processor 102 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
It will be appreciated that the configuration shown in fig. 5 is merely illustrative and that the electronic device may include more or fewer components than shown in fig. 5 or have a different configuration than shown in fig. 5. The components shown in fig. 5 may be implemented in hardware, software, or a combination thereof.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
To sum up, the target identification and tracking method and the robot provided by the embodiments of the present application are applied to a control chip of a robot and include: acquiring a specific target object and the current image information to be detected; inputting the image information to be detected into a pre-trained deep learning model; identifying, according to the deep learning model, a tracking target object from the current image information to be detected, wherein the tracking target object matches the specific target object; and tracking, with a tracking algorithm, the position of the tracking target object in the next frame of the current image to be detected. In this implementation, after the control chip acquires the specific target object and the current image information to be detected, the tracking target object matching the specific target object can be accurately identified from the image information to be detected by the deep learning model, and its position in the next frame is then tracked with the tracking algorithm, so that the target object is intelligently identified and tracked.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (10)

1. A target identification and tracking method, applied to a control chip of a robot, the method comprising:
acquiring a specific target object and current image information to be detected;
inputting the image information to be detected into a pre-trained deep learning model;
identifying a tracking target object from the current image information to be detected according to the deep learning model, wherein the tracking target object matches the specific target object;
and tracking the position of the tracking target object in the next frame image of the current image to be detected by adopting a tracking algorithm.
2. The method of claim 1, wherein before the step of inputting the image information to be detected into the pre-trained deep learning model, the method further comprises:
acquiring a training sample set and a test sample set;
constructing a deep learning model, and training the deep learning model by using the training sample set to obtain a trained deep learning model;
and testing and correcting the trained deep learning model by using the test sample set to obtain the pre-trained deep learning model.
3. The method according to claim 1, wherein the step of identifying a tracking target object from the current image information to be detected according to the deep learning model comprises:
identifying an initial target object from the current image information to be detected according to a deep learning model;
acquiring the matching degree between the initial target object and the specific target object;
and if the matching degree is greater than a preset threshold value, taking the initial target object as the tracking target object.
4. The method according to claim 1, wherein after the step of tracking the position of the tracking target object in the image of the next frame of the current image to be detected by using the tracking algorithm, the method further comprises:
acquiring map information of a scene and preset tracking parameters of the robot;
and generating a tracking instruction according to the map information, the position of the tracking target object in the next frame image of the current image to be detected and the preset tracking parameter so as to control the robot to track the tracking target object according to the tracking instruction.
5. The method according to claim 4, wherein the step of generating a tracking instruction according to the map information, the position of the tracking target object in the image of the next frame of the current image to be detected, and the preset tracking parameter comprises:
determining tracking distance data according to the map information and the position of the tracking target object in the next frame image of the current image to be detected;
and generating the tracking instruction according to the tracking distance data and the tracking parameters.
6. A robot, characterized in that the robot comprises: the device comprises a depth camera and a control chip, wherein the depth camera is connected with the control chip;
the depth camera is used for acquiring the current image information to be detected;
the control chip is used for acquiring a specific target object and current image information to be detected;
the control chip is also used for inputting the image information to be detected into a pre-trained deep learning model;
the control chip is further used for identifying a tracking target object from the current image information to be detected according to the deep learning model, wherein the tracking target object matches the specific target object;
the control chip is further configured to track the position of the tracking target object in the next frame of image of the current image to be detected by using a tracking algorithm.
7. The robot of claim 6, wherein the control chip is further configured to obtain a training sample set and a test sample set; to construct a deep learning model and train it with the training sample set to obtain a trained deep learning model; and to test and correct the trained deep learning model with the test sample set to obtain the pre-trained deep learning model.
8. The robot according to claim 6, wherein the control chip is further configured to identify an initial target object from the current image information to be detected according to the deep learning model; to obtain the matching degree between the initial target object and the specific target object; and to take the initial target object as the tracking target object if the matching degree is greater than a preset threshold.
9. An electronic device, comprising:
a memory for storing one or more programs;
a processor;
the one or more programs, when executed by the processor, implement the method of any of claims 1-5.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN202110492441.8A 2021-05-06 2021-05-06 Target identification tracking method and robot Pending CN113192107A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110492441.8A CN113192107A (en) 2021-05-06 2021-05-06 Target identification tracking method and robot

Publications (1)

Publication Number Publication Date
CN113192107A true CN113192107A (en) 2021-07-30

Family

ID=76983824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110492441.8A Pending CN113192107A (en) 2021-05-06 2021-05-06 Target identification tracking method and robot

Country Status (1)

Country Link
CN (1) CN113192107A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110688873A (en) * 2018-07-04 2020-01-14 上海智臻智能网络科技股份有限公司 Multi-target tracking method and face recognition method
CN111488766A (en) * 2019-01-28 2020-08-04 北京京东尚科信息技术有限公司 Target detection method and device
CN110458866A (en) * 2019-08-13 2019-11-15 北京积加科技有限公司 Target tracking method and system
CN112069879A (en) * 2020-07-22 2020-12-11 深圳市优必选科技股份有限公司 Target person following method, computer-readable storage medium and robot

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111002349A (en) * 2019-12-13 2020-04-14 中国科学院深圳先进技术研究院 Robot following steering method and robot system adopting same
WO2023019559A1 (en) * 2021-08-20 2023-02-23 深圳先进技术研究院 Automated stem cell detection method and system, and terminal and storage medium
CN116758111A (en) * 2023-08-21 2023-09-15 中通信息服务有限公司 Construction site target object tracking method and device based on AI algorithm
CN116758111B (en) * 2023-08-21 2023-11-17 中通信息服务有限公司 Construction site target object tracking method and device based on AI algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination