CN114972415A - Robot vision tracking method, system, electronic device and medium - Google Patents

Robot vision tracking method, system, electronic device and medium

Info

Publication number
CN114972415A
CN114972415A
Authority
CN
China
Prior art keywords
image frame
target
monitoring
video data
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111624324.9A
Other languages
Chinese (zh)
Other versions
CN114972415B (en)
Inventor
杨斌
张胜田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Institute Guangdong
Original Assignee
Neusoft Institute Guangdong
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Institute Guangdong filed Critical Neusoft Institute Guangdong
Priority to CN202111624324.9A priority Critical patent/CN114972415B/en
Publication of CN114972415A publication Critical patent/CN114972415A/en
Application granted granted Critical
Publication of CN114972415B publication Critical patent/CN114972415B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory

Abstract

The invention relates to the technical field of digital image processing, and provides a robot vision tracking method, system, electronic device, and medium. The robot vision tracking method comprises the following steps: collecting monitoring video data in real time; judging in real time whether a target monitoring instruction is received, and if so, entering the next step; outputting a target information acquisition request until a contour image frame of a target in the latest video frame in the monitoring video data is received; tracking the current target in the monitoring video data according to the contour image frame, and acquiring the movement data of the current target; and generating a displacement instruction according to the movement data to drive the current robot to adjust the acquisition direction of the monitoring video data, and then acquiring the contour image frame of the current target again. The invention can realize automatic monitoring of a specified target and reduces the workload of manual monitoring by the user.

Description

Robot vision tracking method, system, electronic device and medium
Technical Field
The present invention relates to the field of digital image processing technologies, and in particular, to a robot vision tracking method, system, electronic device, and medium.
Background
A robot is an intelligent machine capable of working semi-autonomously or fully autonomously. It has basic capabilities such as perception, decision-making, and execution; it can assist or even replace human beings in completing dangerous, heavy, and complex work, improve working efficiency and quality, serve human life, and expand or extend the range of human activity and capability. With the development of computer science and automatic control technology, more and more intelligent robots of different types appear in production and life.
At present, in scenarios such as criminal behavior monitoring, indoor pet behavior monitoring, and wild animal behavior monitoring, a certain target generally needs to be tracked continuously so that a user can know the behavior of the current target. However, in the process of using the prior art, the inventors found that the prior art has at least the following problem: when monitoring the target's behavior, the user is required to watch the monitoring picture continuously and to control the camera to rotate toward a specified direction whenever the target moves, which wastes human resources.
Disclosure of Invention
The present invention is directed to solving at least some of the above problems and provides a robot vision tracking method, system, electronic device, and medium.
The technical scheme adopted by the invention is as follows:
in a first aspect, the present invention provides a robot vision tracking method, including:
collecting monitoring video data in real time;
judging whether a target monitoring instruction is received or not in real time, if so, entering the next step;
outputting a target information acquisition request until receiving a contour image frame of a target in a latest video frame in the monitoring video data;
tracking the current target in the monitoring video data according to the outline image frame, and acquiring the moving data of the current target;
and generating a displacement instruction according to the movement data so as to drive the current robot to adjust the acquisition direction of the monitoring video data, and then acquiring the contour image frame of the current target again.
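In procedural terms, the first aspect is a sense-decide-act loop. The following minimal sketch renders it in Python; camera, console, and robot are hypothetical stand-in interfaces rather than anything defined by the invention, and track_target is sketched under Example 1 below.

```python
# Minimal sketch of the first-aspect method as a control loop.
# camera, console and robot are hypothetical stand-in interfaces.
def visual_tracking_loop(camera, console, robot):
    while True:
        prev_frame = camera.latest_frame()          # collect monitoring video data in real time
        if not console.target_monitor_requested():  # wait for a target monitoring instruction
            continue
        console.request_target_info()               # output the target information acquisition request
        outline_box = console.wait_for_outline()    # contour image frame of the target
        while not console.stop_requested():
            frame = camera.latest_frame()
            # track the target and derive its movement data (sketched in Example 1)
            outline_box, movement = track_target(outline_box, prev_frame, frame)
            robot.displace(movement)                # displacement instruction: adjust acquisition direction
            prev_frame = frame
```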
The invention can realize automatic monitoring of the designated target and reduces the workload of manual monitoring by the user. Specifically, in the implementation process, after a target monitoring instruction is received, a target information acquisition request is output until a contour image frame of the target in the latest video frame in the monitoring video data is received; then, the current target in the monitoring video data is tracked according to the contour image frame so as to obtain the movement data of the current target; and finally, a displacement instruction for driving the current robot to adjust its acquisition direction can be generated according to the movement data, so that the robot can continuously track the target and acquire monitoring video data. The invention can be applied to existing monitoring systems: after acquiring the contour image frame of the target, the robot automatically tracks the current target and adjusts its acquisition direction in real time according to the position of the target, thereby effectively reducing the workload of manual monitoring by the user.
In one possible design, after outputting the target information acquisition request, the method further includes:
and judging whether a moving instruction is received or not in real time, and if so, driving the current robot to adjust the acquisition direction of the monitoring video data according to the moving instruction.
In one possible design, tracking the current target in the surveillance video data according to the outline image frame, and obtaining the movement data of the current target, includes:
acquiring, according to the outline image frame, a plurality of related image frames obtained by translating the outline image frame within the original video frame by a specified distance;
acquiring a latest video frame in the monitoring video data;
acquiring a plurality of comparison image frames corresponding to the positions of a plurality of related image frames in the original video frame in the latest video frame;
obtaining the correlation degree between the outline image frame and the plurality of comparison image frames, and defining the comparison image frame with the maximum correlation degree as the latest outline image frame of the target;
and obtaining the moving data of the current target according to the contour image frame and the latest contour image frame.
In one possible design, obtaining the movement data of the current target according to the outline image frame and the latest outline image frame includes:
acquiring the center point coordinate of the contour image frame and the center point coordinate of the latest contour image frame;
and obtaining the moving data of the current target according to the center point coordinate of the contour image frame and the center point coordinate of the latest contour image frame.
In one possible design, obtaining the correlation between the outline image frame and the plurality of the comparison image frames includes:
respectively carrying out image segmentation on the outline image frame and the plurality of comparison image frames;
respectively carrying out center weighting on the segmented outline image frame and the plurality of comparison image frames;
calculating the correlation degree of the outline image frame and a plurality of the comparison image frames; wherein, the correlation degree between the outline image frame and any one of the comparison image frames is as follows:
[Formula image not reproduced in this text.]
In the formula, L denotes the comparison image frame, M denotes the outline image frame, w denotes the total area of the outline image frame, n denotes the number of regions into which the outline image frame and any one of the comparison image frames are divided, α_i denotes the weighting function of the i-th region in the outline image frame and the comparison image frame, r_i denotes the area of the i-th region in the outline image frame, and l_i denotes the overlap area between the i-th region of the outline image frame and the i-th region of the comparison image frame.
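The correlation formula itself appears only as an image in the published text. From the definitions above, a center-weighted region-overlap measure of the following form is consistent with the description; this is a hedged reconstruction for readability, not the verbatim formula:

$$\rho(M, L) = \frac{1}{w} \sum_{i=1}^{n} \alpha_i \, l_i$$

Under this reading, each overlap area l_i is discounted by the center weight α_i, so agreement near the frame center counts more than agreement at the edges, and the sum is normalized by the total area w (a variant reading would normalize each overlap by its region area r_i instead).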
In one possible design, after the monitoring video data is collected in real time, the robot vision tracking method further includes:
carrying out decoding processing and frame interception processing on the monitoring video data to obtain continuous video frames;
and sequentially carrying out image enhancement processing, smoothing processing and sharpening processing on the continuous video frames to obtain the processed video frames.
In one possible design, the robot vision tracking method further includes:
and judging whether a monitoring stopping instruction is received or not in real time, if so, deleting the target information and resetting the robot so as to facilitate the next target tracking.
In a second aspect, the present invention provides a robot vision tracking system, for implementing the robot vision tracking method described in any one of the above; the robot vision tracking system comprises a video data acquisition module, a monitoring instruction acquisition module, a data processing module, a target tracking module and a robot control module, wherein the video data acquisition module and the monitoring instruction acquisition module are both in communication connection with the data processing module, the data processing module is in communication connection with the robot control module through the target tracking module, wherein,
the video data acquisition module is used for acquiring monitoring video data in real time and sending the monitoring video data to the data processing module;
the monitoring instruction acquisition module is used for receiving a target monitoring instruction and sending the target monitoring instruction to the data processing module after receiving the target monitoring instruction;
the data processing module is used for judging whether a target monitoring instruction is received in real time, and outputting a target information acquisition request after the target monitoring instruction is received until a contour image frame of a target in a latest video frame in the monitoring video data is received;
the target tracking module is used for tracking the current target in the monitoring video data according to the outline image frame and acquiring the moving data of the current target;
and the robot control module is used for generating a displacement instruction according to the movement data so as to drive the current robot to adjust the acquisition direction of the monitoring video data.
In a third aspect, the present invention provides an electronic device, comprising:
a memory for storing computer program instructions; and
a processor for executing the computer program instructions to perform the operations of the robot vision tracking method of any one of the above.
In a fourth aspect, the present invention provides a computer-readable storage medium for storing computer-readable computer program instructions configured to, when executed, perform the operations of the robot vision tracking method described in any one of the above.
Drawings
FIG. 1 is a flow chart of a robot vision tracking method in the present invention;
FIG. 2 is a block diagram of a robot vision tracking system in accordance with the present invention;
fig. 3 is a block diagram of an electronic device according to the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments.
It should be understood that, in some embodiments, the functions or steps may occur out of the order noted in the figures. For example, two steps shown in succession may, in fact, be executed substantially concurrently, or the steps may sometimes be executed in the reverse order, depending upon the functionality involved.
Example 1:
A first aspect of the present embodiment provides a robot vision tracking method, which may be, but is not limited to being, executed by a computer device or a virtual machine with certain computing resources, for example by an electronic device such as a personal computer (a multipurpose computer whose size, price, and performance make it suitable for personal use; desktop computers, notebook computers, mini-notebook computers, tablet computers, and ultrabooks all belong to this category), a smart phone, a personal digital assistant (PDA), or a wearable device, so as to achieve automatic monitoring of a specified target.
As shown in fig. 1, the robot vision tracking method may include, but is not limited to, the following steps:
s1, collecting monitoring video data in real time;
In this embodiment, after the monitoring video data is collected in real time, the robot vision tracking method further includes:
carrying out decoding processing and frame interception processing on the monitoring video data to obtain continuous video frames;
and sequentially carrying out image enhancement processing, smoothing processing and sharpening processing on the continuous video frames to obtain the processed video frames.
It should be noted that the image enhancement processing, smoothing processing, and sharpening processing all adopt existing techniques; in this embodiment, subjecting the video frames to such preprocessing improves the quality of the output video frames and facilitates the subsequent tracking of the target in the video frames.
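The concrete operators are not fixed by the patent; the sketch below uses histogram equalization, Gaussian smoothing, and unsharp masking as common stand-ins for the enhancement, smoothing, and sharpening processing, and the stream source name is an assumption.

```python
import cv2

# Illustrative preprocessing chain for decoded video frames; the operator
# choices and parameters are assumptions, not specified by the patent.
def preprocess(frame_bgr):
    # image enhancement: equalize the luminance channel in YCrCb space
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    enhanced = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    # smoothing: suppress sensor noise
    smoothed = cv2.GaussianBlur(enhanced, (5, 5), 0)
    # sharpening: unsharp mask restores edges softened by the smoothing
    blurred = cv2.GaussianBlur(smoothed, (0, 0), 3)
    return cv2.addWeighted(smoothed, 1.5, blurred, -0.5, 0)

# decoding and frame interception: read the stream frame by frame
cap = cv2.VideoCapture("monitor_stream.mp4")  # hypothetical video source
ok, frame = cap.read()
while ok:
    processed = preprocess(frame)
    ok, frame = cap.read()
```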
S2, judging in real time whether a target monitoring instruction is received; if so, entering step S3, and if not, taking no action;
S3, outputting a target information acquisition request;
S4, judging in real time whether a movement instruction is received; if so, driving the current robot to adjust the acquisition direction of the monitoring video data according to the movement instruction, and if not, taking no action.
It should be noted that, when a specific target is monitored, the target may leave the monitoring picture. When the target is not in the monitoring picture, the robot can adjust its own position and/or the orientation of the video data acquisition module according to a movement instruction sent by the user, until the target is back in the monitoring picture, thereby broadening the application scenarios of this embodiment.
S5, receiving a contour image frame of a target in the latest video frame in the monitoring video data;
S6, tracking the current target in the monitoring video data according to the outline image frame, and acquiring the moving data of the current target;
in this embodiment, the specific steps of step S6 are as follows:
S601, acquiring, according to the contour image frame, a plurality of related image frames obtained by translating the contour image frame within the original video frame by a specified distance;
S602, acquiring the latest video frame in the monitoring video data;
S603, acquiring, in the latest video frame, a plurality of comparison image frames corresponding to the positions of the plurality of related image frames in the original video frame;
S604, obtaining the correlation degree between the outline image frame and the plurality of comparison image frames, and defining the comparison image frame with the maximum correlation degree as the latest outline image frame of the target;
in this embodiment, obtaining the correlation between the outline image frame and the plurality of comparison image frames includes:
A1. respectively carrying out image segmentation on the outline image frame and the plurality of comparison image frames; in the image segmentation, according to the consistency attribute of the image (for example, the gray value of a gray image), the pixel points in the image are divided into different regions representing specific gray features according to a certain rule, so as to perform the subsequent correlation calculation.
A2. respectively carrying out center weighting on the segmented outline image frame and the plurality of comparison image frames, so that, during correlation matching, the contribution of each region decreases from the center to the edge, which reduces the influence of noise and target deformation on the correlation calculation;
A3. calculating the correlation degree of the outline image frame and a plurality of the comparison image frames; wherein, the correlation degree between the outline image frame and any one of the comparison image frames is as follows:
[Formula image not reproduced in this text.]
In the formula, L denotes the comparison image frame, M denotes the outline image frame, w denotes the total area of the outline image frame, n denotes the number of regions into which the outline image frame and any one of the comparison image frames are divided, α_i denotes the weighting function of the i-th region in the outline image frame and the comparison image frame, r_i denotes the area of the i-th region in the outline image frame, and l_i denotes the overlap area between the i-th region of the outline image frame and the i-th region of the comparison image frame.
In this embodiment, α_i is specifically:
[Formula image not reproduced in this text.]
where (x_i, y_i) denotes the centroid coordinates of the i-th region in the outline image frame and the comparison image frame, and the remaining symbols in the formula are constants.
In this embodiment, introducing the weighting function α_i improves the accuracy of the matching result between the outline image frame and the plurality of comparison image frames.
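Because the formula images are not reproduced here, the sketch below makes its assumptions explicit: the frames are segmented on a regular grid rather than by gray-level consistency, α_i is taken as a Gaussian weight on each region's centroid distance from the frame center, and the correlation is the α-weighted sum of per-region overlap areas normalized by the total outline-frame area. All names and parameters are illustrative.

```python
import numpy as np

# Hedged sketch of steps A1-A3 on two same-sized binary masks.
def region_correlation(outline_mask, compare_mask, grid=4):
    h, w = outline_mask.shape
    total_area = float(outline_mask.sum())          # w in the text: total outline area
    cx, cy = w / 2.0, h / 2.0
    sigma = 0.5 * max(w, h)
    score = 0.0
    for gy in range(grid):                          # A1: grid segmentation (assumption)
        for gx in range(grid):
            ys = slice(gy * h // grid, (gy + 1) * h // grid)
            xs = slice(gx * w // grid, (gx + 1) * w // grid)
            region = outline_mask[ys, xs]           # region.sum() would be r_i
            overlap = float(np.logical_and(region, compare_mask[ys, xs]).sum())  # l_i
            # A2: center weighting via the region centroid (x_i, y_i)
            x_i = (xs.start + xs.stop) / 2.0
            y_i = (ys.start + ys.stop) / 2.0
            alpha_i = np.exp(-((x_i - cx) ** 2 + (y_i - cy) ** 2) / sigma ** 2)
            score += alpha_i * overlap              # A3: accumulate weighted overlap
    return score / total_area if total_area else 0.0
```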
And S605, obtaining the moving data of the current target according to the outline image frame and the latest outline image frame.
Specifically, step S605 includes:
B1. acquiring the center point coordinate of the outline image frame and the center point coordinate of the latest outline image frame;
B2. and obtaining the moving data of the current target according to the center point coordinate of the contour image frame and the center point coordinate of the latest contour image frame.
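Putting steps S601 to S605 together: the sketch below translates the contour window by an assumed step in each of eight directions (plus the unshifted window) to form the comparison windows, scores each with the region_correlation sketch above, and derives the movement data from the shift between the two center points. The step size and the thresholding of frames into binary masks are assumptions.

```python
# Hedged sketch of steps S601-S605 on grayscale frames as numpy arrays.
def track_target(outline_box, prev_frame, latest_frame, step=16):
    x, y, w, h = outline_box                        # (left, top, width, height)
    candidates = [(x + dx, y + dy, w, h)
                  for dx in (-step, 0, step)
                  for dy in (-step, 0, step)]       # S601/S603: translated windows
    template = prev_frame[y:y + h, x:x + w] > 0     # contour image frame as a mask
    best_box, best_score = outline_box, -1.0
    for bx, by, bw, bh in candidates:
        if bx < 0 or by < 0:
            continue                                # window left the frame
        window = latest_frame[by:by + bh, bx:bx + bw] > 0
        if window.shape != template.shape:
            continue                                # window left the frame
        score = region_correlation(template, window)    # S604: correlation degree
        if score > best_score:
            best_score, best_box = score, (bx, by, bw, bh)
    # S605 / B1-B2: movement data from the two center point coordinates
    old_cx, old_cy = x + w / 2.0, y + h / 2.0
    new_cx, new_cy = best_box[0] + w / 2.0, best_box[1] + h / 2.0
    return best_box, (new_cx - old_cx, new_cy - old_cy)
```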
S7, generating a displacement instruction according to the movement data so as to drive the current robot to adjust the acquisition direction of the monitoring video data, further ensure that the target is continuously located in the monitoring video, then re-acquire the contour image frame of the current target, and then return to the step S6 so as to update the contour image frame of the current target;
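Step S7 leaves the drive mechanism open. Assuming a pan-tilt camera mount, the pixel offset in the movement data can be mapped to angular corrections through the camera's fields of view, as in this illustrative sketch (the FOV values are placeholders):

```python
# Illustrative mapping from movement data (pixels) to a pan/tilt
# displacement instruction; assumes a pan-tilt mount with known FOV.
def displacement_instruction(movement, frame_w, frame_h,
                             fov_h_deg=60.0, fov_v_deg=40.0):
    dx, dy = movement                   # center-point shift from step S6
    pan = dx / frame_w * fov_h_deg      # pan right when the target moved right
    tilt = -dy / frame_h * fov_v_deg    # tilt up when the target moved up (image y grows downward)
    return pan, tilt
```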
and S8, judging whether a monitoring stopping instruction is received or not in real time, if so, deleting the target information and resetting the robot so as to facilitate the next target tracking.
This embodiment can realize automatic monitoring of the specified target and reduces the workload of manual monitoring by the user. Specifically, in the implementation process of this embodiment, after a target monitoring instruction is received, a target information acquisition request is output until a contour image frame of the target in the latest video frame in the monitoring video data is received; then, the current target in the monitoring video data is tracked according to the contour image frame so as to acquire the movement data of the current target; and finally, a displacement instruction for driving the current robot to adjust its acquisition direction can be generated according to the movement data, so that the robot can continuously track the target and acquire monitoring video data. This embodiment can be applied to existing monitoring systems: after acquiring the contour image frame of the target, the robot automatically tracks the current target and adjusts its acquisition direction in real time according to the position of the target, thereby effectively reducing the workload of manual monitoring by the user.
Example 2:
This embodiment provides a robot vision tracking system, which is used for implementing the robot vision tracking method of embodiment 1; as shown in fig. 2, the robot vision tracking system includes a video data acquisition module, a monitoring instruction acquisition module, a data processing module, a target tracking module and a robot control module, wherein the video data acquisition module and the monitoring instruction acquisition module are both connected with the data processing module in a communication manner, and the data processing module is connected with the robot control module in a communication manner through the target tracking module, wherein,
the video data acquisition module is used for acquiring monitoring video data in real time and sending the monitoring video data to the data processing module;
the monitoring instruction acquisition module is used for receiving a target monitoring instruction and sending the target monitoring instruction to the data processing module after receiving the target monitoring instruction;
the data processing module is used for judging in real time whether a target monitoring instruction is received, and for outputting a target information acquisition request after the target monitoring instruction is received, until a contour image frame of the target in the latest video frame in the monitoring video data is received;
the target tracking module is used for tracking the current target in the monitoring video data according to the outline image frame and acquiring the moving data of the current target;
and the robot control module is used for generating a displacement instruction according to the movement data so as to drive the current robot to adjust the acquisition direction of the monitoring video data.
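The communication paths of this system can be sketched as plain objects; every class and method name below is invented for illustration, and only the wiring (the acquisition and instruction modules feed the data processing module, which reaches the robot control module through the target tracking module) follows the description above.

```python
# Wiring sketch for Example 2; names are illustrative.
class RobotControlModule:
    def displace(self, movement):
        pass  # generate the displacement instruction from the movement data

class TargetTrackingModule:
    def __init__(self, control: RobotControlModule):
        self.control = control

    def track(self, outline_box, latest_frame):
        movement = (0.0, 0.0)  # placeholder: computed as in Example 1, step S6
        self.control.displace(movement)

class DataProcessingModule:
    def __init__(self, tracking: TargetTrackingModule):
        self.tracking = tracking
        self.latest_frame = None

    def on_video(self, frame):                      # fed by the video data acquisition module
        self.latest_frame = frame

    def on_monitor_instruction(self, outline_box):  # fed by the monitoring instruction module
        # after the target information request is answered, hand off to tracking
        self.tracking.track(outline_box, self.latest_frame)
```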
Example 3:
On the basis of embodiment 1 or 2, this embodiment discloses an electronic device, which may be a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like. The electronic device may also be referred to as a terminal device, a portable terminal, a desktop terminal, or the like. As shown in fig. 3, the electronic device includes:
a memory for storing computer program instructions; and
a processor for executing the computer program instructions to perform the operations of the robot vision tracking method of embodiment 1.
In particular, the processor 301 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 301 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in the awake state, also called a Central Processing Unit (CPU), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen.
The memory 302 may include one or more computer-readable storage media, which may be non-transitory. The memory 302 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 302 is used to store at least one instruction for execution by the processor 301 to implement the robot vision tracking method provided by the method embodiments herein.
In some embodiments, the terminal may further include: a communication interface 303 and at least one peripheral device. The processor 301, the memory 302 and the communication interface 303 may be connected by a bus or signal lines. Various peripheral devices may be connected to communication interface 303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 304, a display screen 305, and a power source 306.
The communication interface 303 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 301 and the memory 302. In some embodiments, the processor 301, the memory 302, and the communication interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302, and the communication interface 303 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 304 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 304 communicates with communication networks and other communication devices via electromagnetic signals.
The display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof.
The power supply 306 is used to power various components in the electronic device.
Example 4:
On the basis of any one of embodiments 1 to 3, this embodiment discloses a computer-readable storage medium for storing computer-readable computer program instructions configured to, when executed, perform the operations of the robot vision tracking method according to embodiment 1.
It should be noted that the functions described herein, if implemented in software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and they may alternatively be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, or fabricated separately as individual integrated circuit modules, or fabricated as a single integrated circuit module from multiple modules or steps. Thus, the present invention is not limited to any specific combination of hardware and software.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: modifications of the technical solutions described in the embodiments or equivalent replacements of some technical features may still be made. And such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Finally, it should be noted that the present invention is not limited to the above alternative embodiments, and various other forms of products can be obtained by anyone in light of the present invention. The above detailed description should not be taken as limiting the scope of the invention, which is defined by the claims; the description is to be used in interpreting the claims.

Claims (10)

1. A robot vision tracking method is characterized in that: the method comprises the following steps:
collecting monitoring video data in real time;
judging whether a target monitoring instruction is received or not in real time, if so, entering the next step;
outputting a target information acquisition request until receiving a contour image frame of a target in a latest video frame in the monitoring video data;
tracking the current target in the monitoring video data according to the outline image frame, and acquiring the moving data of the current target;
and generating a displacement instruction according to the movement data so as to drive the current robot to adjust the acquisition direction of the monitoring video data, and then acquiring the contour image frame of the current target again.
2. The robot vision tracking method of claim 1, wherein: after outputting the target information acquisition request, the method further comprises the following steps:
and judging whether a moving instruction is received or not in real time, and if so, driving the current robot to adjust the acquisition direction of the monitoring video data according to the moving instruction.
3. The robot vision tracking method of claim 1, wherein: tracking the current target in the monitoring video data according to the outline image frame, and acquiring the moving data of the current target, wherein the method comprises the following steps:
according to the outline image frame, acquiring a plurality of related image frames of the original video frame after the outline image frame is translated by a specified distance;
acquiring a latest video frame in the monitoring video data;
acquiring a plurality of comparison image frames corresponding to the positions of a plurality of related image frames in the original video frame in the latest video frame;
obtaining the correlation degree between the outline image frame and the plurality of comparison image frames, and defining the comparison image frame with the maximum correlation degree as the latest outline image frame of the target;
and obtaining the moving data of the current target according to the contour image frame and the latest contour image frame.
4. A robot vision tracking method according to claim 3, characterized in that: obtaining the moving data of the current target according to the contour image frame and the latest contour image frame, wherein the moving data comprises the following steps:
acquiring the center point coordinate of the contour image frame and the center point coordinate of the latest contour image frame;
and obtaining the moving data of the current target according to the center point coordinate of the contour image frame and the center point coordinate of the latest contour image frame.
5. A robot vision tracking method according to claim 3, characterized in that: obtaining the correlation degree between the outline image frame and a plurality of the comparison image frames, comprising:
respectively carrying out image segmentation on the outline image frame and the plurality of comparison image frames;
respectively carrying out center weighting on the segmented outline image frame and the plurality of comparison image frames;
calculating the correlation degree of the outline image frame and a plurality of the comparison image frames; wherein, the correlation degree between the outline image frame and any one of the comparison image frames is as follows:
[Formula image not reproduced in this text.]
In the formula, L denotes the comparison image frame, M denotes the outline image frame, w denotes the total area of the outline image frame, n denotes the number of regions into which the outline image frame and any one of the comparison image frames are divided, α_i denotes the weighting function of the i-th region in the outline image frame and the comparison image frame, r_i denotes the area of the i-th region in the outline image frame, and l_i denotes the overlap area between the i-th region of the outline image frame and the i-th region of the comparison image frame.
6. The robot vision tracking method of claim 1, wherein: after the monitoring video data is collected in real time, the robot vision tracking method further comprises the following steps:
carrying out decoding processing and frame interception processing on the monitoring video data to obtain continuous video frames;
and sequentially carrying out image enhancement processing, smoothing processing and sharpening processing on the continuous video frames to obtain the processed video frames.
7. The robot vision tracking method of claim 1, wherein: the robot vision tracking method further includes:
and judging whether a monitoring stopping instruction is received or not in real time, if so, deleting the target information and resetting the robot so as to facilitate the next target tracking.
8. A robot vision tracking system, characterized in that: it is used for implementing the robot vision tracking method as claimed in any one of claims 1 to 7; the robot vision tracking system comprises a video data acquisition module, a monitoring instruction acquisition module, a data processing module, a target tracking module and a robot control module, wherein the video data acquisition module and the monitoring instruction acquisition module are both in communication connection with the data processing module, and the data processing module is in communication connection with the robot control module through the target tracking module, wherein,
the video data acquisition module is used for acquiring monitoring video data in real time and sending the monitoring video data to the data processing module;
the monitoring instruction acquisition module is used for receiving a target monitoring instruction and sending the target monitoring instruction to the data processing module after receiving the target monitoring instruction;
the data processing module is used for judging in real time whether a target monitoring instruction is received, and for outputting a target information acquisition request after the target monitoring instruction is received, until a contour image frame of the target in the latest video frame in the monitoring video data is received;
the target tracking module is used for tracking the current target in the monitoring video data according to the outline image frame and acquiring the moving data of the current target;
and the robot control module is used for generating a displacement instruction according to the movement data so as to drive the current robot to adjust the acquisition direction of the monitoring video data.
9. An electronic device, characterized in that it comprises:
a memory for storing computer program instructions; and
a processor for executing the computer program instructions to perform the operations of the robot vision tracking method of any one of claims 1 to 7.
10. A computer-readable storage medium storing computer-readable computer program instructions, characterized in that: the computer program instructions are configured to perform the operations of the robotic visual tracking method of any one of claims 1-7 when executed.
CN202111624324.9A 2021-12-28 2021-12-28 Robot vision tracking method, system, electronic device and medium Active CN114972415B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111624324.9A CN114972415B (en) 2021-12-28 2021-12-28 Robot vision tracking method, system, electronic device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111624324.9A CN114972415B (en) 2021-12-28 2021-12-28 Robot vision tracking method, system, electronic device and medium

Publications (2)

Publication Number Publication Date
CN114972415A (en) 2022-08-30
CN114972415B (en) 2023-03-28

Family

ID=82974373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111624324.9A Active CN114972415B (en) 2021-12-28 2021-12-28 Robot vision tracking method, system, electronic device and medium

Country Status (1)

Country Link
CN (1) CN114972415B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170050563A1 (en) * 2015-08-19 2017-02-23 Harris Corporation Gimbaled camera object tracking system
CN108472095A (en) * 2015-12-29 2018-08-31 皇家飞利浦有限公司 The system of virtual reality device, controller and method are used for robotic surgical
CN105844669A (en) * 2016-03-28 2016-08-10 华中科技大学 Video target real-time tracking method based on partial Hash features
WO2018068771A1 (en) * 2016-10-12 2018-04-19 纳恩博(北京)科技有限公司 Target tracking method and system, electronic device, and computer storage medium
CN108257158A (en) * 2018-03-27 2018-07-06 福州大学 A kind of target prediction and tracking based on Recognition with Recurrent Neural Network
CN108875683A (en) * 2018-06-30 2018-11-23 北京宙心科技有限公司 Robot vision tracking method and system
CN109376601A (en) * 2018-09-21 2019-02-22 深圳市九洲电器有限公司 Object tracking methods, monitoring server based on clipping the ball, video monitoring system
CN111862154A (en) * 2020-07-13 2020-10-30 中移(杭州)信息技术有限公司 Robot vision tracking method and device, robot and storage medium
CN112884809A (en) * 2021-02-26 2021-06-01 北京市商汤科技开发有限公司 Target tracking method and device, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116895043A (en) * 2023-06-13 2023-10-17 郑州宝冶钢结构有限公司 Intelligent safety monitoring and early warning method, system and storage medium for construction site
CN116895043B (en) * 2023-06-13 2024-01-26 郑州宝冶钢结构有限公司 Intelligent safety monitoring and early warning method, system and storage medium for construction site

Also Published As

Publication number Publication date
CN114972415B (en) 2023-03-28

Similar Documents

Publication Publication Date Title
JP7248799B2 (en) IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS, COMPUTER PROGRAM, AND IMAGE PROCESSING DEVICE
US10891473B2 (en) Method and device for use in hand gesture recognition
US11940774B2 (en) Action imitation method and robot and computer readable storage medium using the same
CN110853076A (en) Target tracking method, device, equipment and storage medium
US11850747B2 (en) Action imitation method and robot and computer readable medium using the same
US20210272306A1 (en) Method for training image depth estimation model and method for processing image depth information
US10867390B2 (en) Computer vision processing
CN112336342B (en) Hand key point detection method and device and terminal equipment
WO2020244075A1 (en) Sign language recognition method and apparatus, and computer device and storage medium
US10945888B2 (en) Intelligent blind guide method and apparatus
CN111738072A (en) Training method and device of target detection model and electronic equipment
US20220262093A1 (en) Object detection method and system, and non-transitory computer-readable medium
CN113808162B (en) Target tracking method, device, electronic equipment and storage medium
CN111722245A (en) Positioning method, positioning device and electronic equipment
CN114972415B (en) Robot vision tracking method, system, electronic device and medium
CN111640123A (en) Background-free image generation method, device, equipment and medium
EP3901908B1 (en) Method and apparatus for tracking target, device, medium and computer program product
CN111242084B (en) Robot control method, robot control device, robot and computer readable storage medium
CN115661493B (en) Method, device, equipment and storage medium for determining object pose
CN114973006B (en) Method, device and system for picking Chinese prickly ash and storage medium
WO2022205841A1 (en) Robot navigation method and apparatus, and terminal device and computer-readable storage medium
CN112101284A (en) Image recognition method, training method, device and system of image recognition model
CN111507944A (en) Skin smoothness determination method and device and electronic equipment
WO2021214540A1 (en) Robust camera localization based on a single color component image and multi-modal learning
CN117911311A (en) Image anomaly detection method, device and storage medium based on self-encoder model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant