CN111246181B - Robot monitoring method, system, equipment and storage medium


Info

Publication number
CN111246181B
Authority
CN (China)
Prior art keywords
shooting
camera
monitoring
information
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010093396.4A
Other languages
Chinese (zh)
Other versions
CN111246181A (en)
Inventor
董彦明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Bozhilin Robot Co Ltd
Original Assignee
Guangdong Bozhilin Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Bozhilin Robot Co Ltd
Priority to CN202010093396.4A
Publication of CN111246181A
Application granted
Publication of CN111246181B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Abstract

Embodiments of the invention disclose a robot monitoring method, system, device and storage medium. The robot monitoring method comprises the following steps: acquiring shooting information of a monitoring target sent by each camera in a camera array, where the camera array comprises a preset number of cameras, the position information of each camera is different, and the shooting information of the monitoring target is collected from different shooting angles; and determining pose parameters of the monitoring target according to the position information of each camera and the acquired shooting information, where the pose parameters include at least one of a robot angle and a robot position of the monitoring target. By arranging cameras at different positions and angles around the monitored target and determining its pose parameters from the collected shooting information and the camera positions, the technical scheme of the embodiments enables multi-angle, all-around monitoring of the robot.

Description

Robot monitoring method, system, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of monitoring, in particular to a monitoring method, a monitoring system, monitoring equipment and a storage medium for a robot.
Background
In intelligent construction engineering, construction robots can be used to carry out building work. While a construction robot operates, its working state needs to be monitored so that problems can be found and corrected in time.
However, most existing monitoring systems for construction robots rely on a fixed camera; the monitoring range is limited and the monitored pictures are not comprehensive enough, so problems with the robot cannot be found effectively.
Disclosure of Invention
The embodiment of the invention discloses a monitoring method, a monitoring system, monitoring equipment and a storage medium of a robot, which realize real-time and comprehensive monitoring of the robot.
In a first aspect, an embodiment of the present invention provides a robot monitoring method, where the method includes:
acquiring shooting information of a monitoring target sent by each camera in a camera array, wherein the camera array comprises a preset number of cameras, the position information of each camera is different, and the shooting information of the monitoring target is acquired based on different shooting angles;
and determining pose parameters of the monitoring target according to the position information of each camera and the acquired shooting information, wherein the pose parameters comprise at least one of a robot angle and a robot position of the monitoring target.
In a second aspect, an embodiment of the present invention further provides a robot monitoring system, where the system includes a camera array and a monitoring and scheduling center; the camera array comprises a preset number of cameras, and each camera acquires shooting information of a monitoring target based on a different shooting angle;
the monitoring and scheduling center is in communication connection with each camera and executes the robot monitoring method provided by any embodiment of the invention.
In a third aspect, an embodiment of the present invention further provides a monitoring apparatus for a robot, where the apparatus includes:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the robot monitoring method according to any embodiment of the present invention.
In a fourth aspect, embodiments of the present invention further provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform the method for monitoring a robot provided in any of the embodiments of the present invention.
According to the technical scheme of the embodiments of the invention, a plurality of cameras at different positions and angles are arranged around the monitored target, so the target is monitored in an all-around manner, and its pose parameters are determined from the shooting information collected by the cameras and the camera positions. The robot is thus monitored from multiple angles and in all directions, its monitoring range is enlarged, work problems are discovered more promptly, and its normal operation is ensured.
Drawings
Fig. 1 is a flowchart of a monitoring method for a robot according to a first embodiment of the present invention;
fig. 2 is a flowchart of a monitoring method for a robot according to a second embodiment of the present invention;
fig. 3A is a schematic structural diagram of a monitoring system of a robot according to a third embodiment of the present invention;
fig. 3B is a schematic structural diagram of a camera array in the third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a monitoring device of a robot according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a robot monitoring method according to the first embodiment of the present invention. The embodiment is applicable to monitoring a robot, and the method may be executed by a robot monitoring system. As shown in Fig. 1, the method includes the following steps:
Step 110: acquiring shooting information of the monitoring target sent by each camera in the camera array.
The camera array comprises a preset number of cameras, the position information of each camera is different, and shooting information of the monitoring target is collected based on different shooting angles. The monitoring target may be a robot in a working state, and specifically may be a construction robot, an industrial robot, a service robot, or the like.
Specifically, each camera in the camera array can monitor the monitoring target in real time, generate shooting information, which may be a picture or a video, and send each piece of shooting information to the next processing stage in real time; that is, the shooting information of the monitoring target sent by each camera in the camera array is obtained in real time. Alternatively, the shooting information sent by each camera may be obtained at a set period, where the set period may be 1 min, 5 min or another value.
Further, the number of cameras in the camera array, that is, the preset number, may be determined according to the field of view of the cameras and the operation range of the monitoring target.
For example, assume the operation range of the monitoring target is 360° and the field of view of each camera is 45°. To monitor the target more comprehensively, all-around information is acquired with some redundancy; weighing this against monitoring cost, the number of cameras may be 10 to 12.
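For illustration only, this sizing rule can be sketched in Python; the patent gives the numbers above but no formula, and the 30% redundancy factor below is an assumption:
```python
import math

# Illustrative sketch only: minimum camera count to cover the operation
# range, padded by an assumed redundancy factor for overlapping coverage.
def camera_count(operation_range_deg, fov_deg, redundancy=1.3):
    minimum = math.ceil(operation_range_deg / fov_deg)  # 360 / 45 -> 8
    return math.ceil(minimum * redundancy)              # 8 * 1.3 -> 11

print(camera_count(360, 45))  # 11, within the 10-12 range given above
```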
Specifically, when a shooting information acquisition instruction is detected, the shooting information of the monitoring target sent by each camera in the camera array may be acquired.
Further, the shooting information of the cameras may be captured at the same moment, or captured in sequence at a set time interval; that is, the capture times of the shooting information collected by the different cameras differ by a time offset.
And 120, determining the pose parameters of the monitoring target according to the position information of each camera and the acquired shooting information.
Wherein the pose parameters include at least one of a robot angle and a robot position of the monitoring target. The position of the robot, i.e. the geographical position of the monitored target, can be represented by three-dimensional coordinates, and the position of the robot can also be the position information of the monitored target in the work site. The robot angle includes angles of various portions of the robot. The position information of the camera may be relative position information of the camera and the monitoring target. The shooting information can be pictures or videos and can also comprise camera numbers.
Specifically, the shooting angle and distance of each camera are determined according to the position information of the camera, and the pose parameter of the monitoring target is determined by combining the shooting parameters, such as the focal length, of each camera and the shooting information, so that related personnel can judge whether the working state of the monitoring target is normal or not according to the pose parameter.
Further, for each shooting information, the image coordinates can be converted into world coordinates based on coordinate transformation, the geographic position of the monitoring target is determined according to target identification, and the angle information of the monitoring target is determined according to the position information of each camera and the geographic position of the monitoring target.
Specifically, the correspondence between the camera number and the position information of each camera may be generated in advance, the position information of each camera may be determined according to the camera number and the correspondence in the shooting information collected by each camera, and then the robot angle, the position information, and the like of the monitoring target may be calculated according to the position information of each camera and the collected shooting information.
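One way such a conversion could be sketched (an illustrative linear triangulation, not necessarily the patent's algorithm) is shown below; the 3x4 projection matrices P1 and P2 are assumed to be derivable from each camera's stored position, orientation and focal length:
```python
import numpy as np

# Sketch of recovering the target's world position from two calibrated
# cameras via direct linear transform (DLT) triangulation.
def triangulate(P1, P2, uv1, uv2):
    """Least-squares world point from pixel observations (u, v) in two views."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]           # homogeneous solution of A X = 0
    return X[:3] / X[3]  # 3-D world position of the target
```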
Optionally, determining the pose parameters of the monitoring target according to the position information of each camera and the acquired shooting information includes:
receiving the shooting information of each camera and determining the position information of the corresponding camera from the shooting information; determining the shooting angle of the camera relative to the monitoring target from the shooting information and the camera's position information; and determining the pose parameters of the monitoring target from the shooting information and shooting angle of each camera.
Specifically, the shooting information may include a camera code. The shooting information may be a picture or a video, and the camera code may be placed in the upper-right corner, upper-left corner or another position of the picture or video, or set in the file name of the shooting information; of course, the camera code may also be stored in another form. The position information of the camera can then be determined from the pre-stored correspondence between camera codes, cameras and position information.
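A minimal sketch of such a pre-stored correspondence, with made-up codes and coordinates rather than values from the patent, might look like:
```python
# Illustrative registry mapping camera codes to position information;
# the codes, coordinates and angles are example values only.
CAMERA_REGISTRY = {
    "01": {"position": (3.0, 0.0, 1.5), "shooting_angle": 180.0},
    "02": {"position": (2.6, 1.5, 1.5), "shooting_angle": 210.0},
}

def camera_position(camera_code):
    """Resolve the reporting camera's position from the code in its footage."""
    return CAMERA_REGISTRY[camera_code]["position"]
```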
Optionally, before the shooting information of the monitored target sent by each camera in the camera array is acquired, or after the pose parameters of the monitored target are obtained, the robot monitoring method further includes:
sending a shooting instruction to each camera to control the camera to collect shooting information of the monitoring target according to the shooting instruction, where the shooting instruction includes the shooting time for the camera.
The shooting instruction directs each camera in the camera array to shoot according to the shooting time and shooting mode contained in the instruction, where the shooting mode covers whether images or videos are captured as well as parameters such as the focal length and resolution used when shooting.
Specifically, the shooting times of the cameras may differ; for example, adjacent cameras may shoot 1 s apart, so that parameters relating to each working state of the monitoring target, such as its position, motion trajectory, the angle of each part and the operation curve of the working arm, can be calculated from each piece of shooting information together with its position information and shooting time.
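For illustration, a minimal sketch of deriving a motion trajectory and velocity profile from timestamped positions; the positions are assumed to come from the pose-determination step above:
```python
import numpy as np

# Sketch: working-state parameters (path and per-step velocity) from
# timestamped robot positions, via finite differences.
def motion_profile(timestamps, positions):
    t = np.asarray(timestamps, dtype=float)       # shape (N,)
    p = np.asarray(positions, dtype=float)        # shape (N, 3)
    v = np.diff(p, axis=0) / np.diff(t)[:, None]  # finite-difference velocity
    return p, v

path, velocity = motion_profile(
    [0.0, 1.0, 2.0],
    [[0.0, 0.0, 0.0], [0.10, 0.00, 0.0], [0.25, 0.05, 0.0]])
```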
According to the technical scheme of this embodiment, a plurality of cameras at different positions and angles are arranged around the monitored target, so the target is monitored in an all-around manner, and its pose parameters are determined from the shooting information collected by the cameras and the camera positions. The robot is thus monitored from multiple angles and in all directions, its monitoring range is enlarged, work problems are discovered more promptly, and its normal operation is ensured.
Example two
Fig. 2 is a flowchart of a robot monitoring method according to the second embodiment of the present invention, which further refines and supplements the first embodiment. The method additionally includes: generating and displaying a simulation digital model of the monitoring target according to the shooting information, shooting time period and position information of each camera; receiving a control instruction for the simulation digital model; adjusting the angle of the simulation digital model according to the control instruction; and adjusting the cameras' shooting parameters according to the control instruction, where the shooting parameters include at least one of a shooting mode, shooting time, shooting sequence and shooting angle.
As shown in fig. 2, the monitoring method of the robot includes the following steps:
and step 210, acquiring shooting information of the monitoring target sent by each camera in the camera array.
The camera array comprises a preset number of cameras, the position information of each camera is different, and shooting information of the monitoring target is collected based on different shooting angles. The photographing time of the photographing information may be different.
Step 220, receiving the shooting information of each camera, and determining the position information of the corresponding camera according to the shooting information.
The shooting information comprises a camera code, and the position information of the camera can be determined according to the camera code.
Step 230: determining the shooting angle and shooting distance of each camera relative to the monitoring target from the shooting information and the camera's position information.
Specifically, the robot position of the monitored target can be recognized from the shooting information, and the shooting distance is then determined from the camera's position information; the monitored target can be recognized with any existing image recognition algorithm. The shooting angle can be determined from the camera's position information: the correspondence between each camera's position information, camera code and shooting angle may be stored in advance, and the shooting angle determined from the camera code or the position information.
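A minimal sketch of this geometry, assuming the camera's stored position and the recognized robot position share one ground-plane coordinate frame (an illustrative simplification):
```python
import math

# Sketch: shooting distance and bearing from camera and target positions.
def shooting_geometry(camera_xy, target_xy):
    dx = target_xy[0] - camera_xy[0]
    dy = target_xy[1] - camera_xy[1]
    distance = math.hypot(dx, dy)                     # shooting distance
    angle = math.degrees(math.atan2(dy, dx)) % 360.0  # bearing toward target
    return distance, angle

print(shooting_geometry((3.0, 0.0), (0.0, 0.0)))  # (3.0, 180.0)
```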
Step 240: determining the pose parameters of the monitoring target according to the shooting information, the shooting angle and the shooting distance of each camera.
Wherein the pose parameters include at least one of a robot angle and a robot position of the monitoring target.
Step 250: generating and displaying a simulation digital model of the monitoring target according to the shooting information, shooting time period and position information of each camera.
The simulation digital model is a virtual model of the monitoring target and is used for describing the working state of the monitoring target.
Specifically, the shooting information consists of captured images or videos. The shooting time of each piece of shooting information is determined first, and the pieces are then stitched or synthesized in time order. During stitching or synthesis, images can be aligned and joined using the repeated features of adjacent captured images or videos together with the cameras' shooting angles. The images or videos captured by all cameras over a period of time are stitched or synthesized into a monitoring video of the target over that period; this monitoring footage is the simulation digital model, and it contains the target's state at each shooting time within the period.
Specifically, the simulation digital model can be presented as a video, a dynamic web page, a mobile application, or in other forms.
Specifically, if the visible range and installation position of each camera are fixed, the position information of a camera can be determined directly from its camera code, and the shooting information collected by the cameras is stitched according to that position information, thereby generating and displaying the simulation digital model of the monitoring target.
For example, assume the monitoring target is a construction robot with a working range of 360°, and the camera array includes 6 cameras evenly distributed on a circle around it, i.e., each camera is at the same straight-line distance from the robot. The field of view of each camera is 90°, the fields of view of two adjacent cameras overlap by 15°, and the cameras are numbered 01, 02, 03, 04, 05 and 06. The shooting sequence is: camera 01 captures a monitoring image (shooting information) of the construction robot first, camera 02 captures 1 s later, and so on at 1 s intervals, yielding 6 monitoring images with different positions, angles and shooting times. The 6 monitoring images can then be synthesized into a simulation digital model of the construction robot according to the camera numbers in the images, the installation positions of the cameras and the shooting times. The specific process is as follows: first obtain the monitoring images of cameras 01 and 02 and stitch them according to their positional relationship and the repeated field of view; common features can also be identified by image recognition and the two images aligned and stitched on those features. Proceeding in the same way until all the monitoring images are stitched yields an all-around 6 s monitoring simulation model.
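A simplified stitching sketch, assuming same-height views whose adjacent columns overlap by a fixed fraction (a stand-in for the feature-based alignment described above):
```python
import numpy as np

# Sketch: join adjacent views by blending an assumed overlap band; the
# default fraction mirrors the 15° overlap of 90° views in the example.
def stitch_pair(left, right, overlap_frac=15 / 90):
    ov = int(right.shape[1] * overlap_frac)
    blend = (left[:, -ov:].astype(float) + right[:, :ov].astype(float)) / 2
    return np.hstack([left[:, :-ov], blend.astype(left.dtype), right[:, ov:]])

def stitch_sequence(images_in_camera_order):
    panorama = images_in_camera_order[0]      # camera 01 first
    for img in images_in_camera_order[1:]:    # then 02..06 in shooting order
        panorama = stitch_pair(panorama, img)
    return panorama
```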
Step 260: receiving a control instruction for the simulation digital model.
The control instruction may include a rotation instruction, an enlargement instruction, a reduction instruction, a translation instruction, an instruction to change shooting parameters, and the like; it is mainly used to change the displayed view of the simulation digital model and to control the cameras' shooting parameters.
Specifically, the control instruction may be input through a keyboard or mouse, through a touch screen, or by voice, gesture or the like; embodiments of the present invention do not limit the input mode of the control instruction.
For example, the control instruction may be input through a touch screen: sliding up, down, left or right translates the view accordingly, and a double tap may correspond to an enlargement instruction. Voice commands are also possible, such as a "rotate 90°" rotation command. Input can also take the form of code, to control the display interface of the simulation model and the cameras' shooting parameters such as focal length, exposure and resolution.
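An illustrative dispatcher for such instructions is sketched below; the method names on `view` and `cameras` are hypothetical, not an API defined by the patent:
```python
# Sketch: map control instructions to actions on the simulation digital
# model (view) and the camera array; all method names are assumptions.
def dispatch(instruction, view, cameras):
    if instruction in ("swipe_up", "swipe_down", "swipe_left", "swipe_right"):
        view.translate(direction=instruction.split("_")[1])  # translation
    elif instruction == "double_tap":
        view.zoom(factor=1.5)                                # enlargement
    elif instruction.startswith("rotate"):                   # e.g. "rotate 90°"
        view.rotate(degrees=float(instruction.split()[1].rstrip("°")))
    elif instruction.startswith("focal"):                    # e.g. "focal 35"
        cameras.set_focal_length(float(instruction.split()[1]))
```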
Step 270: adjusting the angle of the simulation digital model according to the control instruction.
Step 280: adjusting the cameras' shooting parameters according to the control instruction.
Here the shooting parameters include at least one of a shooting mode, shooting time, shooting sequence and shooting angle.
According to the technical scheme of this embodiment, monitoring images of the target are acquired by a camera array with different shooting positions, angles and times; the images from all times and positions are synthesized or stitched, the target's pose parameters are calculated, and a simulation digital model is generated at the same time. The working state of the monitored target can thus be viewed from any angle, realizing multi-angle, all-around monitoring of the robot and enlarging its monitoring range. The simulation digital model display is intuitive and vivid, making it easy to learn the robot's working state and find problems in time, which improves the robot's operational safety.
Example three
Fig. 3A is a schematic structural diagram of a monitoring system of a robot according to a third embodiment of the present invention, and as shown in fig. 3A, the monitoring system includes: a camera array 310 and a monitoring dispatch center 320.
The camera array 310 comprises a preset number of cameras, each of which acquires shooting information of the monitoring target based on a different shooting angle; the monitoring and scheduling center 320 is in communication connection with each camera and executes the robot monitoring method provided by any embodiment of the invention.
According to the technical scheme of this embodiment, a plurality of cameras at different positions and angles are arranged around the monitored target, so the target is monitored in an all-around manner, and its pose parameters are determined from the shooting information collected by the cameras and the camera positions. The robot is thus monitored from multiple angles and in all directions, its monitoring range is enlarged, work problems are discovered more promptly, and its normal operation is ensured.
Specifically, the monitoring and scheduling center 320 may be in communication connection with each camera of the camera array 310 through a wired or wireless network.
Optionally, the cameras in the camera array 310 are circumferentially and uniformly distributed with the monitoring target as a center.
For example, Fig. 3B is a schematic structural diagram of the camera array in the third embodiment of the present invention. As shown in Fig. 3B, the camera array of the monitoring system consists of 12 cameras 311 evenly distributed around the monitoring target 330, each at the same straight-line distance from the target 330.
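For illustration, the layout of Fig. 3B can be sketched as follows; the radius and target coordinates are assumed values:
```python
import math

# Sketch: n cameras at equal angular spacing on a circle of radius r
# around the monitoring target, as in Fig. 3B.
def camera_positions(n=12, r=3.0, target=(0.0, 0.0)):
    cx, cy = target
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n)) for k in range(n)]

for k, (x, y) in enumerate(camera_positions(), start=1):
    print(f"camera {k:02d}: ({x:+.2f}, {y:+.2f})")  # 30° apart, equal distance
```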
Optionally, the number of cameras in the camera array 310 is determined by the shooting angle range of the monitoring target and the shooting range of each camera.
Optionally, the camera array 310 is further configured to:
receiving a shooting instruction from the monitoring and scheduling center, and acquiring shooting information of the monitoring target according to the shooting instruction, wherein the shooting instruction comprises the shooting time of the camera.
Optionally, the monitoring and scheduling center 320 is further configured to:
and generating a shooting instruction and sending the shooting instruction to the camera array.
Optionally, the monitoring and scheduling center 320 includes:
the position information determining module is used for receiving the shooting information of each camera and determining the position information of the corresponding camera according to the shooting information; the shooting angle and distance determining module is used for determining the shooting angle and the shooting distance of the camera relative to the monitoring target according to the shooting information and the position information of the camera; and the robot pose determining module is used for determining pose parameters of the monitoring target according to the shooting information, the shooting angle and the shooting distance of each camera.
Optionally, the monitoring and scheduling center 320 further includes:
and the simulation model generation module is used for generating and displaying a simulation digital model of the monitoring target according to the shooting information, the shooting time period and the position information of each camera.
Optionally, the monitoring and scheduling center 320 is further configured to:
receiving a control instruction of the simulation digital model;
adjusting the angle of the simulation digital model according to the control instruction; and/or,
adjusting the shooting parameters of the camera according to the control instruction, where the shooting parameters include at least one of a shooting mode, shooting time, a shooting sequence and a shooting angle.
The robot monitoring system provided by the embodiment of the invention can execute the robot monitoring method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 4 is a schematic structural diagram of a robot monitoring device according to the fourth embodiment of the present invention. As shown in Fig. 4, the device includes a processor 410, a memory 420, an input device 430 and an output device 440; the number of processors 410 in the device may be one or more, and one processor 410 is taken as an example in Fig. 4. The processor 410, memory 420, input device 430 and output device 440 in the device may be connected by a bus or in other ways; connection by a bus is taken as an example in Fig. 4.
The memory 420 serves as a computer-readable storage medium for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the robot monitoring method according to the embodiment of the present invention. The processor 410 executes various functional applications of the device and data processing by executing software programs, instructions and modules stored in the memory 420, that is, implements the robot monitoring method described above.
The memory 420 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 420 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 420 may further include memory located remotely from the processor 410, which may be connected to the device/terminal/server via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input means 430 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the apparatus. The output device 440 may include a display device such as a display screen.
Example five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a method of monitoring a robot, the method including:
acquiring shooting information of a monitoring target sent by each camera in a camera array, wherein the camera array comprises a preset number of cameras, the position information of each camera is different, and the shooting information of the monitoring target is acquired based on different shooting angles;
and determining pose parameters of the monitoring target according to the position information of each camera and the acquired shooting information, wherein the pose parameters comprise at least one of a robot angle and a robot position of the monitoring target.
Of course, the storage medium containing the computer-executable instructions provided by the embodiments of the present invention is not limited to the method operations described above, and may also perform related operations in the monitoring method for a robot provided by any embodiments of the present invention.
From the above description of the embodiments, those skilled in the art will clearly understand that the technical solutions of the embodiments of the present invention can be implemented by software plus the necessary general-purpose hardware, and certainly also by hardware alone, though the former is often the better implementation. Based on this understanding, the technical solutions of the embodiments of the present invention may be embodied in the form of a software product stored in a computer-readable storage medium, such as a floppy disk, read-only memory (ROM), random-access memory (RAM), flash memory, a hard disk or an optical disk, and include several instructions that cause a computer device (which may be a personal computer, a server or a network device) to execute the methods of the embodiments of the present invention.
It should be noted that, in the embodiment of the monitoring system for a robot, the units and modules included in the monitoring system are only divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (9)

1. A method of monitoring a robot, comprising:
acquiring shooting information of a monitoring target sent by each camera in a camera array, wherein the camera array comprises a preset number of cameras, the position information of each camera is different, and the shooting information of the monitoring target is acquired based on different shooting angles; the shooting information of the cameras is obtained by shooting in sequence at a set time interval;
determining pose parameters of the monitoring target according to the position information of each camera and the acquired shooting information, wherein the pose parameters comprise at least one of a robot angle and a robot position of the monitoring target;
wherein determining the pose parameters of the monitoring target according to the position information of each camera and the acquired shooting information comprises: receiving the shooting information of each camera, and determining the position information of the corresponding camera according to the shooting information; determining the shooting angle and the shooting distance of the camera relative to the monitoring target according to the shooting information and the position information of the camera; and determining the pose parameters of the monitoring target according to the shooting information, the shooting angle and the shooting distance of each camera; the method further comprises: determining working state parameters of the monitoring target according to the position information, the shooting time and the collected shooting information of each camera, wherein the working state parameters comprise a motion track or a working-arm operation curve.
2. The method of claim 1, further comprising:
and sending a shooting instruction to each camera to control the camera to acquire shooting information of the monitoring target according to the shooting instruction, wherein the shooting instruction comprises shooting time of the camera.
3. The method of claim 1, further comprising:
and generating and displaying a simulation digital model of the monitoring target according to the shooting information, the shooting time period and the position information of each camera.
4. The method of claim 3, further comprising:
receiving a control instruction of the simulation digital model;
adjusting the angle of the simulation digital model according to the control instruction; and/or,
adjusting the shooting parameters of the camera according to the control instruction, wherein the shooting parameters comprise at least one of a shooting mode, shooting time, a shooting sequence and a shooting angle.
5. A monitoring system of a robot, comprising a camera array and a monitoring and scheduling center, wherein the camera array comprises a preset number of cameras, each camera acquiring shooting information of a monitoring target based on a different shooting angle; the shooting information of the cameras is obtained by shooting in sequence at a set time interval;
wherein the monitoring and scheduling center comprises: a position information determining module, configured to receive the shooting information of each camera and determine the position information of the corresponding camera according to the shooting information; a shooting angle and distance determining module, configured to determine the shooting angle and the shooting distance of the camera relative to the monitoring target according to the shooting information and the position information of the camera; and a robot pose determining module, configured to determine the pose parameters of the monitoring target according to the shooting information, the shooting angle and the shooting distance of each camera; the monitoring and scheduling center is in communication connection with each camera and executes the method according to any one of claims 1-4.
6. The system of claim 5, wherein the cameras in the array of cameras are evenly circumferentially distributed about the monitored target.
7. The system according to claim 5, wherein the number of cameras in the camera array is determined by the shooting angle range of the monitoring target and the shooting range of each camera.
8. A monitoring device of a robot, characterized in that the device comprises:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the robot monitoring method according to any one of claims 1-4.
9. A storage medium containing computer-executable instructions for performing a method of monitoring a robot according to any one of claims 1-4 when executed by a computer processor.
CN202010093396.4A 2020-02-14 2020-02-14 Robot monitoring method, system, equipment and storage medium Active CN111246181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010093396.4A CN111246181B (en) 2020-02-14 2020-02-14 Robot monitoring method, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010093396.4A CN111246181B (en) 2020-02-14 2020-02-14 Robot monitoring method, system, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111246181A (en) 2020-06-05
CN111246181B (en) 2021-08-10

Family

ID=70878277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010093396.4A Active CN111246181B (en) 2020-02-14 2020-02-14 Robot monitoring method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111246181B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113159022B (en) * 2021-03-12 2023-05-30 杭州海康威视系统技术有限公司 Method and device for determining association relationship and storage medium
CN113573021A (en) * 2021-07-26 2021-10-29 嘉应学院 Method for monitoring surrounding conditions of orchard transport vehicle
CN114384568A (en) * 2021-12-29 2022-04-22 达闼机器人有限公司 Position measuring method and device based on mobile camera, processing equipment and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120162411A1 (en) * 2010-12-23 2012-06-28 Electronics And Telecommunications Research Institute Method and apparatus for operation of moving object in unstructured environment
CN105307115A (en) * 2015-08-07 2016-02-03 浙江海洋学院 Distributed vision positioning system and method based on action robot
CN105425791A (en) * 2015-11-06 2016-03-23 武汉理工大学 Swarm robot control system and method based on visual positioning
CN106652021A (en) * 2016-12-09 2017-05-10 南京理工大学 3D reconstruction method for work environment of hot-line robot
CN206326604U (en) * 2016-12-26 2017-07-14 东莞理工学院 Robot motion's update the system based on computer vision
CN110719392A (en) * 2019-11-08 2020-01-21 广州酷狗计算机科技有限公司 Movable image pickup apparatus, image pickup control method, control apparatus, and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110267007A (en) * 2019-06-28 2019-09-20 Oppo广东移动通信有限公司 Image processing method, device, server and storage medium


Also Published As

Publication number Publication date
CN111246181A (en) 2020-06-05

Similar Documents

Publication Publication Date Title
CN111246181B (en) Robot monitoring method, system, equipment and storage medium
JP6725727B2 (en) Three-dimensional robot work cell data display system, display method, and display device
CN110587600A (en) Point cloud-based autonomous path planning method for live working robot
CN105096382A (en) Method and apparatus for associating actual object information in video monitoring image
CN109032348A (en) Intelligence manufacture method and apparatus based on augmented reality
US20110109628A1 (en) Method for producing an effect on virtual objects
CN111429518B (en) Labeling method, labeling device, computing equipment and storage medium
CN109816730A (en) Workpiece grabbing method, apparatus, computer equipment and storage medium
CN106326678A (en) Sample room experiencing method, equipment and system based on virtual reality
CN111696216A (en) Three-dimensional augmented reality panorama fusion method and system
JP6430079B1 (en) Monitoring system and monitoring method
CN105427338A (en) Moving object tracking method and device
CN110134117A (en) A kind of mobile robot method for relocating, mobile robot and electronic equipment
CN113436311A (en) House type graph generation method and device
CN110740545B (en) On-site light spot arrangement method and system, storage medium and lamp control equipment
CN111710032B (en) Method, device, equipment and medium for constructing three-dimensional model of transformer substation
US11176705B2 (en) Method for optimizing camera layout for area surveillance and apparatus employing the method
CN114092646A (en) Model generation method and device, computer equipment and storage medium
CN115982824A (en) Construction site worker space management method and device, electronic equipment and storage medium
Yan et al. Intergrating UAV development technology with augmented reality toward landscape tele-simulation
CN114048541B (en) Asset space marking method and system based on digital twins
CN115147356A (en) Photovoltaic panel inspection positioning method, device, equipment and storage medium
Huang et al. Design and application of intelligent patrol system based on virtual reality
Liu et al. System development of an augmented reality on-site BIM viewer based on the integration of SLAM and BLE indoor positioning
KR20170012717A (en) Method and apparatus for generating location information based on video image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant