CN108781258B - Environment information determination method, device, robot and storage medium - Google Patents

Environment information determination method, device, robot and storage medium

Info

Publication number
CN108781258B
Authority
CN
China
Prior art keywords
robot
image
video device
video
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201880001148.3A
Other languages
Chinese (zh)
Other versions
CN108781258A (en
Inventor
骆磊
于智远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Robotics Co Ltd filed Critical Cloudminds Robotics Co Ltd
Publication of CN108781258A publication Critical patent/CN108781258A/en
Application granted granted Critical
Publication of CN108781258B publication Critical patent/CN108781258B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems for receiving images from a single remote source
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to the field of robotics, and in particular, to a method and an apparatus for determining environmental information, a robot, and a storage medium. The environment information determination method comprises the following steps: acquiring an image of the surrounding environment shot by at least one video device, wherein the distance between the video device and the robot is not more than a preset value; and expanding the peripheral environment information of the robot according to the images of the peripheral environment shot by each video device.

Description

Environment information determination method, device, robot and storage medium
Technical Field
The present application relates to the field of robotics, and in particular, to a method and an apparatus for determining environmental information, a robot, and a storage medium.
Background
At present, most mobile robots have visual ability, can move, distinguish objects, search paths and the like by themselves, and can react and give early warning immediately even in the face of possible dangers. However, the robot vision is often limited to a certain viewing angle, and robots having a 360 ° viewing angle are still relatively few.
In the process of implementing the present application, the inventors found that even if the robot itself has a 360 ° view angle, the effective view angle is limited when the robot is shielded, and the capability of the robot may be defective or may not be fully exhibited in some scenes. Therefore, how to enhance the visual ability of the robot is a problem to be considered.
Disclosure of Invention
The technical problem to be solved by some embodiments of the present application is how to enhance the visual ability of a robot.
An embodiment of the present application provides an environment information determination method, including: acquiring an image of the surrounding environment shot by at least one video device, wherein the distance between the video device and the robot is not more than a preset value; and expanding the peripheral environment information of the robot according to the images of the peripheral environment shot by each video device.
An embodiment of the present application also provides an environment information determination apparatus, including an acquisition module and a processing module. The acquisition module is used for acquiring images of the surrounding environment captured by at least one video device, where the distance between the video device and the robot does not exceed a preset value; the processing module is used for expanding the surrounding environment information of the robot according to the images of the surrounding environment captured by each video device.
An embodiment of the present application also provides a robot comprising at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of determining environmental information as in the above embodiments.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program, which when executed by a processor implements the environment information determination method in any of the above embodiments.
Compared with the prior art, in the environment information determining method provided by the embodiments of the present application, the image taken by at least one video device around the robot is obtained, and the environment information around the robot is expanded based on the image taken by the video device, so that the effective visual angle of the robot is enlarged, the visual ability of the robot is enhanced, and the robot can obtain more comprehensive environment information.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements, and the figures are not to scale unless otherwise specified.
Fig. 1 is a flowchart of an environment information determination method in a first embodiment of the present application;
fig. 2 is a flowchart of a processing method of an image of the surrounding environment taken by the robot for each video device in the first embodiment of the present application;
FIG. 3 is a schematic diagram of the positional relationship between the robot and the video device in the first embodiment of the present application;
fig. 4 is a top view of an imaging angle of view of a camera of the video apparatus in the first embodiment of the present application;
fig. 5 is a front view of each frame of a picture taken by a camera of the video apparatus in the first embodiment of the present application;
fig. 6 is a flowchart of an environment information determination method in a second embodiment of the present application;
FIG. 7 is a schematic illustration of the positions of robots, natural persons and other objects in a second embodiment of the present application;
fig. 8 is a schematic structural diagram of an environment information determination apparatus in a third embodiment of the present application;
fig. 9 is a schematic configuration diagram of an environment information determination apparatus in a fourth embodiment of the present application;
fig. 10 is a schematic structural view of a robot in a fifth embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, some embodiments of the present application are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit it. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the various embodiments in order to provide a better understanding of the present application; however, the technical solutions claimed in the present application can be implemented without these technical details, and with various changes and modifications based on the following embodiments.
The environment information determination method provided by the following embodiments of the application makes use of the strong processing capability of the robot (for example, multiple groups of image information can be processed in parallel) combined with its strong interconnection capability, so that the robot can obtain a viewing angle beyond the limit of its own shooting capability. This greatly extends the robot's own visual capability, makes the robot more competent in the corresponding scenarios with the support of more data, exerts the advantages of the robot relative to humans to the greatest extent, makes up for human deficiencies, and provides a better user experience.
The video device referred to in the following embodiments of the present application may be any device with a certain image capturing capability, such as a robot with vision capability, a monitoring camera, etc.
A first embodiment of the present application relates to an environment information determination method, and the execution subject of the method may be a robot or another device that establishes a communication connection with the robot, where a robot is a smart device with autonomous movement capability. In this embodiment, the case where the execution subject is the robot itself is taken as an example. The specific flow of the environment information determination method is shown in fig. 1 and includes the following steps:
step 101: an image of a surrounding environment captured by at least one video device is acquired.
In a specific implementation, the distance between the video equipment and the robot does not exceed a preset value. The preset value may be determined according to the image processing capability of the robot, or may be determined according to other information such as the communication capability of the robot.
It should be noted that the robot may directly establish a communication connection with at least one video device to obtain the image of the surrounding environment captured by the video device, or may establish a connection with at least one video device through a cloud or through other video devices. The present embodiment does not limit the way in which the robot acquires the image of the surrounding environment captured by the at least one video device.
Step 102: and expanding the peripheral environment information of the robot according to the images of the peripheral environment shot by each video device.
In specific implementation, the peripheral environment information obtained by the robot through shooting is expanded, and a specific processing process is shown in fig. 2.
Step 201: processing images of the surrounding environment captured by each video device respectively: the method comprises the steps of determining the position of the robot in an image of the surrounding environment shot by video equipment, taking the determined position as image positioning information of the robot, and acquiring the surrounding environment information of the robot monitored by the video equipment according to the image positioning information.
In specific implementations, there are various ways to determine the positioning information of the robot, including but not limited to the following two specific implementations:
firstly, according to the physical position of the robot, the physical position of the video equipment and the parameters of the camera with the video equipment opening authority, the position of the robot in the image of the surrounding environment shot by the camera with the video equipment opening authority is determined, and the determined position is used as the image positioning information of the robot.
It should be noted that, in the case that there are multiple cameras, the video device may open the rights of some or all of the cameras.
Specifically, when there is no height difference between the video device and the robot and the video device is in a head-up state, the robot acquires the parameters of the camera for which the video device has opened the authority, including: the horizontal-plane direction of the optical axis of the camera and the lateral viewing angle information of the camera. After acquiring these parameters, the robot determines, according to the physical position of the robot, the physical position of the video device and the horizontal-plane direction of the optical axis of the camera, the value of the deviation angle of the robot relative to the optical axis in the horizontal plane within the viewing angle of the camera for which the video device has opened the authority.
For example, the robot may determine the value of the deviation angle of the robot, within the viewing angle of the camera for which the video device has opened the authority, relative to the optical axis in the horizontal plane according to the formula

β = arcsin(y/√(x² + y²)) − α

wherein α represents the included angle between the horizontal-plane direction of the optical axis of the camera with the video device opening authority and the abscissa axis, x represents the difference between the abscissa of the physical position of the video device and the abscissa of the physical position of the robot, y represents the difference between the ordinate of the physical position of the video device and the ordinate of the physical position of the robot, and β represents the value of the deviation angle in the horizontal plane.
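A minimal computational sketch of this formula (in Python, with hypothetical variable names; an illustration rather than part of the claimed method) could be:

```python
import math

def deviation_angle(robot_pos, camera_pos, optical_axis_angle):
    """Deviation angle beta (radians) of the robot relative to the camera optical
    axis in the horizontal plane.

    robot_pos, camera_pos: (x, y) physical positions in the same coordinate system.
    optical_axis_angle: angle alpha (radians) between the horizontal-plane direction
    of the optical axis and the abscissa axis.
    """
    x = camera_pos[0] - robot_pos[0]   # difference of abscissas
    y = camera_pos[1] - robot_pos[1]   # difference of ordinates
    d = math.hypot(x, y)               # distance between robot and video device
    # From sin(alpha + beta) = y / d:
    return math.asin(y / d) - optical_axis_angle
```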
Then, the robot determines first position reference information in the image of the surrounding environment shot by the camera with the open authority according to the value of the deviation angle on the horizontal plane and the transverse visual angle information of the camera with the open authority of the video equipment. Wherein the first position reference information is: the proportion of the amount laterally shifted from the center of the image to the left or right in the lateral direction of the image.
For example, the robot determines the first position reference information of the robot in the image of the surrounding environment captured by the camera for which the video device has opened the authority according to the formula M = tan β/(2 × tan(γ/2)), where β denotes the value of the deviation angle in the horizontal plane, γ denotes the lateral viewing angle of the camera for which the video device has opened the authority, and M denotes the first position reference information.
Finally, the robot determines the position of the robot in the image of the surrounding environment captured by the camera for which the video device has opened the authority according to the first position reference information. For example, if the first position reference information is the proportion, in the lateral direction of the image, of the amount shifted leftward from the center of the image, the robot shifts laterally leftward from the center line of the image according to the first position reference information to obtain a first reference line; if it is the proportion of the amount shifted rightward from the center of the image, the robot shifts laterally rightward from the center line of the image to obtain the first reference line. Then, taking the first reference line as the center, the robot shifts laterally leftward by a preset value to obtain a second reference line and laterally rightward by the preset value to obtain a third reference line, detects features of the robot in the image area defined by the second reference line and the third reference line, and determines the position of the robot in the image according to the detected features.
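Continuing the sketch (Python, hypothetical names; the sign convention for M and the use of pixel columns are assumptions of this illustration), the first position reference information and the search region between the second and third reference lines could be derived as follows:

```python
import math

def first_position_reference(beta, lateral_fov):
    """First position reference information M: the proportion of the lateral shift
    from the image center, relative to the image width."""
    return math.tan(beta) / (2.0 * math.tan(lateral_fov / 2.0))

def search_region(image_width, M, margin_px):
    """Pixel columns of the first/second/third reference lines.

    A positive M is assumed here to mean a shift to the right of the image center;
    margin_px is the preset value shifted to each side of the first reference line."""
    center = image_width / 2.0
    first = center + M * image_width                 # first reference line
    second = max(0.0, first - margin_px)             # second reference line (left bound)
    third = min(float(image_width), first + margin_px)  # third reference line (right bound)
    return first, second, third
```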
For example, in the process of positioning robot A, in order to avoid recognition errors caused by many other robots similar to robot A being present in its vicinity, robot A can be quickly and accurately recognized in combination with the following ways:
In mode a, robot A intentionally performs a specific motion, and the robot performing that specific motion in the image area defined by the second reference line and the third reference line is recognized as robot A.
In mode b, robot A flashes or sequentially lights up signal lamps on its head or body according to a random light-and-dark sequence and/or color sequence, and the robot with the same lighting pattern as robot A in the image area defined by the second reference line and the third reference line is identified as robot A. For example, robot A lights up a red light for 0.2 s, goes dark for 0.1 s, lights up red for 0.1 s, goes dark for 0.5 s, and so on; or lights up in a pattern such as 0.1 s red light, 0.15 s green light, 0.1 s orange light. The specific lighting pattern can be set in countless ways, as long as robot A can be distinguished (a minimal matching sketch is given after mode c below).
Mode c: a combination of mode a and mode b.
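The following is a minimal sketch of such pattern matching (in Python; the segment representation, function name and tolerance are hypothetical assumptions for illustration, not part of the patented method):

```python
def matches_blink_pattern(observed, expected, tolerance=0.05):
    """Compare the leading segments of an observed (duration_s, color) sequence,
    extracted from the video, against robot A's agreed lighting pattern.

    observed, expected: lists of (duration in seconds, color string).
    tolerance: allowed timing deviation in seconds."""
    if len(observed) < len(expected):
        return False
    for (obs_t, obs_c), (exp_t, exp_c) in zip(observed, expected):
        if obs_c != exp_c or abs(obs_t - exp_t) > tolerance:
            return False
    return True

# Example: the pattern "0.2 s red on, 0.1 s off, 0.1 s red on, 0.5 s off"
pattern_a = [(0.2, "red"), (0.1, "off"), (0.1, "red"), (0.5, "off")]
```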
It should be noted that the physical position of the robot or the physical position of the video device can be determined by the Global Positioning System (GPS) or BeiDou module of the robot or the video device itself. In the process of determining the physical position of the robot or the video device, the robot or the video device may further combine base station positioning information or Wireless Fidelity (WiFi) positioning information. The present embodiment does not limit the way in which the robot or video device acquires its physical location.
The following describes an example of a process for determining positioning information of a robot by using the first implementation manner, with reference to an actual scenario.
The positional relationship between the robot and the video device is shown in fig. 3, a top view of the imaging angle of view of the camera for which the video device has opened the authority is shown in fig. 4, and a front view of one frame of picture captured by that camera is shown in fig. 5. In figs. 3, 4 and 5, the robot is indicated by the letter A and the video device by the letter B. In fig. 3, robot A takes its own physical position as the origin O, the due-east direction of robot A as the positive direction of the abscissa axis (X), and the due-north direction as the positive direction of the ordinate axis (Y); the physical position of the video device B obtained by robot A is (x, y), and the distance between robot A and the video device B is d. The included angle between the horizontal-plane direction of the optical axis of the camera for which the video device B has opened the authority and the abscissa axis is α, and the value of the deviation angle of robot A relative to the optical axis in the horizontal plane within the viewing angle of that camera is β. As can be seen from fig. 3, the relationships among α, β, x, y and d are:

sin(α + β) = y/d, d = √(x² + y²)

From the above two relations, we can get:

β = arcsin(y/√(x² + y²)) − α

which determines the value of the deviation angle of robot A in the horizontal plane with respect to the optical axis within the viewing angle of the camera for which the video device B has opened the authority.
In fig. 4, in the imaging picture of the video device B, a straight line l1 passing through robot A along the lateral direction of the imaging picture intersects the optical axis at a point P in the horizontal plane, and Q is the intersection point of the straight line l1 and the longitudinal boundary of the imaging frame. β, PA and PB satisfy the relation tan β = PA/PB, and γ, PQ and PB satisfy the relation tan(γ/2) = PQ/PB; from these two relations, PA/(2 × PQ) = tan β/[2 × tan(γ/2)]. PA/(2 × PQ) is exactly the first position reference information M, i.e., the proportion of the amount of lateral shift to the left (or right) from the center of the image in the lateral direction of the image, so M = tan β/(2 × tan(γ/2)). After obtaining the first position reference information, robot A shifts laterally leftward (or rightward) from the center line l2 of the image according to the first position reference information to obtain a first reference line l3; then, taking the first reference line l3 as the center, it shifts laterally leftward by a preset value to obtain a second reference line l4 and laterally rightward by the preset value to obtain a third reference line l5, as shown in fig. 5. Here l6 to l9 are the boundaries of the front view of one frame of picture captured by the camera for which the video device has opened the authority. In the image area defined by the second reference line l4 and the third reference line l5, the features of robot A are detected, and the position of robot A in the image is determined according to the detected features.
It should be noted that, when there is a height difference between the robot and the video device, and/or the video device is not in a head-up state (e.g., it is looking down or looking up), the robot may determine the longitudinal offset of the robot in the image of the surrounding environment captured by the video device in a similar manner, according to one or more of the depression angle information, elevation angle information, longitudinal viewing angle information, and height information of the camera for which the video device has opened the authority, or by combining distance measurement and the like. The robot then determines its position in the image by combining the lateral offset and the longitudinal offset, which is not described in detail here.
Secondly, the robot detects the characteristics of the robot in the image of the surrounding environment shot by the video equipment, and the position where the characteristics of the robot are located is determined as the image positioning information of the robot. Specifically, the robot locks the position of the feature of the robot in the image of the surrounding environment shot by the camera of the video device opening authority through an image tracking technology, and uses the position as the image positioning information of the robot.
The feature of the robot may be any one or any combination of a contour feature of the robot, an action feature of the robot (e.g., a current action or a deliberate action of the robot such as a head-up action), a bright-dark feature of a signal light of the robot (e.g., a signal light of the robot flashes according to a random bright-dark sequence and/or a color sequence of the signal light of the robot, and/or a signal light of the robot lights up in sequence), and other features.
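As a simple illustrative stand-in for this image-tracking step, one could use template matching (the choice of OpenCV and of template matching here is an assumption of this sketch, not the technique specified by the patent):

```python
import cv2

def locate_robot_by_template(frame, robot_template, threshold=0.7):
    """Locate the robot's features in a frame captured by the video device.

    frame, robot_template: grayscale images (numpy arrays); robot_template is a
    previously stored appearance of the robot.
    Returns the (x, y) top-left pixel of the best match, or None if below threshold."""
    result = cv2.matchTemplate(frame, robot_template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc if max_val >= threshold else None
```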
It should be noted that, in practical applications, if the processing capability of the robot is not strong, the robot may be set to only acquire an image including the robot captured by the video device; if the robot has strong processing capability and wants to obtain more surrounding environment information, the robot may be set to obtain all images captured by the video device, and the embodiment does not limit the type of the images captured by the video device.
Step 202: and expanding the peripheral environment information of the robot according to the peripheral environment information of the robot monitored by each video device.
The peripheral environment information of the robot may be obtained by the robot itself photographing the surrounding environment and directly using the image as the peripheral environment information, or by extracting specific environment parameters from the image and using the extracted parameters as the peripheral environment information. Of course, the peripheral environment information may also be acquired by means other than the robot's own shooting, for example, from information described in an electronic map.
Specifically, after the position of the robot in the image of the surrounding environment captured by the video device is determined, the image of the surrounding environment at the corresponding position can be selectively retrieved, and the surrounding environment information of the robot can be obtained.
For example, when the line of sight of the robot is blocked by an obstacle, environment information that the robot is blocked by the obstacle is acquired from the position of the robot in the image of the surrounding environment captured by the video device.
Compared with the prior art, the method and the device have the advantages that the images shot by at least one video device around the robot are obtained, and the surrounding environment information of the robot is expanded based on the images shot by the video device, so that the effective visual angle of the robot is enlarged, the visual ability of the robot is enhanced, and the robot can obtain more comprehensive surrounding environment information.
The second embodiment of the present application relates to an environment information determining method, and the present embodiment is a further improvement of the first embodiment, and the specific improvement is as follows: other relevant steps are added before step 101 and step 102, respectively.
As shown in fig. 6, the present embodiment includes steps 301 to 307, wherein steps 304 and 306 are substantially the same as steps 101 and 102, respectively, in the first embodiment, and are not described in detail herein, and the following differences are mainly described:
step 301: it is determined that the line of sight of the robot is obstructed by an obstacle.
It is worth mentioning that the environment information determination method is executed only when the robot determines that its line of sight is blocked, which avoids the waste of resources caused by executing the method when the robot can already obtain sufficient surrounding environment information by its own capability.
It should be noted that the blocking of the line of sight of the robot by the obstacle is only a specific triggering manner, and in application, other triggering manners may also be adopted, for example, a subsequent process is triggered according to a user operation.
Step 302: and respectively establishing connection with each video device, and acquiring the physical position of each video device.
It should be noted that, when the triggering condition of step 301 is not needed, it may be set that, during the robot's travel, if the robot detects that its distance to a video device is smaller than or equal to the preset value, the robot directly establishes a point-to-point connection with the video device, or establishes a connection with the video device through a cloud, and reads the physical location of the connected video device (determined by GPS, BeiDou, or the like) and the parameters of the camera for which the authority has been opened. Of course, the physical location of the video device may also be determined by the robot, for example, by using base station positioning, WiFi positioning, or the like.
Robots and video devices with video capturing capability all have a uniform interface, and when their own conditions and permissions allow, other robots or video devices can read part or all of their visual information.
Step 303: and determining that the robot is in the sight line range of the video equipment.
Specifically, whether the robot is in the sight line range of the camera with the open authority of the video equipment is determined according to the physical position of the video equipment and the parameters of the camera with the open authority of the video equipment.
For example, according to the optical axis direction and the view angle (FOV) information of the camera with open authority, the robot is determined to be within the field of view of the video equipment. Here, it is only necessary to determine that the robot is within the field of view of the video device, and the influence of the problems such as the occlusion of obstacles and the height difference on the actual line of sight is not considered.
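A minimal sketch of such a field-of-view check (Python, hypothetical names; the angle conventions are assumptions of this illustration) might look like:

```python
import math

def in_horizontal_fov(robot_pos, camera_pos, optical_axis_angle, lateral_fov):
    """Return True if the robot lies within the camera's horizontal field of view.

    robot_pos, camera_pos: (x, y) physical positions.
    optical_axis_angle: direction of the optical axis in the horizontal plane
    (radians, measured from the abscissa axis).
    lateral_fov: lateral viewing angle gamma of the camera (radians)."""
    dx = robot_pos[0] - camera_pos[0]
    dy = robot_pos[1] - camera_pos[1]
    bearing = math.atan2(dy, dx)                        # direction from camera to robot
    offset = (bearing - optical_axis_angle + math.pi) % (2 * math.pi) - math.pi
    return abs(offset) <= lateral_fov / 2.0             # occlusion and height ignored here
```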
It is worth mentioning that, before the image of the surrounding environment captured by at least one video device is obtained, it is determined that the robot is within the sight range of the video device; image information provided by video devices whose sight range does not cover the robot is discarded, which prevents the robot from receiving too much useless information and occupying storage space.
Step 304: an image of a surrounding environment captured by at least one video device is acquired.
It should be noted that, in order to ensure real-time control of the robot over the surrounding environment, it is necessary to acquire an image of the surrounding environment captured by at least one video device in real time.
Step 305: and determining that the image of the surrounding environment shot by the video equipment contains the image of the robot.
Specifically, the view of the video device may also be blocked, resulting in no image of the robot in the image of the surrounding environment captured by the video device, so that the robot cannot expand the surrounding environment information based on the image of the surrounding environment.
In a specific implementation, the robot may detect features of the robot in an image of the surrounding environment captured by the video device, and determine whether the robot is in the image of the surrounding environment captured by the video device.
In practical applications, the robot may also determine in other ways whether the image of the surrounding environment captured by the video device includes the image of the robot itself, and the embodiment does not limit the specific determination manner.
In application, if it is determined in step 303 that the robot is within the sight range of the video device, but the image of the robot is not found in the image captured by the video device, it can be determined that the line of sight of the video device is blocked. In this case, the following processing may be adopted: continue to acquire the image of the surrounding environment captured by the video device for a period of time and judge whether it contains the image of the robot, until the image of the robot appears; or, if the image of the robot is still not recognized in the image of the surrounding environment captured by the video device after a preset duration, stop acquiring images from the video device and discard the images already acquired from it.
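A hedged sketch of this fallback (Python; acquire_frame and robot_in_image are hypothetical placeholders for whatever acquisition and detection the system actually uses) could be:

```python
import time

def wait_for_robot_in_feed(acquire_frame, robot_in_image, timeout_s=10.0, period_s=0.5):
    """Keep polling a video device until the robot's image appears, or give up.

    acquire_frame: callable returning the latest frame from the video device.
    robot_in_image: callable returning True if the robot's features are detected."""
    deadline = time.monotonic() + timeout_s
    frames = []
    while time.monotonic() < deadline:
        frame = acquire_frame()
        frames.append(frame)
        if robot_in_image(frame):
            return frame                 # use this frame to expand environment info
        time.sleep(period_s)
    frames.clear()                       # discard images acquired from this device
    return None                          # stop acquiring from this video device
```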
It is worth mentioning that the robot processes only the image of the surrounding environment including the image of the robot itself, reducing the amount of calculation.
Step 306: and expanding the peripheral environment information of the robot according to the images of the peripheral environment shot by each video device.
Step 307: and adopting matched processing measures according to the expanded surrounding environment information.
For example, the robot follows a natural person and executes an instruction to detect surrounding risk factors. Fig. 7 is a schematic diagram of the positions of the robot, the natural person, and other objects; a vehicle D is driving toward robot A and natural person C from the front. However, another object F in front of robot A blocks its view, and the surrounding environment information obtained from the image captured by robot A itself cannot reveal the danger that the vehicle D is approaching. Since robot A can obtain, through the above steps, the image of the surrounding environment captured by another video device (such as the video device E) that has captured the vehicle D, robot A can determine the risk factor ahead (the approaching vehicle D) based on the expanded surrounding environment information and convey the determination result to the natural person through an alarm or other behavior.
It should be noted that the robot may also execute different processing measures according to other instructions (e.g., determining whether there is a risk factor, searching for a suspect, etc.), which is not described in detail herein.
It should be noted that steps 301, 302, 303, 305, and 307 are not mandatory; any one or any combination of them may be selectively performed.
Compared with the prior art, the environment information determination method of this embodiment is executed only when the robot determines that its line of sight is blocked, avoiding the waste of resources caused by executing the method when the robot can already obtain sufficient surrounding environment information by its own capability. Before acquiring the image of the surrounding environment captured by at least one video device, it is determined that the robot is within the sight range of the video device, which prevents the robot from receiving too much useless information and wasting storage and computing resources. Moreover, only images that include the robot are processed, reducing the processing load.
In a specific implementation, in the above two embodiments, the robot and the video device may be configured to cooperate by sharing captured images. Specifically, when the robot obtains a captured image from the video device, the robot may also send the image captured by itself to the video device to expand the viewing angle of the video device; the process by which the video device expands its viewing angle is the same as the process of expanding the robot's viewing angle described above and is not repeated here.
The third embodiment of the present application relates to an environment information determination apparatus, as shown in fig. 8, including an acquisition module 401 and a processing module 402.
The acquiring module 401 is configured to acquire an image of a surrounding environment captured by at least one video device, where a distance between the video device and the robot does not exceed a preset value.
The processing module 402 is configured to expand the peripheral environment information of the robot according to the images of the peripheral environment captured by each video device.
It should be understood that this embodiment is a device embodiment corresponding to the first embodiment, and the embodiment can be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the first embodiment.
A fourth embodiment of the present application relates to an environment information determining apparatus, and as shown in fig. 9, the present embodiment is further improved based on the third embodiment, and the specific improvements are as follows: a determination module, a communication module and an operation module are added.
The determining module 403 is configured to determine that the line of sight of the robot is blocked by an obstacle and that the robot is within the range of the line of sight of the video device before the acquiring module 401 acquires the image of the surrounding environment captured by the at least one video device;
the processing module 402 is further configured to determine, according to the image of the surrounding environment captured by each of the video devices, that the image of the surrounding environment captured by the video device includes the image of the robot before expanding the information of the surrounding environment of the robot;
the communication module 404 is configured to establish connection with each video device and acquire a physical location of each video device before the acquisition module 401 acquires an image of the surrounding environment captured by at least one video device;
the operation module 405 is configured to take a matching processing measure according to the expanded ambient environment information.
It should be understood that the present embodiment is a device embodiment corresponding to the second embodiment, and the present embodiment and the second embodiment can be implemented in cooperation. The related technical details mentioned in the second embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the second embodiment.
It should be noted that each module involved in the third embodiment and the fourth embodiment is a logic module, and in practical application, one logic unit may be one physical unit, may be a part of one physical unit, and may also be implemented by a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, the third embodiment and the fourth embodiment do not introduce elements that are not so closely related to solve the technical problems proposed by the present invention, but this does not indicate that there are no other elements in the third embodiment and the fourth embodiment.
A fifth embodiment of the present application relates to a robot, as shown in fig. 10, comprising at least one processor 501; and a memory 502 communicatively coupled to the at least one processor 501; the memory 502 stores instructions executable by the at least one processor 501, and the instructions are executed by the at least one processor 501, so that the at least one processor 501 can execute the environment information determination method in the above embodiments.
It should be noted that in a specific implementation, the robot may further include a communication component. The communication component receives and/or transmits data, such as images of the surrounding environment captured by the video device, under the control of the processor 501.
In this embodiment, the processor 501 is exemplified by a Central Processing Unit (CPU), and the memory 502 by a Random Access Memory (RAM). The processor 501 and the memory 502 may be connected by a bus or in other ways; in fig. 10, connection by a bus is taken as an example. As a non-volatile computer-readable storage medium, the memory 502 can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the programs implementing the environment information determination method in the embodiments of the present application. The processor 501 executes various functional applications and data processing of the device by running the non-volatile software programs, instructions, and modules stored in the memory 502, that is, implements the above-described environment information determination method.
The memory 502 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store a list of options, etc. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 502 may optionally include memory located remotely from processor 501, which may be connected to an external device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 502, and when executed by the one or more processors 501, perform the method of determining environmental information in any of the method embodiments described above.
The product can execute the method provided by the embodiment of the application, has corresponding functional modules and beneficial effects of the execution method, and can refer to the method provided by the embodiment of the application without detailed technical details in the embodiment.
A sixth embodiment of the present application relates to a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the environment information determination method described in any of the above embodiments.
That is, as can be understood by those skilled in the art, all or part of the steps in the method for implementing the embodiments described above may be implemented by a program instructing related hardware, where the program is stored in a storage medium and includes several instructions to enable a device (which may be a single chip, a chip, or the like) or a processor (processor) to execute all or part of the steps of the method described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the present application, and that various changes in form and details may be made therein without departing from the spirit and scope of the present application in practice.

Claims (13)

1. An environmental information determination method, comprising:
acquiring an image of a surrounding environment shot by at least one video device, wherein the distance between the video device and a robot does not exceed a preset value, and the preset value is determined according to information of the robot;
expanding the peripheral environment information of the robot according to the images of the peripheral environment shot by each video device;
before acquiring an image of the surrounding environment captured by at least one video device, the method further comprises:
and determining that the robot is positioned in the sight range of the camera with the open authority of the video equipment according to the physical position of the video equipment and the parameters of the camera with the open authority of the video equipment.
2. The environmental information determination method according to claim 1, wherein expanding the peripheral environment information of the robot based on the image of the peripheral environment captured by each of the video devices includes:
performing the following processing for each video device respectively: determining the position of the robot in an image of the surrounding environment shot by the video equipment, taking the determined position as image positioning information of the robot, and acquiring the surrounding environment information of the robot monitored by the video equipment according to the image positioning information;
and expanding the peripheral environment information of the robot according to the peripheral environment information of the robot monitored by each video device.
3. The environmental information determination method according to claim 2, wherein determining a position of the robot in the image of the surrounding environment captured by the video device, the determining the position being used as image positioning information of the robot, comprises:
determining the position of the robot in an image of the surrounding environment shot by the camera with the open authority of the video equipment according to the physical position of the robot, the physical position of the video equipment and the parameters of the camera with the open authority of the video equipment, and taking the determined position as the image positioning information of the robot;
or,
and detecting the characteristics of the robot in the image of the surrounding environment shot by the video equipment, and determining the position of the characteristics of the robot as the image positioning information of the robot.
4. The method of claim 3, wherein determining the position of the robot in the image of the surrounding environment captured by the camera of the video device open authority based on the physical position of the robot, the physical position of the video device, and the parameters of the camera of the video device open authority comprises:
determining a value of a deviation angle of the robot relative to an optical axis in a horizontal plane in a visual angle of a camera of the video equipment opening authority according to the physical position of the robot, the physical position of the video equipment and the horizontal plane direction of the optical axis of the camera of the video equipment opening authority;
according to the value of the deviation angle on the horizontal plane and the transverse visual angle information of the camera with the open authority of the video equipment, determining first position reference information of the robot in an image of the surrounding environment shot by the camera with the open authority, wherein the first position reference information is as follows: the proportion of the amount of lateral shift in the image lateral direction from the center of the image to the left or right;
and determining the position of the robot in the image of the surrounding environment shot by the camera with the open authority of the video equipment according to the first position reference information.
5. The environmental information determination method according to claim 4, wherein determining the position of the robot in the image of the surrounding environment taken by the camera of the video device having the open right based on the first position reference information includes:
obtaining a first reference line by starting to transversely shift from the center line of the image according to the first position reference information;
a preset value is horizontally shifted leftwards by taking the first reference line as a center to obtain a second reference line, and the preset value is horizontally shifted rightwards to obtain a third reference line;
detecting a feature of the robot in an image area defined by the second reference line and the third reference line;
determining the position of the robot in the image according to the detected characteristics of the robot.
6. The environmental information determination method according to claim 5, wherein the characteristics of the robot include: a contour characteristic of the robot, and/or an action characteristic of the robot, and/or a light-and-dark characteristic of a signal lamp of the robot.
7. The environmental information determination method according to claim 1, wherein before the expanding the peripheral environment information of the robot based on the image of the peripheral environment taken by each of the video devices, the method further comprises:
and determining that the image of the surrounding environment shot by the video equipment contains the image of the robot.
8. The environmental information determination method according to claim 1, wherein after expanding the peripheral environment information of the robot based on the image of the peripheral environment taken by each of the video devices, the method further comprises:
and adopting matched processing measures according to the expanded surrounding environment information.
9. The environmental information determination method of claim 1, wherein prior to said obtaining the image of the surrounding environment captured by the at least one video device, the method further comprises:
determining that a line of sight of the robot is obstructed by an obstacle.
10. The environmental information determination method of claim 1, wherein prior to said obtaining the image of the surrounding environment captured by the at least one video device, the method further comprises:
and respectively establishing connection with each video device, and acquiring the physical position of each video device.
11. An environmental information determination apparatus, comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring an image of the surrounding environment shot by at least one video device, the distance between the video device and a robot does not exceed a preset value, and the preset value is determined according to the information of the robot;
the processing module is used for expanding the peripheral environment information of the robot according to the images of the peripheral environment shot by each video device;
wherein, before obtaining the image of the surrounding environment shot by at least one video device, further comprising: and determining that the robot is positioned in the sight range of the camera with the open authority of the video equipment according to the physical position of the video equipment and the parameters of the camera with the open authority of the video equipment.
12. A robot, comprising: at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of determining environmental information according to any one of claims 1 to 10.
13. A computer-readable storage medium storing a computer program, wherein the computer program is executed by a processor to implement the environment information determination method according to any one of claims 1 to 10.
CN201880001148.3A 2018-02-12 2018-02-12 Environment information determination method, device, robot and storage medium Active CN108781258B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/076503 WO2019153345A1 (en) 2018-02-12 2018-02-12 Environment information determining method, apparatus, robot, and storage medium

Publications (2)

Publication Number Publication Date
CN108781258A CN108781258A (en) 2018-11-09
CN108781258B true CN108781258B (en) 2021-05-28

Family

ID=64029058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880001148.3A Active CN108781258B (en) 2018-02-12 2018-02-12 Environment information determination method, device, robot and storage medium

Country Status (2)

Country Link
CN (1) CN108781258B (en)
WO (1) WO2019153345A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494848B (en) * 2021-12-21 2024-04-16 重庆特斯联智慧科技股份有限公司 Method and device for determining vision path of robot
CN114666476B (en) * 2022-03-15 2024-04-16 北京云迹科技股份有限公司 Intelligent video recording method, device, equipment and storage medium for robot

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102914303A (en) * 2012-10-11 2013-02-06 江苏科技大学 Navigation information acquisition method and intelligent space system with multiple mobile robots
CN103389699A (en) * 2013-05-09 2013-11-13 浙江大学 Robot monitoring and automatic mobile system operation method based on distributed intelligent monitoring controlling nodes
CN105307115A (en) * 2015-08-07 2016-02-03 浙江海洋学院 Distributed vision positioning system and method based on action robot
JP2016152003A (en) * 2015-02-19 2016-08-22 Jfeスチール株式会社 Self-position estimation method for autonomous mobile robot, autonomous mobile robot, and landmark for self-position estimation
CN107076557A (en) * 2016-06-07 2017-08-18 深圳市大疆创新科技有限公司 Mobile robot recognition positioning method, device, system and mobile robot
CN107368074A (en) * 2017-07-27 2017-11-21 南京理工大学 A kind of autonomous navigation method of robot based on video monitoring

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3841621B2 (en) * 2000-07-13 2006-11-01 シャープ株式会社 Omnidirectional visual sensor
JP4933354B2 (en) * 2007-06-08 2012-05-16 キヤノン株式会社 Information processing apparatus and information processing method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102914303A (en) * 2012-10-11 2013-02-06 江苏科技大学 Navigation information acquisition method and intelligent space system with multiple mobile robots
CN103389699A (en) * 2013-05-09 2013-11-13 浙江大学 Robot monitoring and automatic mobile system operation method based on distributed intelligent monitoring controlling nodes
JP2016152003A (en) * 2015-02-19 2016-08-22 Jfeスチール株式会社 Self-position estimation method for autonomous mobile robot, autonomous mobile robot, and landmark for self-position estimation
CN105307115A (en) * 2015-08-07 2016-02-03 浙江海洋学院 Distributed vision positioning system and method based on action robot
CN107076557A (en) * 2016-06-07 2017-08-18 深圳市大疆创新科技有限公司 Mobile robot recognition positioning method, device, system and mobile robot
CN107368074A (en) * 2017-07-27 2017-11-21 南京理工大学 A kind of autonomous navigation method of robot based on video monitoring

Also Published As

Publication number Publication date
WO2019153345A1 (en) 2019-08-15
CN108781258A (en) 2018-11-09


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210210
Address after: 200245 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai
Applicant after: Dalu Robot Co.,Ltd.
Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)
Applicant before: Shenzhen Qianhaida Yunyun Intelligent Technology Co.,Ltd.

GR01 Patent grant
CP03 Change of name, title or address

Address after: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai
Patentee after: Dayu robot Co.,Ltd.
Address before: 200245 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai
Patentee before: Dalu Robot Co.,Ltd.