WO2017210866A1 - Movable robot identification and positioning method, apparatus, system, and movable robot - Google Patents


Info

Publication number
WO2017210866A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
movable robot
robot
image information
led
Prior art date
Application number
PCT/CN2016/085140
Other languages
English (en)
French (fr)
Inventor
包玉奇
贝世猛
戚晓林
苗向鹏
梁博
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN201680002465.8A priority Critical patent/CN107076557A/zh
Priority to PCT/CN2016/085140 priority patent/WO2017210866A1/zh
Publication of WO2017210866A1 publication Critical patent/WO2017210866A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02 Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 Interpretation of pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements

Definitions

  • Embodiments of the present invention relate to the field of robots, and in particular, to a mobile robot identification and positioning method, device, system, and movable robot.
  • each mobile robot can be operated by wireless remote control, and each robot carries combat weapons such as BB guns, beam launchers, and the like.
  • each robot needs to be identified and positioned.
  • a different identification plate is mounted on each robot, and each identification plate identifies one robot.
  • in addition, each robot is equipped with a positioning device that receives a plurality of wireless signals and sends the strength information of each wireless signal to a server; from the strength information of the wireless signals, the server determines the positioning information of the positioning device, that is, the positioning information of the robot carrying that device.
  • however, the identification plate on the robot falls off easily, which makes the robot's identity difficult to recognize; in addition, the positioning information determined from the strength of the wireless signals received by the positioning device deviates considerably from the device's actual position, resulting in low positioning accuracy of the robot.
  • Embodiments of the present invention provide a mobile robot identification and positioning method, device, system, and mobile robot to improve the recognition and positioning accuracy of the mobile robot.
  • An aspect of the embodiments of the present invention provides a mobile robot identification and positioning method, including:
  • the movable robot is identified and located according to image information of the light source.
  • a mobile robot identification and positioning system including:
  • one or more processors, working individually or collectively, configured to:
  • the movable robot is identified and located according to image information of the light source.
  • Another aspect of the present invention provides a camera provided with a lens module, the camera further comprising:
  • a housing having a light-transmitting window on an outer surface thereof;
  • a plurality of LED lamps mounted in the housing and emitting light through the window;
  • a controller electrically connected to the plurality of LED lamps,
  • wherein the controller drives the LED lamps to emit light and controls the operating states of the plurality of LED lamps.
  • Another aspect of the embodiments of the present invention provides a mobile robot, including:
  • a body, and a moving device coupled to the body for providing power to move the body;
  • a housing having a light-transmitting window on an outer surface thereof;
  • a plurality of LED lamps mounted in the housing and emitting light through the window;
  • a controller electrically connected to the plurality of LED lamps,
  • wherein the controller drives the LED lamps to emit light and controls the operating states of the plurality of LED lamps.
  • the mobile robot recognition and positioning method, device, system, and movable robot provided by the embodiments of the invention use the image information of the light source carried by the movable robot to determine the color and shape of the light source and its position within the image; the robot can be identified from the color and shape of the light source in the image information, and the position of the movable robot in the field can be determined from the position of the light source within the image, thereby improving both the accuracy of identifying the movable robot and the precision of positioning it.
  • the camera provided by the embodiment of the invention is provided with a plurality of LED lights; the camera's controller drives the LED lights to emit light and controls the operating states of the plurality of LED lights so that they emit light in a preset pattern.
  • a movable robot loaded with the camera can then be identified by recognizing image information of the preset pattern, which makes identification of the movable robot convenient.
  • moreover, the camera is a module of the identification system and is convenient to install and remove.
  • FIG. 1 is a flowchart of a method for recognizing and positioning a mobile robot according to Embodiment 1 of the present invention
  • FIG. 1A is a network structure diagram of a mobile robot identification and positioning method according to Embodiment 1 of the present invention.
  • FIG. 1B is a schematic diagram of image information according to Embodiment 1 of the present invention.
  • FIG. 1C is a schematic diagram of image information provided by Embodiment 1 of the present invention.
  • FIG. 2 is a flowchart of a method for recognizing and positioning a mobile robot according to Embodiment 2 of the present invention
  • FIG. 3 is a flowchart of a method for recognizing and positioning a mobile robot according to Embodiment 3 of the present invention.
  • FIG. 3A is a schematic diagram of an editable LED array according to Embodiment 3 of the present invention.
  • FIG. 3B is a schematic diagram of first image information according to Embodiment 3 of the present invention.
  • FIG. 3C is a schematic diagram of first image information according to Embodiment 3 of the present invention.
  • FIG. 3D is a schematic diagram of first image information according to Embodiment 3 of the present invention.
  • FIG. 4 is a flowchart of a method for recognizing and positioning a mobile robot according to Embodiment 4 of the present invention.
  • FIG. 4A is a network structure diagram of a mobile robot identification and positioning method according to Embodiment 4 of the present invention.
  • FIG. 5 is a structural diagram of a mobile robot identification and positioning system according to Embodiment 5 of the present invention.
  • FIG. 6 is a structural diagram of a mobile robot identification and positioning system according to Embodiment 8 of the present invention.
  • FIG. 7 is a structural diagram of a camera according to Embodiment 9 of the present invention.
  • FIG. 8 is a structural diagram of a camera according to Embodiment 10 of the present invention.
  • FIG. 9A is an exploded view of a camera according to Embodiment 11 of the present invention.
  • FIG. 9B is a left side view of the camera according to Embodiment 11 of the present invention.
  • FIG. 9C is a front elevational view of the camera according to Embodiment 11 of the present invention.
  • FIG. 9D is a top plan view of a camera according to Embodiment 11 of the present invention.
  • FIG. 9E is a perspective view of a camera according to Embodiment 11 of the present invention.
  • FIG. 10 is a structural diagram of a mobile robot according to Embodiment 12 of the present invention.
  • when a component is referred to as being "fixed" to another component, it can be directly on the other component, or an intervening component may be present. When a component is considered to be "connected" to another component, it can be directly connected to the other component, or an intervening component may be present.
  • Embodiment 1 of the present invention provides a mobile robot identification and positioning method.
  • FIG. 1 is a flowchart of a method for recognizing and positioning a mobile robot according to Embodiment 1 of the present invention. As shown in FIG. 1, the method in this embodiment may include:
  • Step S101 Acquire image information of a light source carried by the movable robot.
  • the movable robot carries a light source, and the light source includes at least one of the following: a plurality of LED lights, a fluorescent lamp, and infrared light-emitting points.
  • for example, a plurality of LED lights are disposed on the body of the movable robot, and the illuminated LED lights can display different colors and arrangements; or fluorescent lamps of different colors and shapes are disposed on the bodies of different movable robots; or a plurality of infrared light-emitting points are disposed on the body of each movable robot, arranged in different manners and forming different light-emitting shapes.
  • FIG. 1A is a network structural diagram of a mobile robot identification and positioning method according to Embodiment 1 of the present invention.
  • the two movable robots each carry any one of the light sources described above, and the photographing device 21 is provided with a photosensitive material on its lens.
  • the photosensitive material is connected to the processor of the photographing device 21; when it senses the light emitted by a light source, it sends an electrical signal to the processor, which triggers the shutter of the photographing device 21, so that the illuminated light source is captured automatically and the image information of the light source is transmitted to the server 22.
  • the embodiment of the present invention does not limit the number of robots within the shooting range of the photographing device 21.
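The light-triggered capture described above can be sketched as follows; this is an illustrative model only, since the patent does not specify the sensing interface or the threshold, both of which are assumptions here.

```python
# Hypothetical sketch of the photosensitive capture trigger: the processor
# polls the light level reported by the photosensitive material and fires
# the shutter whenever the level reaches a threshold. The threshold value
# and the sample representation are assumptions for illustration.

def capture_on_light(levels, threshold=0.5):
    """Return indices of samples at which a capture would be triggered."""
    return [i for i, level in enumerate(levels) if level >= threshold]

# Two bright flashes in a stream of ambient readings trigger two captures.
shots = capture_on_light([0.1, 0.2, 0.7, 0.4, 0.9])
```

In this sketch each triggered index would correspond to one frame sent on to the server 22.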
  • Step S102 Identify and locate the movable robot according to image information of the light source.
  • FIG. 1B is a schematic diagram of image information provided by Embodiment 1 of the present invention.
  • the image information is image information of a light source photographed by the photographing device 21, and includes a red number 1 and a blue number 2, both formed by illuminated LED lights.
  • from the color and arrangement of the illuminated LED lights in the image information, the server 22 can determine that the two movable robots are respectively the Red Team's No. 1 and the Blue Team's No. 2, thus achieving friend-or-foe identification.
  • if the photographing device 21 photographs only one movable robot, that robot can likewise be identified by the method of this step.
  • FIG. 1C is a schematic diagram of image information provided by Embodiment 1 of the present invention.
  • as shown in FIG. 1C, the image information includes a circle and a square, both formed by a plurality of infrared emission points. If the circle corresponds to the red team and the square to the blue team, then, from the specific shapes formed by the infrared emission points in the image information, the server 22 can determine that the two movable robots are from the red team and the blue team respectively, thereby achieving friend-or-foe identification. In addition, if the photographing device 21 photographs only one movable robot, that robot can be identified by the method of this step.
  • the server 22 recognizes the movable robot according to the color and shape of the fluorescent lamp presented in the image information captured by the photographing device 21.
  • the image information shown in FIG. 1B or FIG. 1C may specifically be image information of the venue where the mobile robot is located, and the server 22 determines the position of the movable robot in the field from the position of the light source in the image; for example, the position in the image of the number 2 shown in FIG. 1B may indicate the position of the Blue Team's No. 2 movable robot in the field photographed by the photographing device 21.
  • because the position of the movable robot changes in real time, the photographing device 21 can acquire image information of the light source carried by the movable robot in real time, so that the server 22 determines the position of the movable robot in real time.
  • the server 22 is also connected with a display screen 23, which displays position information, identity information, motion track information, and the like of the movable robot.
  • FIG. 1B and FIG. 1C are only examples of image information of a light source carried by the movable robot, and do not limit the specific form of the light source.
  • in this embodiment, the image information of the light source carried by the movable robot is used to determine the color and shape of the light source and its position within the image; the robot can be identified from the color and shape of the light source, and its position in the field can be determined from the position of the light source within the image, which improves both the accuracy of recognizing the movable robot and its positioning precision.
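As a concrete illustration of the two roles the light source plays, identification from color and localization from image position can be sketched as below. The pixel representation and the majority-color rule are assumptions for illustration, not details from the patent.

```python
# Hypothetical sketch of the identification-and-positioning step, assuming
# the captured frame has already been reduced to a list of lit pixels,
# each given as (x, y, color). Team membership comes from the dominant
# colour; position comes from the centroid of the lit pixels.

def identify_and_locate(lit_pixels):
    """Return (team, (cx, cy)) for one light source.

    lit_pixels: list of (x, y, color) tuples, color in {"red", "blue"}.
    """
    if not lit_pixels:
        return None, None
    # Identification: majority colour of the lit pixels decides the team.
    reds = sum(1 for _, _, c in lit_pixels if c == "red")
    team = "red" if reds >= len(lit_pixels) - reds else "blue"
    # Positioning: centroid of the lit pixels in image coordinates.
    cx = sum(x for x, _, _ in lit_pixels) / len(lit_pixels)
    cy = sum(y for _, y, _ in lit_pixels) / len(lit_pixels)
    return team, (cx, cy)

pixels = [(10, 10, "red"), (11, 10, "red"), (12, 10, "red"), (11, 11, "blue")]
team, centre = identify_and_locate(pixels)
```

A real implementation would derive the lit-pixel list from color thresholding of the camera frame; the centroid would then be mapped to field coordinates as described in the later embodiments.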
  • Embodiment 2 of the present invention provides a method for recognizing and positioning a mobile robot.
  • the light source is a plurality of LED lights
  • the image information of the light source includes at least one of the following: color information of the plurality of LED lights, arrangement information of the plurality of LED lights, and position information of the plurality of LED lights.
  • FIG. 2 is a flowchart of a method for recognizing and positioning a mobile robot according to Embodiment 2 of the present invention. As shown in FIG. 2, the method in this embodiment may include:
  • Step S201 Acquire image information of a plurality of LED lights carried by the movable robot.
  • the light source carried by the movable robot is specifically a plurality of LED lights
  • the image information of the plurality of LED lights carried by each movable robot, as photographed by the photographing device 21, includes at least one of the following: color information of the plurality of LED lights, arrangement information of the plurality of LED lights, and position information of the plurality of LED lights, where the position information refers specifically to the positions of the LED lights within the image information.
  • the executing entity of this embodiment may be the server 22 in FIG. 1A.
  • specifically, the server 22 acquires the image information of the plurality of LED lights carried by the movable robot from the photographing device 21; the acquisition process is the same as in Embodiment 1 and is not repeated here.
  • Step S202 Identify the movable robot according to color information and arrangement information of the plurality of LED lights.
  • Step S203 Position the movable robot according to position information of the plurality of LED lights.
  • in this embodiment, the image information of the light source carried by the movable robot is used to determine the color and shape of the light source and its position within the image; the robot can be identified from the color and shape of the light source, and its position in the field can be determined from the position of the light source within the image, which improves both the accuracy of recognizing the movable robot and its positioning precision.
  • Embodiment 3 of the present invention provides a mobile robot identification and positioning method.
  • this embodiment is based on the technical solution provided in Embodiment 2, in which the plurality of LED lights constitutes an editable LED array.
  • FIG. 3 is a flowchart of a method for recognizing and positioning a mobile robot according to Embodiment 3 of the present invention. As shown in FIG. 3, the method in this embodiment may include:
  • Step S301 Acquire first image information of the mobile robot, where the first image information includes image information of the LED array;
  • the first image information is image information of the movable robot photographed by the photographing device 21, and each movable robot carries an editable LED array, and the first image information includes not only the movable robot but also Editable LED array.
  • FIG. 3A is a schematic diagram of an editable LED array according to Embodiment 3 of the present invention.
  • the editable LED array includes a plurality of rows and columns, with one LED light at each intersection of a row and a column.
  • the LED at the first row and first column is denoted 11; the on/off state and color of each LED are controllable, and the arrangement of the illuminated LEDs in the array can also be controlled.
  • in FIG. 3A, a white circle indicates an extinguished LED light, and a black circle indicates an illuminated LED light.
  • the illuminated LED lights can form different shapes, such as the Arabic numeral 1 or other characters and numerals, and the illuminated LEDs can display the same color or different colors. In addition, embodiments of the invention do not limit the size of the LED array.
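A minimal model of such an editable array is sketched below; the grid size, the text rendering, and the digit pattern are all invented for illustration and are not specified by the patent.

```python
# A minimal model of the editable LED array of FIG. 3A: each LED is
# addressed by (row, column) and can be off or lit in a colour. The 5x5
# size and the vertical-stroke "1" pattern are assumptions for illustration.

class LEDArray:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.lit = {}          # (row, col) -> colour of an illuminated LED

    def set(self, row, col, colour):
        self.lit[(row, col)] = colour

    def pattern(self):
        """Render the array as text rows: '#' for lit, '.' for off."""
        return ["".join("#" if (r, c) in self.lit else "."
                        for c in range(self.cols))
                for r in range(self.rows)]

# Light a vertical stroke resembling the numeral 1, all in blue.
arr = LEDArray(5, 5)
for r in range(5):
    arr.set(r, 2, "blue")
```

Editing the `lit` mapping changes both the displayed shape (the robot's number or symbol) and the colour combination (its team), which is exactly the editability the embodiment relies on.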
  • FIG. 3B is a schematic diagram of first image information according to Embodiment 3 of the present invention.
  • the first image information of the movable robots photographed by the photographing device 21 includes an image 71 of robot B and an image 72 of robot A; image 71 and image 72 each include an editable LED array, and the color information and arrangement information presented by each LED array are different.
  • the editable LED array in image 71 presents a blue number 2, and the editable LED array in image 72 presents a red number 1.
  • Step S302 Identify the movable robot according to color information and arrangement information of the LED array corresponding to the first image information.
  • in the embodiment of the present invention, the group to which the mobile robot belongs can be determined from the color of the illuminated LED lights in the LED array. For example, if the illuminated LEDs all display red, the mobile robot carrying the array belongs to the red team; if they all display blue, it belongs to the blue team. When there are many participating teams, the group can also be determined from a combination of colors of the illuminated LEDs: for example, a combination of red and blue represents group 1, a combination of red and yellow represents group 2, and so on.
  • the illuminated LED lights in the array can be arranged in various ways; for example, they may form Arabic numerals, specific graphics, or characters. The number of the movable robot is determined from the Arabic numeral, or its identity from the specific graphic or character. From the color information and arrangement information of the illuminated LEDs, it is possible to determine exactly which member of which participating group the movable robot is, thereby achieving accurate recognition of the movable robot.
  • the first image information shown in FIG. 3B includes a blue number 2 and a red number 1. Since robot B corresponds to image 71 and robot A corresponds to image 72, it can be determined that robot B is the Blue Team's No. 2 robot and robot A is the Red Team's No. 1 robot.
  • Step S303 Determine, according to location information of the LED array in the first image information, a location of the mobile robot in a venue corresponding to the first image information.
  • the server 22 may also determine the position of robot A in the field photographed by the photographing device 21 from the position information of the red number 1 in the first image information; that is, the position of the red number 1 relative to the first image information indicates robot A's location in the venue. Similarly, the server 22 determines the position of robot B in the field from the position information of the blue number 2 in the first image information.
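A minimal sketch of how a light source's position in the first image could be scaled to a field position, assuming the camera looks straight down and its frame covers the whole field; the frame and field dimensions below are illustrative, not from the patent.

```python
# Hedged sketch of step S303: the light source's pixel position maps
# linearly to a position on the field, under the assumption of an
# overhead camera whose frame exactly covers the field. Frame size
# (640x480) and field size (10 m x 8 m) are invented for illustration.

def image_to_field(px, py, frame_w, frame_h, field_w, field_h):
    """Scale pixel coordinates (px, py) to field coordinates in metres."""
    return (px / frame_w * field_w, py / frame_h * field_h)

# A red "1" whose centroid sits at pixel (320, 240) in a 640x480 frame
# of a 10 m x 8 m field lies at the centre of the field.
pos = image_to_field(320, 240, 640, 480, 10.0, 8.0)
```

A deployed system would likely replace this linear scaling with a calibrated camera model, but the principle (image position of the LED array indicates field position) is the same.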
  • Step S304 Determine, according to position information of each LED lamp in the first image information in the LED array, an orientation of the movable robot in a field corresponding to the first image information.
  • each illuminated LED light in the LED array carries two pieces of information, position and color, so each can be represented as (X, Y, C), where X is the abscissa of the illuminated LED in the array, Y is its ordinate, and C is its color. It is reasonable to assume that the LED arrays carried by all mobile robots are of the same size.
  • the position of each LED light of the array in the first image information can be determined from the position in the first image information of the LED 11 at the first row and first column, together with the offset of each LED relative to LED 11; from these, the position and orientation of the graphic formed by the illuminated LEDs in the first image information are determined.
  • FIG. 3C is a schematic diagram of first image information according to Embodiment 3 of the present invention.
  • the orientation of the movable robot is determined from the direction of the pattern formed by the illuminated LED lights in the array, expressed as one of east, south, west, and north. As shown in FIG. 3C, the blue number 2 points north, indicating that robot B faces north in the field; the red number 1 points north, indicating that robot A faces north in the field.
  • FIG. 3D is a schematic diagram of first image information according to Embodiment 3 of the present invention.
  • as shown in FIG. 3D, the positions in the first image information of the LED 11 at the first row and first column of each LED array have changed, indicating that the direction of the graphic formed by the illuminated LEDs of each array has changed within the first image information. Specifically, the blue number 2 points east and the red number 1 points west, indicating that robot B faces east in the field and robot A faces west.
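The orientation reading described above can be illustrated as follows, approximating the pattern direction by the vector from the corner LED 11 to the centroid of the illuminated LEDs in image coordinates; the quantisation into exactly four compass directions is an assumption for illustration.

```python
# Illustrative version of step S304: the robot's heading is read from the
# direction of the lit pattern, approximated here by the vector from the
# corner LED "11" to the centroid of the lit LEDs in image coordinates.
# The four-way N/E/S/W quantisation is an invented simplification.

import math

def heading(led11_xy, lit_xys):
    """Map the LED11 -> pattern-centroid vector to one of N/E/S/W."""
    cx = sum(x for x, _ in lit_xys) / len(lit_xys)
    cy = sum(y for _, y in lit_xys) / len(lit_xys)
    dx, dy = cx - led11_xy[0], cy - led11_xy[1]
    # Image y grows downward, so negate dy to get a conventional angle.
    angle = math.degrees(math.atan2(-dy, dx)) % 360
    return ["E", "N", "W", "S"][int(((angle + 45) % 360) // 90)]

# LED11 at (0, 0); the pattern sits above it in the image (smaller y),
# so the robot is read as facing north.
h = heading((0, 0), [(0, -10), (1, -11), (-1, -9)])
```

When the position of LED 11 relative to the pattern changes, as between FIG. 3C and FIG. 3D, the computed vector rotates and the reported heading changes accordingly.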
  • in this embodiment, the movable robot is loaded with the LED array, and image information of the movable robot carrying the array is acquired; the movable robot can be accurately identified from the color and arrangement of the illuminated LEDs of the array in the image information, and accurately located from the position information of the LED array in the image information.
  • in addition, the orientation of the movable robot can be determined from the direction of the graphic formed by the illuminated LED lights in the array.
  • Embodiment 4 of the present invention provides a mobile robot identification and positioning method.
  • the embodiment is based on the technical solution provided by the third embodiment, and the site where the mobile robot is located is divided into a plurality of sub-sites, and each of the sub-sites corresponds to a first camera.
  • 4 is a flowchart of a method for recognizing and positioning a mobile robot according to Embodiment 4 of the present invention. As shown in FIG. 4, the method in this embodiment may include:
  • Step S401 Acquire first image information of the movable robot in the sub-field through a first camera above each sub-site;
  • FIG. 4A is a network structure diagram of a mobile robot identification and positioning method according to Embodiment 4 of the present invention.
  • the field 80 where the mobile robot is located can be divided into six sub-fields, as indicated by the dotted lines, and each sub-field corresponds to one first camera; for example, the first cameras 81-86 each correspond to one sub-field.
  • the imaging range of each first camera is the sub-field pointed to by the dotted arrow.
  • the photographing device 21 involved in the above embodiments may be any one of the first cameras 81-86, and the image information referred to above may be an image, captured by any one of the first cameras 81-86, of the sub-field below it.
  • the LED array is disposed on the top of each movable robot; when each first camera captures images looking down over its sub-field, it directly captures the LED array on top of the movable robot, thereby enabling full-field positioning of the mobile robots.
  • this embodiment does not limit the manner of dividing the site into sub-fields, nor the number of sub-fields.
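Full-field positioning over sub-fields can be sketched as a simple offset computation: each first camera reports a position local to its sub-field, and the server adds that sub-field's offset. The 2 x 3 grid mirrors the six sub-fields of FIG. 4A, while the dimensions and the camera indexing scheme are invented for illustration.

```python
# Sketch of full-field positioning with sub-fields: a robot's global
# position is its position within a sub-field plus that sub-field's
# offset. Sub-field size and the row-major camera indexing are assumed.

SUB_W, SUB_H = 5.0, 4.0        # size of one sub-field in metres (assumed)

def global_position(camera_index, local_x, local_y, cols=3):
    """camera_index 0..5 over a grid with `cols` columns of sub-fields."""
    row, col = divmod(camera_index, cols)
    return (col * SUB_W + local_x, row * SUB_H + local_y)

# Camera 4 (second row, middle column) sees the robot at (1.0, 2.0)
# within its own sub-field.
g = global_position(4, 1.0, 2.0)
```

This keeps each first camera's job local while the server stitches the six views into one field-wide coordinate frame.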
  • Step S402 identifying the movable robot according to color information and arrangement information of the LED array corresponding to the first image information
  • Step S403 determining, according to the location information of the LED array in the first image information, a location of the mobile robot in a venue corresponding to the first image information;
  • Step S404 determining, according to position information of each LED lamp in the first image information in the LED array, an orientation of the movable robot in a field corresponding to the first image information;
  • Step S402 to step S404 are respectively consistent with the foregoing steps S302-S304, and the specific method is not described herein again.
  • Step S405 acquiring second image information captured by the second camera carried by the movable robot;
  • each movable robot further carries a second camera
  • the second camera may be disposed together with the light source or separately from it; that is, the second camera may be mounted at the top, middle, or bottom of the movable robot.
  • the second camera is configured to capture an image of the environment surrounding the movable robot to generate second image information.
  • the second image information is uncompressed image information
  • the second camera is connected to a wireless transmitting device; after capturing the second image information of the surrounding environment, the second camera transmits it directly to the server in uncompressed form.
  • if the distance between the movable robot and the server exceeds the wireless transmission range of the wireless transmitting device, the movable robot can transmit the second image information to the server through a relay device.
  • Step S406 Determine, according to the second image information, surrounding environment information of the movable robot;
  • after receiving the second image information, the server performs image processing on it to determine the surrounding environment information of the movable robot, for example, whether there are obstacles around the movable robot, whether enemy robots are nearby, and whether the movable robot has gone out of the field.
  • Step S407 Control a moving direction of the movable robot according to the surrounding environment information of the movable robot.
  • the server receives the surrounding environment information of the movable robot and displays it on the display screen 23; the display screen 23 can also display the field 80, the divided sub-fields, and the movable robots on each sub-field.
  • the user can input a control command by operating the movable robot on the display screen 23, and the server sends a control command input by the user to the movable robot to control the moving direction of the movable robot.
  • for example, when the display screen 23 shows an obstacle around a certain robot, the user inputs a control command by operating that robot on the display screen 23 to make it bypass the obstacle; the server sends the control command to the robot in the field so that it bypasses the actual obstacle. This realizes remote control of the robot by the user.
  • Step S408 receiving electrical parameter information and/or remaining life information of the power source sent by the mobile robot;
  • the power source continuously consumes power
  • the electrical parameter information of the power source includes at least one of the following: current, voltage, power, and remaining power.
  • the movable robot may be provided with a pressure sensor.
  • when the magnitude of the pressure sensed by the pressure sensor exceeds a threshold, the movable robot has been subjected to a large external impact force and may have been heavily hit by an opponent robot.
  • the processor inside the movable robot determines the lethality of the movable robot according to the position of the pressure sensor and the pressure sensed by the pressure sensor, and determines the remaining life information of the movable robot according to the lethality.
  • alternatively, the movable robot may be provided with a photosensitive material, and the robots of each team may carry infrared beam guns. When the photosensitive material senses that the intensity or duration of infrared irradiation exceeds a threshold, the movable robot has been hit by the infrared beam gun carried by an opposing robot; the processor inside the movable robot determines the degree of injury or lethality from the part of the photosensitive material that sensed the infrared light, the intensity of the infrared light, and the irradiation time, and determines the remaining life information of the movable robot from that degree.
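A minimal sketch of the infrared "hit" rule just described. The threshold values and the multiplicative damage formula are assumptions; the text only states that a hit registers when intensity or irradiation time exceeds a threshold, and that damage depends on the body part, the intensity, and the exposure time:

```python
# Illustrative constants; not values from the patent.
INTENSITY_THRESHOLD = 0.6   # normalised beam intensity
TIME_THRESHOLD = 0.2        # seconds of continuous irradiation

def infrared_damage(part_weight: float, intensity: float, seconds: float) -> float:
    """Damage from one beam exposure; zero unless a threshold is crossed.

    part_weight encodes how vulnerable the irradiated body part is.
    """
    if intensity <= INTENSITY_THRESHOLD and seconds <= TIME_THRESHOLD:
        return 0.0
    return part_weight * intensity * seconds
```

The resulting damage would be subtracted from the robot's remaining life in the same way as pressure-sensor damage.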
  • the mobile robot can transmit the electrical parameter information and/or the remaining life information of the power source to the server in real time.
  • Step S409: displaying the environment image of the movable robot, the position information of the movable robot, and the status information of the movable robot.
  • the server 22 receives the surrounding environment information of the movable robot and displays it on the display screen 23; the server 22 can also display the position information of the movable robot and the state information of the movable robot on the display screen 23.
  • the position information of the movable robot and the state information of the movable robot are displayed in an embedded form in an environment image of the movable robot.
  • the position information of the movable robot includes at least one of: positioning information of the movable robot, motion track information of the movable robot.
  • the state information of the movable robot includes at least one of: identification information of the movable robot, orientation information of the movable robot, electrical parameter information of the power source of the movable robot, and remaining life information of the movable robot.
  • the server receives the second image information captured by the second camera carried by the movable robot; the second image information is uncompressed, which reduces the transmission delay of the image information and ensures that the server receives the second image information quickly. The server can further control the moving direction of the movable robot according to the surrounding environment information in the second image information. In addition, the server is connected to a display and shows the environment image, position information, and status information of the movable robot on it, so that the user can view the positioning information, motion track information, and status information of the movable robot in real time and thereby remotely control the moving direction of the movable robot.
  • Embodiment 5 of the present invention provides a mobile robot identification and positioning system.
  • FIG. 5 is a structural diagram of a mobile robot identification and positioning system according to Embodiment 5 of the present invention.
  • the mobile robot identification and positioning system 50 includes one or more processors 51, which work individually or jointly.
  • the processor 51 is configured to: acquire image information of a light source carried by the movable robot; and identify and locate the movable robot according to image information of the light source.
  • the mobile robot identification and positioning system 50 further includes an image sensor 52 communicatively coupled to the processor 51.
  • the image sensor 52 is configured to capture image information of the light source and transmit it to the processor 51.
  • the light source includes at least one of the following: a plurality of LED lights, a fluorescent lamp, and infrared light.
  • from the image information of the light source carried by the movable robot, the color and shape of the light source and its position relative to the image information are determined; the robot can be identified from the color and shape of the light source in the image information, and the position of the movable robot in the field can be determined from the position of the light source relative to the image information, which improves both the accuracy of identifying the movable robot and its positioning accuracy.
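The identification-plus-positioning step can be sketched in a few lines: the color of the detected light-source pixels selects the team, and their centroid gives the robot's position in the image. The color-to-team mapping follows the example convention in the text (red = red team, blue = blue team); everything else here is an illustrative assumption:

```python
TEAM_BY_COLOR = {"red": "red team", "blue": "blue team"}

def identify_and_locate(lit_pixels):
    """lit_pixels: list of (x, y, colour) detected for one robot's marker.

    Returns (team, centroid) where centroid is the marker's image position.
    """
    colours = {c for _, _, c in lit_pixels}
    # Assumption for this sketch: one robot's marker is single-coloured.
    assert len(colours) == 1
    colour = colours.pop()
    xs = [x for x, _, _ in lit_pixels]
    ys = [y for _, y, _ in lit_pixels]
    centroid = (sum(xs) / len(xs), sum(ys) / len(ys))
    return TEAM_BY_COLOR[colour], centroid
```

Shape recognition of the lit pattern (e.g. distinguishing a digit "1" from a "2") would sit on top of this, turning the pixel arrangement into a robot number.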
  • Embodiment 6 of the present invention provides a mobile robot identification and positioning system.
  • when the light source is a plurality of LED lights, the image information of the light source includes at least one of the following: color information of the plurality of LED lights, arrangement information of the plurality of LED lights, and position information of the plurality of LED lights.
  • the processor 51 is specifically configured to identify the movable robot according to color information and arrangement information of the plurality of LED lamps.
  • the processor 51 is specifically configured to locate the movable robot according to position information of the plurality of LED lights.
  • from the image information of the light source carried by the movable robot, the color and shape of the light source and its position relative to the image information are determined; the robot can be identified from the color and shape of the light source in the image information, and the position of the movable robot in the field can be determined from the position of the light source relative to the image information, which improves both the accuracy of identifying the movable robot and its positioning accuracy.
  • Embodiment 7 of the present invention provides a mobile robot identification and positioning system. Based on the technical solution provided in Embodiment 6, the plurality of LED lamps constitute an editable LED array.
  • the image sensor 52 is specifically configured to capture first image information of the movable robot, wherein the first image information includes image information of the LED array.
  • the processor 51 identifies the movable robot according to the color information and the arrangement information of the LED array corresponding to the first image information.
  • the processor 51 is further specifically configured to determine, according to the position information of the LED array in the first image information, the position of the movable robot in the field corresponding to the first image information.
  • the processor 51 is further configured to determine, according to position information of each LED lamp in the first image information in the LED array, an orientation of the movable robot in a field corresponding to the first image information.
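The orientation step can be sketched as follows. The patent anchors the array at the LED in row 1, column 1 (LED 11) and reads the facing direction from where the lit pattern lies; here the heading is taken from the image-space vector from the pattern centroid to that anchor LED, and the compass convention (image "up" = north) is an assumption borrowed from the FIG. 3C discussion later in the document:

```python
import math

def heading_degrees(anchor_xy, lit_xys):
    """Compass heading of the robot from LED pixel positions.

    anchor_xy: image position of LED 11 (row 1, column 1).
    lit_xys:   image positions of all lit LEDs in the array.
    Returns degrees clockwise from north (north = image up).
    """
    cx = sum(x for x, _ in lit_xys) / len(lit_xys)
    cy = sum(y for _, y in lit_xys) / len(lit_xys)
    dx, dy = anchor_xy[0] - cx, anchor_xy[1] - cy
    # Image y grows downward, so north (up) corresponds to -y.
    return math.degrees(math.atan2(dx, -dy)) % 360.0
```

With this convention, an anchor directly above the lit pattern yields 0 degrees (facing north), and an anchor to the right yields 90 degrees (facing east).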
  • the movable robot carries the LED array, and image information of the robot carrying the LED array is obtained; the movable robot can be accurately identified from the color and arrangement of the lit LEDs of the array in the image information, and accurately positioned from the position information of the LED array in the image information. In addition, the orientation of the movable robot in the image information can be determined from the direction of the figure formed by the lit LEDs in the LED array.
  • Embodiment 8 of the present invention provides a mobile robot identification and positioning system.
  • the field is divided into a plurality of sub-fields, each with a corresponding first camera above it; correspondingly, the processor 51 acquires, through the first camera above each sub-field, the first image information of the movable robot in that sub-field.
  • the mobile robot identification and positioning system 50 further includes a wireless receiving device 53 communicatively connected to the processor 51; the wireless receiving device 53 is configured to receive second image information captured and transmitted by a second camera carried by the movable robot.
  • the processor 51 is further configured to determine the surrounding environment information of the movable robot according to the second image information, and to generate, according to the surrounding environment information of the movable robot, a control command for controlling the moving direction of the movable robot.
  • the mobile robot identification and positioning system 50 further includes a wireless transmitting device 54 communicatively coupled to the processor 51; the wireless transmitting device 54 is configured to send the control command to the movable robot so that the movable robot changes its moving direction in accordance with the control command.
  • the second image information is uncompressed image information.
  • the wireless receiving device 53 is further configured to receive electrical parameter information of the power source and/or remaining life information transmitted by the movable robot.
  • the mobile robot identification and positioning system 50 further includes a display 55 communicatively coupled to the processor 51; the display 55 is configured to display the environment image of the movable robot, the position information of the movable robot, and the status information of the movable robot.
  • position information of the movable robot and the state information of the movable robot are displayed in an embedded form in an environment image of the movable robot.
  • the position information of the movable robot includes at least one of: positioning information of the movable robot, motion track information of the movable robot.
  • the state information of the movable robot includes at least one of: identification information of the movable robot, orientation information of the movable robot, electrical parameter information of the power source of the movable robot, and remaining life information of the movable robot.
  • the server receives the second image information captured by the second camera carried by the movable robot; the second image information is uncompressed, which reduces the transmission delay of the image information and ensures that the server receives the second image information quickly. The server can further control the moving direction of the movable robot according to the surrounding environment information in the second image information. In addition, the server is connected to a display and shows the environment image, position information, and status information of the movable robot on it, so that the user can view the positioning information, motion track information, and status information of the movable robot in real time and thereby remotely control the moving direction of the movable robot.
  • Embodiment 9 of the present invention provides a camera.
  • FIG. 7 is a structural diagram of a camera according to Embodiment 9 of the present invention.
  • the camera is provided with a lens module 89 and further includes a housing 90, a plurality of LED lights 91, and a controller 92. The outer surface of the housing 90 is provided with a lamp window 93; the plurality of LED lights 91 are mounted in the housing 90, and the light they emit passes through the lamp window 93; the controller 92 is electrically connected to the plurality of LED lights 91; the controller 92 drives the LED lights to emit light and controls the working state of the plurality of LED lights 91.
  • the working state includes at least one of: the emission color of the LED lights, and the arrangement of the LED lights that are lit.
  • when the LED lights are RGB LED lights, the controller controls the emission color of the RGB LED lights.
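The "editable LED array" idea can be sketched as the controller lighting a subset of an R×C grid in one color to draw a glyph (the text's example is a red "1" for red-team robot No. 1). The 5×3 bitmap and the grid size below are illustrative choices, not dimensions from the patent:

```python
# A 5x3 bitmap of the digit "1"; "1" marks an LED that should be lit.
DIGIT_1 = ["010",
           "110",
           "010",
           "010",
           "111"]

def render_pattern(bitmap, colour):
    """Return {(row, col): colour} for every LED that should be lit."""
    return {(r, c): colour
            for r, row in enumerate(bitmap)
            for c, bit in enumerate(row) if bit == "1"}
```

A controller firmware would walk this mapping and drive each addressed RGB LED to the requested color, leaving all other LEDs off.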
  • the camera provided by the embodiment of the invention is provided with a plurality of LED lights; the camera's controller drives the LED lights to emit light and controls their working state so that the LED lights display a preset pattern. By recognizing the image information of the preset pattern, the movable robot carrying the camera can be identified conveniently.
  • as a module of the identification system, the camera is easy to install and remove.
  • Embodiment 10 of the present invention provides a camera.
  • FIG. 8 is a structural diagram of a camera according to Embodiment 10 of the present invention. As shown in FIG. 8, on the basis of the embodiment shown in FIG. 7, the camera further includes an image transmission device 94 connected to the lens module 89; the image transmission device 94 is configured to transmit the original image captured by the lens module 89 directly in an uncompressed manner.
  • the internal cavity of the housing 90 is partitioned into a lamp cavity 100 and a lens cavity 101.
  • a plurality of LED lamps 91 are mounted in the lamp cavity 100, and a lens module 89 is mounted in the lens cavity 101.
  • transmitting the original image captured by the lens module directly in an uncompressed manner saves the time of image compression and decompression, reduces the image transmission delay, and improves image transmission efficiency.
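Some back-of-the-envelope arithmetic for this design choice: sending raw frames trades bandwidth for latency. The resolution, frame rate, and bit depth below are assumed example values, not figures from the patent:

```python
def raw_bitrate_mbps(width, height, fps, bits_per_pixel=24):
    """Bandwidth needed to stream uncompressed video, in megabits/s."""
    return width * height * bits_per_pixel * fps / 1e6

# e.g. 640x480 at 30 fps, 24-bit colour:
#   640 * 480 * 24 * 30 / 1e6 = 221.184 Mbit/s
# which is heavy but plausible over a short-range 5.8 GHz link, while
# skipping the encode/decode delay entirely.
```

The cost of compression would be a per-frame encode/decode delay; dropping it is what keeps the control loop responsive.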
  • Embodiment 11 of the present invention provides a camera.
  • 9A is an exploded view of a camera according to Embodiment 11 of the present invention.
  • the camera includes a top cover 43, an upper cover 37, and a base 38.
  • the upper cover 37 is located between the top cover 43 and the base 38.
  • the upper cover 37 and the top cover 43 are spliced together to form the lamp cavity 100 shown in FIG. 8.
  • the upper cover 37 and the base 38 are spliced together to form the lens cavity 101 shown in FIG. 8.
  • the lens protection glass 30 and the lens 31 constitute the lens module 89 of FIG. 7 or FIG. 8;
  • the LED lamp window 32 is specifically the lamp window 93 of FIG. 7 or FIG. 8; one LED lamp 33 corresponds to one LED lamp window 32, and the plurality of LED lamps constitute an editable LED array;
  • the image transmission board 34 is specifically the image transmission device 94 of FIG. 8;
  • connection 35 can be used to fix the camera on the body of the movable robot;
  • the slot card 36 can be used to connect the upper cover 37 and the base 38;
  • the camera board 39 can be used to fix the lens 31;
  • the 5.8G antenna 40 can be used to wirelessly transmit image information, position information of the movable robot, status information, etc. to the server; the power supply board 41 can be used to power the camera;
  • the fan 42 can be used to cool the LED lamps 33 to prevent them from being burnt out by prolonged illumination, so that the movable robot can be accurately identified and positioned.
  • FIG. 9B is a left side view of the camera according to Embodiment 11 of the present invention. As shown in FIG. 9B, the left side view of the camera includes the lens protection glass 30 shown in FIG. 9A.
  • FIG. 9C is a front view of the camera according to Embodiment 11 of the present invention.
  • FIG. 9D is a plan view of a camera according to Embodiment 11 of the present invention;
  • FIG. 9E is a perspective view of the camera according to Embodiment 11 of the present invention.
  • the internal cavity of the camera housing is divided into a lamp cavity and a lens cavity; the plurality of LED lights are installed in the lamp cavity and the lens module in the lens cavity. Placing the LEDs and the lens module separately prevents light from the lens module from affecting the light emitted by the LEDs, further improving the accuracy of recognizing the movable robot.
  • Embodiment 12 of the present invention provides a movable robot.
  • FIG. 10 is a structural diagram of a mobile robot according to Embodiment 12 of the present invention.
  • the movable robot is described using a remote-controlled chassis as an example.
  • the mobile robot 1 includes a body 1003, a mobile device 1001, and a camera 1002, wherein the mobile device 1001 is coupled to the body for providing power for movement of the body; the camera 1002 is mounted to the body.
  • the camera 1002 may specifically be the camera described in any of the embodiments of the ninth embodiment, the tenth embodiment, and the eleventh embodiment.
  • the camera is provided with a lens module 89.
  • the camera further includes: a housing 90, a plurality of LED lights 91 and a controller 92, wherein the outer surface of the housing 90 is provided with a light window 93;
  • the LED lamp 91 is mounted in the housing 90, and the emitted light passes through the window 93;
  • the controller 92 is electrically connected to the plurality of LED lamps 91;
  • the controller 92 drives the LED lights to emit light and controls the working state of the plurality of LED lights 91.
  • the working state includes at least one of a light emitting color of the LED lamp, and an arrangement manner of the plurality of LED lights that emit light.
  • the LED light is an RGB LED light
  • the controller controls a light color of the RGB LED light.
  • the camera further includes an image transmission device 94 coupled to the lens module 89.
  • the image transmission device 94 is configured to directly transmit the original image captured by the lens module 89 in an uncompressed manner.
  • the internal cavity of the housing 90 is partitioned into a lamp cavity 100 and a lens cavity 101.
  • a plurality of LED lamps 91 are mounted in the lamp cavity 100, and a lens module 89 is mounted in the lens cavity 101.
  • the camera includes a top cover, an upper cover, and a base, the upper cover being located between the top cover and the base. The upper cover and the top cover are spliced together to form the lamp cavity 100 shown in FIG. 8, and the upper cover and the base are spliced together to form the lens cavity 101 shown in FIG. 8.
  • the lens protection glass 30 and the lens 31 constitute the lens module 89 of FIG. 7 or FIG. 8;
  • the LED lamp window 32 is specifically the lamp window 93 of FIG. 7 or FIG. 8; one LED lamp 33 corresponds to one LED lamp window 32, and the plurality of LED lamps constitute an editable LED array;
  • the image transmission board 34 is specifically the image transmission device 94 of FIG. 8;
  • connection 35 can be used to fix the camera on the body of the movable robot 1;
  • the slot card 36 can be used to connect the upper cover 37 and the base 38;
  • the camera board 39 can be used to fix the lens 31;
  • the 5.8G antenna 40 can be used to wirelessly transmit image information to the server, position information of the movable robot 1, status information, etc.;
  • the power supply board 41 can be used to power the camera;
  • the fan 42 can be used to cool the LED lamps 33 to prevent them from being burnt out by prolonged illumination, so that the movable robot 1 can be accurately identified and positioned.
  • from the image information of the light source carried by the movable robot 1, the color and shape of the light source and its position relative to the image information are determined; the robot can be identified from the color and shape of the light source in the image information, and the position of the movable robot 1 in the field can be determined from the position of the light source relative to the image information, which improves both the accuracy of recognizing the movable robot 1 and its positioning accuracy.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of units is only a logical functional division; in actual implementation there may be other divisions. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the above-described integrated unit implemented in the form of a software functional unit can be stored in a computer readable storage medium.
  • the above software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform part of the steps of the methods of the various embodiments of the present invention.
  • the foregoing storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A method, apparatus, and system for identifying and positioning a movable robot (1), and a movable robot (1). The method comprises: acquiring image information of a light source (91) carried by the movable robot (1) (S101); and identifying and positioning the movable robot (1) according to the image information of the light source (91) (S102). From the image information of the light source (91) carried by the movable robot (1), the color and shape of the light source (91) in the image information and the position of the light source (91) relative to the image information are determined; the robot (1) can be identified from the color and shape of the light source (91) in the image information, and the position of the movable robot (1) in the field (80) can be determined from the position of the light source (91) relative to the image information, which improves the accuracy of identifying the movable robot (1) and, at the same time, the positioning accuracy of the movable robot (1).

Description

A method, apparatus, and system for identifying and positioning a movable robot, and a movable robot

Technical Field

Embodiments of the present invention relate to the field of robotics, and in particular to a method, apparatus, and system for identifying and positioning a movable robot, and to a movable robot.
Background Art

In robot combat competitions, multiple mobile robots are divided into two opposing sides. Each mobile robot can be controlled by wireless remote control, and each robot carries combat weapons, for example a BB-pellet launcher or a beam gun.

However, when multiple robots appear on the same field at the same time, each robot needs to be identified and positioned. In the prior art, to distinguish the robots, a different identification plate is attached to each robot, one plate identifying one robot. In addition, to position the robots, each robot is fitted with a positioning device that receives multiple wireless signals and sends the strength information of each wireless signal to a server; the server determines the positioning information of the positioning device, i.e. of the robot carrying it, from the strength information of the wireless signals.

When a robot is moving or fighting other robots, the identification plate on its body easily falls off, making the robot's identity hard to recognize. Moreover, the positioning information determined from the strength of the multiple wireless signals received by the positioning device deviates considerably from the device's actual position, so the positioning accuracy of the robot is low.
Summary of the Invention

Embodiments of the present invention provide a method, apparatus, and system for identifying and positioning a movable robot, and a movable robot, so as to improve the accuracy of identifying and positioning the movable robot.

One aspect of the embodiments of the present invention provides a method for identifying and positioning a movable robot, comprising:

acquiring image information of a light source carried by the movable robot;

identifying and positioning the movable robot according to the image information of the light source.

Another aspect of the embodiments of the present invention provides a system for identifying and positioning a movable robot, comprising:

one or more processors, working individually or jointly, the processors being configured to:

acquire image information of a light source carried by the movable robot; and

identify and position the movable robot according to the image information of the light source.

Another aspect of the embodiments of the present invention provides a camera provided with a lens module, the camera further comprising:

a housing, the outer surface of which is provided with a lamp window;

a plurality of LED lights mounted in the housing, the light they emit passing through the lamp window; and

a controller electrically connected to the plurality of LED lights;

wherein the controller drives the LED lights to emit light and controls the working state of the plurality of LED lights.

Another aspect of the embodiments of the present invention provides a movable robot, comprising:

a body;

a moving device connected to the body and configured to provide power for the movement of the body; and

a camera mounted on the top of the body, the camera being provided with a lens module and further comprising:

a housing, the outer surface of which is provided with a lamp window;

a plurality of LED lights mounted in the housing, the light they emit passing through the lamp window; and

a controller electrically connected to the plurality of LED lights;

wherein the controller drives the LED lights to emit light and controls the working state of the plurality of LED lights.

In the identification and positioning method, apparatus, and system for a movable robot and the movable robot provided by the embodiments of the present invention, the color and shape of the light source in the image information and the position of the light source relative to the image information are determined from the image information of the light source carried by the movable robot; the robot can be identified from the color and shape of the light source in the image information, and the position of the movable robot in the field can be determined from the position of the light source relative to the image information, which improves the accuracy of identifying the movable robot and, at the same time, the positioning accuracy of the movable robot.

The camera provided by the embodiments of the present invention is provided with a plurality of LED lights; the camera's controller drives the LED lights to emit light and controls the working state of the plurality of LED lights so that they display a preset pattern of light. By recognizing the image information of the preset pattern, the movable robot carrying the camera can be identified conveniently. At the same time, as a module of the identification system, the camera is easy to install and remove.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.

FIG. 1 is a flowchart of a method for identifying and positioning a movable robot according to Embodiment 1 of the present invention;

FIG. 1A is a diagram of a network structure to which the method of Embodiment 1 of the present invention is applicable;

FIG. 1B is a schematic diagram of image information according to Embodiment 1 of the present invention;

FIG. 1C is a schematic diagram of image information according to Embodiment 1 of the present invention;

FIG. 2 is a flowchart of the method for identifying and positioning a movable robot according to Embodiment 2 of the present invention;

FIG. 3 is a flowchart of the method for identifying and positioning a movable robot according to Embodiment 3 of the present invention;

FIG. 3A is a schematic diagram of an editable LED array according to Embodiment 3 of the present invention;

FIG. 3B is a schematic diagram of first image information according to Embodiment 3 of the present invention;

FIG. 3C is a schematic diagram of first image information according to Embodiment 3 of the present invention;

FIG. 3D is a schematic diagram of first image information according to Embodiment 3 of the present invention;

FIG. 4 is a flowchart of the method for identifying and positioning a movable robot according to Embodiment 4 of the present invention;

FIG. 4A is a diagram of a network structure to which the method of Embodiment 4 of the present invention is applicable;

FIG. 5 is a structural diagram of a system for identifying and positioning a movable robot according to Embodiment 5 of the present invention;

FIG. 6 is a structural diagram of the system for identifying and positioning a movable robot according to Embodiment 8 of the present invention;

FIG. 7 is a structural diagram of a camera according to Embodiment 9 of the present invention;

FIG. 8 is a structural diagram of a camera according to Embodiment 10 of the present invention;

FIG. 9A is an exploded view of a camera according to Embodiment 11 of the present invention;

FIG. 9B is a left side view of the camera according to Embodiment 11 of the present invention;

FIG. 9C is a front view of the camera according to Embodiment 11 of the present invention;

FIG. 9D is a top view of the camera according to Embodiment 11 of the present invention;

FIG. 9E is a perspective view of the camera according to Embodiment 11 of the present invention;

FIG. 10 is a structural diagram of a movable robot according to Embodiment 12 of the present invention.
Reference numerals:

1 - movable robot; 21 - photographing device; 22 - server;
23 - display screen; 51 - processor; 30 - lens protection glass;
31 - lens; 32 - LED lamp window; 33 - LED lamp;
34 - image transmission board; 35 - connection wire; 36 - wire-slot clamp plate;
37 - upper cover; 38 - base; 39 - camera board;
40 - 5.8G antenna; 41 - power supply board; 42 - fan;
43 - top cover; 52 - image sensor; 53 - wireless receiving device;
54 - wireless transmitting device; 55 - display; 71 - image of robot B;
72 - image of robot A; 81 to 86 - first cameras; 89 - lens module;
90 - housing; 91 - plurality of LED lights; 92 - controller;
93 - lamp window; 94 - image transmission device; 100 - lamp cavity;
101 - lens cavity; 1001 - moving device; 1002 - camera
Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.

It should be noted that when a component is said to be "fixed to" another component, it may be directly on the other component or an intervening component may also be present. When a component is considered to be "connected to" another component, it may be directly connected to the other component or an intervening component may be present at the same time.

Unless defined otherwise, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present invention. The terms used in the specification of the present invention are for the purpose of describing specific embodiments only and are not intended to limit the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.

Some embodiments of the present invention are described in detail below with reference to the drawings. Where no conflict arises, the embodiments below and the features in the embodiments may be combined with one another.
Embodiment 1

Embodiment 1 of the present invention provides a method for identifying and positioning a movable robot. FIG. 1 is a flowchart of the method according to Embodiment 1. As shown in FIG. 1, the method in this embodiment may comprise:

Step S101: acquiring image information of a light source carried by the movable robot.

In the embodiment of the present invention, the movable robot carries a light source, the light source comprising at least one of: a plurality of LED lights, a fluorescent lamp, and infrared light. For example, a plurality of LED lights are provided on the body of the movable robot, and the lit LEDs among them can present different colors and arrangements; or fluorescent lamps of different colors and shapes are provided on the bodies of different movable robots; or a plurality of infrared light-emitting points are provided on the body of each movable robot, with different arrangements and different luminous shapes.

FIG. 1A is a diagram of a network structure to which the method of Embodiment 1 is applicable. As shown in FIG. 1A, two movable robots A and B are within the shooting range of the photographing device 21, each carrying any of the light sources described above. A photosensitive material is provided on the lens of the photographing device 21 and can be connected to the processor of the photographing device 21. When the photosensitive material senses light emitted by a light source, it sends an electrical signal to the processor, so that the processor triggers the shooting button of the photographing device 21, automatically photographs the lit light source, and sends the image information of the light source to the server 22. The embodiment of the present invention does not limit the number of robots within the shooting range of the photographing device 21.

Step S102: identifying and positioning the movable robot according to the image information of the light source.

If the two movable robots shown in FIG. 1A each carry a plurality of LED lights, the lit LEDs can present different colors and arrangements. FIG. 1B is a schematic diagram of the image information according to Embodiment 1. As shown in FIG. 1B, this image information is image information of the light sources captured by the photographing device 21 and contains a red digit 1 and a blue digit 2, both formed by lit LED lights. If it is agreed in advance that red corresponds to the red team and blue to the blue team, the server 22 can determine, from the colors and arrangements presented by the lit LEDs in the image information, that the two movable robots are respectively No. 1 of the red team and No. 2 of the blue team, thereby achieving friend-or-foe identification. In addition, if the photographing device 21 captures only one movable robot, that robot can be identified by the method of this step.

If the two movable robots shown in FIG. 1A each carry a plurality of infrared emission points, with different arrangements and different luminous shapes, FIG. 1C is a schematic diagram of the image information according to Embodiment 1. As shown in FIG. 1C, the image information contains a circle and a square, both formed by a plurality of infrared emission points. If it is agreed in advance that the circle corresponds to the red team and the square to the blue team, the server 22 can determine, from the specific shapes formed by the infrared emission points in the image information, that the two movable robots come from the red team and the blue team respectively, thereby achieving friend-or-foe identification. Likewise, if the photographing device 21 captures only one movable robot, that robot can be identified by the method of this step.

If the two movable robots shown in FIG. 1A carry fluorescent lamps of different colors and shapes, the server 22 identifies the movable robots from the colors and shapes presented by the fluorescent lamps in the image information captured by the photographing device 21.

In addition, the image information shown in FIG. 1B or FIG. 1C may specifically be image information of the field where the movable robot is located; the server 22 determines the position of the movable robot in the field from the position of the light source in the image. For example, the position of the digit 2 in the image shown in FIG. 1B can represent the position of blue-team robot No. 2 in the field photographed by the photographing device 21.
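The pixel-to-field mapping implied above can be sketched simply: with a downward-looking camera covering a rectangular field or sub-field, the light source's pixel coordinates scale linearly to field coordinates. The image and field dimensions below are assumed example values:

```python
def pixel_to_field(px, py, img_w, img_h, field_w, field_h):
    """Map a pixel (px, py) in a top-down image to field coordinates.

    Assumes the camera's image exactly covers a field_w x field_h
    rectangle (e.g. metres) with no lens distortion.
    """
    return (px / img_w * field_w, py / img_h * field_h)
```

A real deployment would add per-camera calibration and an offset for which sub-field the camera covers, but the linear scaling is the core of the idea.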
Furthermore, the position of the movable robot changes in real time, and the photographing device 21 can acquire the image information of the light source carried by the movable robot in real time, so that the server 22 determines the position of the movable robot in real time. As shown in FIG. 1A, the server 22 is also connected to a display screen 23, which displays the position information, identity information, and motion track information of the movable robots.

It should be noted that FIG. 1B and FIG. 1C merely illustrate the image information of the light sources carried by the movable robots and do not limit the specific form presented by the lit light sources.

In this embodiment, the color and shape of the light source in the image information and the position of the light source relative to the image information are determined from the image information of the light source carried by the movable robot; the robot can be identified from the color and shape of the light source in the image information, and the position of the movable robot in the field can be determined from the position of the light source relative to the image information, which improves the accuracy of identifying the movable robot and, at the same time, the positioning accuracy of the movable robot.
Embodiment 2

Embodiment 2 of the present invention provides a method for identifying and positioning a movable robot. On the basis of the technical solution provided in Embodiment 1, the light source is a plurality of LED lights, and the image information of the light source includes at least one of the following: color information of the plurality of LED lights, arrangement information of the plurality of LED lights, and position information of the plurality of LED lights. FIG. 2 is a flowchart of the method according to Embodiment 2. As shown in FIG. 2, the method in this embodiment may comprise:

Step S201: acquiring image information of the plurality of LED lights carried by the movable robot.

In this embodiment, the light source carried by the movable robot is specifically a plurality of LED lights. The image information, captured by the photographing device 21, of the plurality of LED lights carried by each movable robot includes at least one of the following: color information of the plurality of LED lights, arrangement information of the plurality of LED lights, and position information of the plurality of LED lights, where the position information is specifically the position of the plurality of LED lights in the image information.

The executing entity of this embodiment may be the server 22 in FIG. 1A, which acquires the image information of the plurality of LED lights carried by the movable robot from the photographing device 21; the specific acquisition process is the same as the method in Embodiment 1 and is not repeated here.

Step S202: identifying the movable robot according to the color information and arrangement information of the plurality of LED lights.

Step S203: positioning the movable robot according to the position information of the plurality of LED lights.

The method by which the server 22 identifies the movable robot from the color information and arrangement information of the plurality of LED lights, and the method by which it positions the movable robot from the position information of the plurality of LED lights, are the same as in Embodiment 1 and are not repeated here.

In this embodiment, the color and shape of the light source in the image information and the position of the light source relative to the image information are determined from the image information of the light source carried by the movable robot; the robot can be identified from the color and shape of the light source in the image information, and the position of the movable robot in the field can be determined from the position of the light source relative to the image information, which improves the accuracy of identifying the movable robot and, at the same time, the positioning accuracy of the movable robot.
Embodiment 3

Embodiment 3 of the present invention provides a method for identifying and positioning a movable robot. On the basis of the technical solution provided in Embodiment 2, the plurality of LED lights constitute an editable LED array. FIG. 3 is a flowchart of the method according to Embodiment 3. As shown in FIG. 3, the method in this embodiment may comprise:

Step S301: acquiring first image information of the movable robot, the first image information including image information of the LED array.

In this embodiment, the first image information is image information of the movable robot captured by the photographing device 21. Each movable robot carries one editable LED array, and the first image information includes not only the movable robot but also the editable LED array.

FIG. 3A is a schematic diagram of an editable LED array according to Embodiment 3. As shown in FIG. 3A, the editable LED array comprises multiple rows and columns, with one LED light at each row-column intersection; the LED light in the first row and first column is denoted 11. The on/off state and color of each LED light are controllable, and the arrangement of the lit LEDs in the array is also controllable. For example, with an unlit LED shown as a white circle and a lit LED as a black circle, the lit LEDs can present different shapes, such as the Arabic numeral 1 or other characters and digits, and the lit LEDs can display the same color or different colors. The embodiment of the present invention does not limit the size of the LED array.

FIG. 3B is a schematic diagram of first image information according to Embodiment 3. As shown in FIG. 3B, the first image information of the movable robots captured by the photographing device 21 includes an image 71 of robot B and an image 72 of robot A, each containing an editable LED array with different color and arrangement information: specifically, the editable LED array in image 71 presents a blue digit 2, and the editable LED array in image 72 presents a red digit 1.

Step S302: identifying the movable robot according to the color information and arrangement information of the LED array corresponding to the first image information.

In the embodiment of the present invention, the team to which a movable robot belongs can be determined from the color of the lit LEDs in the LED array; for example, if the lit LEDs all display red, the movable robot carrying that LED array belongs to the red team, and if they all display blue, it belongs to the blue team. When there are many competing teams, the team can also be determined from a combination of the colors of the lit LEDs: for example, a combination of red and blue represents team 1, a combination of red and yellow represents team 2, and so on. In addition, the lit LEDs can be arranged in many ways; for example, they may present an Arabic numeral or a specific figure or character, with the numeral determining the number of the movable robot, or the specific figure or character determining its identity. From the color information and arrangement information of the lit LEDs in the LED array, it can thus be determined which member of which team the movable robot is, achieving precise identification of the movable robot.

The first image information shown in FIG. 3B contains a blue digit 2 and a red digit 1; since robot B corresponds to image 71 and robot A to image 72, it can be determined that robot B is robot No. 2 of the blue team and robot A is robot No. 1 of the red team.

Step S303: determining the position of the movable robot in the field corresponding to the first image information according to the position information of the LED array in the first image information.

In addition, the server 22 can determine the position of robot A in the field photographed by the photographing device 21 from the position information of the red digit 1 in the first image information; that is, the position of the red digit 1 relative to the first image information represents the position of robot A in the field. Likewise, the server 22 determines the position of robot B in the field from the position information of the blue digit 2 in the first image information; the position of the blue digit 2 relative to the first image information represents the position of robot B in the field.

Step S304: determining the orientation of the movable robot in the field corresponding to the first image information according to the position information of each LED light of the LED array in the first image information.

Each lit LED in the LED array carries two pieces of information, position information and color information, and can therefore be represented as (X, Y, C), where X is the lit LED's horizontal coordinate in the LED array, Y is its vertical coordinate, and C is its color. It is reasonable to assume that the LED arrays carried by the movable robots are all of the same size. The position of each LED of the array in the first image information can then be determined from the position of LED 11 (first row, first column) in the first image information together with each LED's offset relative to LED 11, thereby determining the position and direction, in the first image information, of the figure formed by the lit LEDs.
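The coordinate bookkeeping just described can be sketched as follows: each lit LED is a tuple (X, Y, C) of array column, array row, and color, and its image position is the image position of LED 11 plus the LED's offset scaled by the array's pixel pitch. The pitch constant is an assumption (the text only assumes all robots carry same-sized arrays):

```python
PITCH_PX = 4.0  # assumed image pixels between adjacent LEDs

def led_image_positions(led11_xy, lit_leds):
    """Project array coordinates of lit LEDs into image coordinates.

    led11_xy: image position of LED 11 (row 1, column 1).
    lit_leds: iterable of (X, Y, C) with X, Y counted from 1.
    Returns {(X, Y): (image_x, image_y, C)}.
    """
    x0, y0 = led11_xy
    return {(X, Y): (x0 + (X - 1) * PITCH_PX, y0 + (Y - 1) * PITCH_PX, C)
            for X, Y, C in lit_leds}
```

The direction of the figure formed by these projected positions is what step S304 reads off to obtain the robot's orientation.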
FIG. 3C is a schematic diagram of first image information according to Embodiment 3. As shown in FIG. 3C, the orientation of the movable robot is determined from the direction of the figure formed by the lit LEDs in the LED array. Taking up as north, down as south, left as west and right as east: in FIG. 3C the blue digit 2 points north, indicating that robot B faces north in the field, and the red digit 1 points north, indicating that robot A faces north in the field.

The position of a movable robot in the field may change, and so may its orientation; for example, a movable robot may turn around on the spot and change its facing direction. FIG. 3D is a schematic diagram of first image information according to Embodiment 3. As shown in FIG. 3D, on the basis of FIG. 3C, the position in the first image information of LED 11 (first row, first column) of each LED array has changed, indicating that the direction in the first image information of the figure formed by the lit LEDs of each array has changed: specifically, the blue digit 2 points east and the red digit 1 points west, indicating that robot B faces east in the field and robot A faces west.

In the embodiment of the present invention, the movable robot carries an LED array and image information of the robot carrying the LED array is acquired; the movable robot can be accurately identified from the color and arrangement of the lit LEDs of the array in the image information, and accurately positioned from the position information of the LED array in the image information. In addition, the orientation of the movable robot in the image information can be determined from the direction of the figure formed by the lit LEDs in the LED array.
实施例四
本发明实施例四提供一种可移动机器人识别定位方法。本实施例是在实施例三提供的技术方案的基础上,可移动机器人所处的场地划分为多个子场地,每个子场地的上方对应有一个第一摄像头。图4为本发明实施例四提供的可移动机器人识别定位方法的流程图。如图4所示,本实施例中的方法,可以包括:
步骤S401、通过每个子场地上方的第一摄像头获取所述子场地中所述可移动机器人的第一图像信息;
图4A本发明实施例四提供一种可移动机器人识别定位方法适用的网络结构图。如图4A所示,可移动机器人所处的场地80可划分为6个子场地,如虚线所示的区域,每个子场地的上方对应有一个第一摄像头,例如第一摄像头81-86分别对应一个子场地,各第一摄像头的摄像范围是虚线箭头指向的子场地。上述实施例涉及到的拍摄设备21可以是第一摄像头81-86中的任意一个,上述实施例涉及到的图像信息可以是第一摄像头81-86中的任意一个拍摄的其下方子场地的图像,如图4A所示,每个子场地上有可移动机器人。
另外,在本发明实施例中,LED阵列设置在每个可移动机器人的顶部,各第一摄像头在子场地上方向下拍摄图像时,可直接拍摄到可移动机器人顶部上的LED阵列,从而对可移动机器人进行全场定位。本实施例不限定子场地的划分方法,也不限定子场地的个数。
步骤S402、根据所述第一图像信息对应的所述LED阵列的颜色信息和排布信息,识别所述可移动机器人;
步骤S403、根据所述LED阵列在所述第一图像信息中的位置信息,确定所述可移动机器人在所述第一图像信息对应的场地中的位置;
步骤S404、根据所述LED阵列中各LED灯在所述第一图像信息中的位置信息,确定所述可移动机器人在所述第一图像信息对应的场地中的朝向;
步骤S402-步骤S404分别与上述步骤S302-步骤S304一致,具体方法此处不再赘述。
步骤S405、获取所述可移动机器人承载的第二摄像头拍摄的第二图像信息;
在本发明实施例中,各可移动机器人还承载有第二摄像头,该第二摄像头可以和光源设置在一起,也可以和光源分离设置,即第二摄像头即可设置在可移动机器人的顶部,也可以设置在中部或底部。该第二摄像头用于拍摄可移动机器人周围环境的图像生成第二图像信息。优选的,所述第二图像信息是无压缩的图像信息,第二摄像头连接有无线发送装置,第二摄像头拍摄到周围环境的第二图像信息后直接以无压缩的方式将第二图像信息发送给服务器。若可移动机器人与服务器之间的距离超出了无线发送装置的无线传输距离,则可移动机器人可通过中继设备将第二图像信息发送给服务器。
步骤S406、根据所述第二图像信息,确定所述可移动机器人的周围环境信息;
服务器接收到第二图像信息后,对第二图像信息进行图像处理,确定可移动机器人的周围环境信息,例如可移动机器人的周围是否有障碍物,可移动机器人的周围是否有敌方的机器人,可移动机器人是否跑出了场地外等。
步骤S407、根据所述可移动机器人的周围环境信息,控制所述可移动机器人的运动方向。
服务器接收到可移动机器人的周围环境信息,将可移动机器人的周围环境信息显示在显示屏23,显示屏23还可显示有场地80、划分后的子场地,以及每个子场地上的可移动机器人,用户可通过操作显示屏23上的可移动机器人输入控制指令,服务器将用户输入的控制指令发送给可移动机器人,以控制可移动机器人的运动方向。显示屏23上显示某一机器人的周围有障碍物,则用户通过在显示屏23上操作该机器人,以使该机器人绕过障碍物的方式输入控制指令,服务器将该控制指令发送给场地中的机器人,使场地中的机器人绕过实际的障碍物,即实现了用户远程控制机器人的方法。
步骤S408、接收所述可移动机器人发送的动力电源的电参数信息及/或剩余生命信息;
另外,可移动机器人在场地作战或运动时,动力电源不断的消耗电量,动力电源的电参数信息包括如下至少一种:电流、电压、功率、剩余电量。
可移动机器人可设置有压力传感器,当该压力传感器感测到的压力的大小超过了阈值,说明可移动机器人受到外界的冲击力较大,该可移动机器人可能受到了对方机器人沉重的打击,该可移动机器人内部的处理器根据压力传感器所处的位置、以及压力传感器感测到的压力的大小,确定可移动机器人受到的致命程度,根据致命程度确定可移动机器人的剩余生命信息。
或者可移动机器人可设置有感光材料,各队的机器人可持有红外线光束枪,当感光材料感测到红外线的照射强度或照射时间超出了阈值,说明该可移动机器人被对方机器人所持的红外线光束枪击中了,可移动机器人内部的处理器根据感光材料感测到红外线的部位、红外线的照射强度、以及红外线的照射时间,确定该可移动机器人的受伤程度或致命程度,根据受伤程度或致命程度确定可移动机器人的剩余生命信息。
可移动机器人可将动力电源的电参数信息及/或剩余生命信息实时发送给服务器。
步骤S409、显示所述可移动机器人的环境图像、所述可移动机器人的 位置信息以及所述可移动机器人的状态信息。
服务器22接收到可移动机器人的周围环境信息,将可移动机器人的周围环境信息显示在显示屏23,另外,服务器22还可以将可移动机器人的位置信息以及所述可移动机器人的状态信息显示在显示屏23。
其中,所述可移动机器人的位置信息以及所述可移动机器人的状态信息以嵌入的形式显示在所述所述可移动机器人的环境图像中。
所述可移动机器人的位置信息包括如下至少一种:所述可移动机器人的定位信息,所述可移动机器人的运动轨迹信息。
所述可移动机器人的状态信息包括如下至少一种:所述可移动机器人的标识信息,所述可移动机器人的朝向信息,所述可移动机器人的动力电源的电参数信息,所述可移动机器人的剩余生命信息。
In this embodiment of the present invention, the server receives the second image information captured by the second camera carried by the movable robot. Because the second image information is uncompressed, the transmission delay of the image information is reduced and the server is guaranteed to receive the second image information quickly; the server can also control the movement direction of the movable robot according to the surrounding-environment information in the second image information. In addition, the server is connected to a display, on which it shows the environment image, position information, and state information of the movable robot, so that the user can view the localization information, movement-trajectory information, and state information of the movable robot in real time and thereby remotely control its movement direction.
Embodiment 5
Embodiment 5 of the present invention provides a movable-robot identification and localization system. Fig. 5 is a structural diagram of the movable-robot identification and localization system provided in Embodiment 5 of the present invention. As shown in Fig. 5, the movable-robot identification and localization system 50 includes one or more processors 51, which may work individually or jointly. The processor 51 is configured to: acquire image information of a light source carried by a movable robot; and identify and localize the movable robot according to the image information of the light source.
In this embodiment of the present invention, the movable-robot identification and localization system 50 further includes an image sensor 52 communicatively connected to the processor 51. The image sensor 52 is used to capture the image information of the light source and transmit the image information of the light source to the processor 51.
In addition, the light source includes at least one of the following: a plurality of LED lamps, a fluorescent lamp, and an infrared light source.
The specific principles and implementation of the movable-robot identification and localization system provided in Embodiment 5 of the present invention are similar to those of Embodiment 1 and are not repeated here.
In this embodiment, from the image information of the light source carried by the movable robot, the color and shape of the light source in the image information and the position of the light source relative to the image information are determined. The robot can be identified from the color and shape of the light source in the image information, and the position of the movable robot in the field can be determined from the position of the light source relative to the image information, which improves both the identification accuracy and the localization accuracy of the movable robot.
Embodiment 6
Embodiment 6 of the present invention provides a movable-robot identification and localization system. Building on the technical solution of Embodiment 5, the light source is a plurality of LED lamps, and the image information of the light source includes at least one of the following: color information of the plurality of LED lamps, arrangement information of the plurality of LED lamps, and position information of the plurality of LED lamps.
The processor 51 is specifically configured to identify the movable robot according to the color information and arrangement information of the plurality of LED lamps.
The processor 51 is specifically configured to localize the movable robot according to the position information of the plurality of LED lamps.
The specific principles and implementation of the movable-robot identification and localization system provided in Embodiment 6 of the present invention are similar to those of Embodiment 2 and are not repeated here.
In this embodiment, from the image information of the light source carried by the movable robot, the color and shape of the light source in the image information and the position of the light source relative to the image information are determined. The robot can be identified from the color and shape of the light source in the image information, and the position of the movable robot in the field can be determined from the position of the light source relative to the image information, which improves both the identification accuracy and the localization accuracy of the movable robot.
Embodiment 7
Embodiment 7 of the present invention provides a movable-robot identification and localization system. Building on the technical solution of Embodiment 6, the plurality of LED lamps form an editable LED array.
The image sensor 52 is specifically configured to capture first image information of the movable robot, where the first image information includes the image information of the LED array.
Correspondingly, the processor 51 specifically identifies the movable robot according to the color information and arrangement information of the LED array corresponding to the first image information.
The processor 51 also specifically determines the position of the movable robot in the field corresponding to the first image information according to the position information of the LED array in the first image information.
The processor 51 is further configured to determine the orientation of the movable robot in the field corresponding to the first image information according to the position information of each LED lamp of the LED array in the first image information.
The specific principles and implementation of the movable-robot identification and localization system provided in Embodiment 7 of the present invention are similar to those of Embodiment 3 and are not repeated here.
In this embodiment of the present invention, the movable robot carries an LED array, and image information of the movable robot carrying the LED array is acquired. The movable robot can be accurately identified from the colors and arrangement of the lit LED lamps in the LED array in the image information, and accurately localized from the position information of the LED array in the image information. In addition, the orientation of the movable robot in the image information can be determined from the direction of the pattern formed by the lit LED lamps in the LED array.
Embodiment 8
Embodiment 8 of the present invention provides a movable-robot identification and localization system. Building on the technical solution of Embodiment 7, the field is divided into a plurality of sub-fields, and above each sub-field there is a corresponding first camera; correspondingly, the processor 51 acquires the first image information of the movable robot in each sub-field through the first camera above that sub-field.
Fig. 6 is a structural diagram of the movable-robot identification and localization system provided in Embodiment 8 of the present invention. As shown in Fig. 6, the movable-robot identification and localization system 50 further includes a wireless receiving device 53 communicatively connected to the processor 51. The wireless receiving device 53 is used to receive the second image information captured and sent by the second camera carried by the movable robot.
The processor 51 is further configured to determine the surrounding-environment information of the movable robot according to the second image information, and to generate, according to the surrounding-environment information of the movable robot, a control command for controlling the movement direction of the movable robot.
As shown in Fig. 6, the movable-robot identification and localization system 50 further includes a wireless transmitting device 54 communicatively connected to the processor 51. The wireless transmitting device 54 is used to send the control command to the movable robot, so that the movable robot changes its movement direction according to the control command.
Further, the second image information is uncompressed image information.
The wireless receiving device 53 is further used to receive the electrical-parameter information of the power supply and/or the remaining-life information sent by the movable robot.
As shown in Fig. 6, the movable-robot identification and localization system 50 further includes a display 55 communicatively connected to the processor 51. The display 55 is used to display the environment image of the movable robot, the position information of the movable robot, and the state information of the movable robot.
Further, the position information of the movable robot and the state information of the movable robot are displayed embedded in the environment image of the movable robot.
The position information of the movable robot includes at least one of the following: localization information of the movable robot, and movement-trajectory information of the movable robot.
The state information of the movable robot includes at least one of the following: identification information of the movable robot, orientation information of the movable robot, electrical-parameter information of the power supply of the movable robot, and remaining-life information of the movable robot.
The specific principles and implementation of the movable-robot identification and localization system provided in Embodiment 8 of the present invention are similar to those of Embodiment 4 and are not repeated here.
In this embodiment of the present invention, the server receives the second image information captured by the second camera carried by the movable robot. Because the second image information is uncompressed, the transmission delay of the image information is reduced and the server is guaranteed to receive the second image information quickly; the server can also control the movement direction of the movable robot according to the surrounding-environment information in the second image information. In addition, the server is connected to a display, on which it shows the environment image, position information, and state information of the movable robot, so that the user can view the localization information, movement-trajectory information, and state information of the movable robot in real time and thereby remotely control its movement direction.
Embodiment 9
Embodiment 9 of the present invention provides a camera. Fig. 7 is a structural diagram of the camera provided in Embodiment 9 of the present invention. As shown in Fig. 7, the camera is provided with a lens module 89 and further includes: a housing 90, a plurality of LED lamps 91, and a controller 92. A lamp window 93 is provided on the outer surface of the housing 90; the plurality of LED lamps 91 are mounted inside the housing 90, and the light they emit passes through the lamp window 93; the controller 92 is electrically connected to the plurality of LED lamps 91; the controller 92 drives the LED lamps to emit light and controls the working state of the plurality of LED lamps 91.
The working state includes at least one of the following: the emission color of the LED lamps, and the arrangement of the lit LED lamps.
The LED lamps are RGB LED lamps, and the controller controls the emission color of the RGB LED lamps.
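The controller behavior described above can be sketched as follows; the 4 x 4 array size, the frame encoding, and the "L"-shaped signature are illustrative assumptions rather than details of this disclosure.

```python
# The working state of the editable LED array: each lit cell carries an
# RGB emission color, and the set of lit cells forms the arrangement
# pattern that serves as the robot's visual signature.

def set_working_state(rows, cols, lit_cells, rgb):
    """Return a rows x cols frame: lit cells hold the (r, g, b) color,
    the remaining cells are off (None)."""
    return [[rgb if (r, c) in lit_cells else None
             for c in range(cols)] for r in range(rows)]

# An "L"-shaped pattern in red, as one possible signature:
frame = set_working_state(
    4, 4,
    {(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)},
    (255, 0, 0),
)
```

Changing the color or the lit-cell set yields a different (color, arrangement) pair, which is what the identification step matches against.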
The camera provided in this embodiment of the present invention is provided with a plurality of LED lamps; the camera's controller drives the LED lamps to emit light and controls their working state so that the plurality of LED lamps emit light in a preset pattern. By recognizing the image information of the preset pattern, the movable robot on which the camera is mounted can be identified conveniently. Moreover, as a module of the identification system, the camera is easy to install and remove.
Embodiment 10
Embodiment 10 of the present invention provides a camera. Fig. 8 is a structural diagram of the camera provided in Embodiment 10 of the present invention. As shown in Fig. 8, on the basis of the embodiment shown in Fig. 7, the camera further includes an image transmission device 94 connected to the lens module 89. The image transmission device 94 is used to transmit the raw images captured by the lens module 89 directly in uncompressed form.
In addition, the internal cavity of the housing 90 is partitioned into a lamp chamber 100 and a lens chamber 101; the plurality of LED lamps 91 are mounted in the lamp chamber 100, and the lens module 89 is mounted in the lens chamber 101.
By transmitting the raw images captured by the lens module directly in uncompressed form, this embodiment of the present invention saves the time spent on image compression and decompression, reduces the image-transmission delay, and improves the image-transmission efficiency.
Embodiment 11
Embodiment 11 of the present invention provides a camera. Fig. 9A is an exploded view of the camera provided in Embodiment 11 of the present invention. As shown in Fig. 9A, the camera includes a top cover 43, an upper cover 37, and a base 38; the upper cover 37 lies between the top cover 43 and the base 38; the upper cover 37 and the top cover 43 fit together to form the lamp chamber 100 shown in Fig. 8, and the upper cover 37 and the base 38 fit together to form the lens chamber 101 shown in Fig. 8.
As shown in Fig. 9A, the lens protection glass 30 and the lens 31 form the lens module 89 of Fig. 7 or Fig. 8; the LED lamp windows 32 are the lamp windows 93 of Fig. 7 or Fig. 8, with one LED lamp 33 corresponding to one LED lamp window 32, and the plurality of LED lamps forming an editable LED array; the image-transmission board 34 is the image transmission device 94 of Fig. 8; the connecting wire 35 may be used to fasten the camera to the body of the movable robot; the cable-slot clamping plate 36 may be used to connect the upper cover 37 and the base 38; the camera board 39 may be used to fix the lens 31; the 5.8 GHz antenna 40 may be used to wirelessly transmit image information, the position information and state information of the movable robot, and the like to the server; the power board 41 may be used to supply power to the camera; and the fan 42 may be used to cool the LED lamps 33 so that they do not burn out under prolonged emission, which helps identify and localize the movable robot precisely.
Fig. 9B is a left view of the camera provided in Embodiment 11 of the present invention; as shown in Fig. 9B, the left view of the camera includes the lens protection glass 30 shown in Fig. 9A. Fig. 9C is a front view of the camera provided in Embodiment 11 of the present invention; Fig. 9D is a top view of the camera provided in Embodiment 11 of the present invention; and Fig. 9E is an axonometric view of the camera provided in Embodiment 11 of the present invention.
In this embodiment of the present invention, the internal cavity of the camera housing is divided into a lamp chamber and a lens chamber; the plurality of LED lamps are mounted in the lamp chamber and the lens module is mounted in the lens chamber. Placing the plurality of LEDs and the lens module separately prevents interference between the light at the lens module and the light emitted by the plurality of LEDs, further improving the accuracy of identifying the movable robot.
Embodiment 12
Embodiment 12 of the present invention provides a movable robot. Fig. 10 is a structural diagram of the movable robot provided in Embodiment 12 of the present invention. In this embodiment, the movable robot is described taking a remote-controlled chassis vehicle as an example.
As shown in Fig. 10, the movable robot 1 includes a body 1003, a moving device 1001, and a camera 1002. The moving device 1001 is connected to the body and provides the power for moving the body; the camera 1002 is mounted on the top of the body and may specifically be the camera described in any one of Embodiment 9, Embodiment 10, or Embodiment 11. As shown in Fig. 7, the camera is provided with a lens module 89 and further includes: a housing 90, a plurality of LED lamps 91, and a controller 92. A lamp window 93 is provided on the outer surface of the housing 90; the plurality of LED lamps 91 are mounted inside the housing 90, and the light they emit passes through the lamp window 93; the controller 92 is electrically connected to the plurality of LED lamps 91; the controller 92 drives the LED lamps to emit light and controls the working state of the plurality of LED lamps 91.
The working state includes at least one of the following: the emission color of the LED lamps, and the arrangement of the lit LED lamps.
The LED lamps are RGB LED lamps, and the controller controls the emission color of the RGB LED lamps.
As shown in Fig. 8, the camera further includes an image transmission device 94 connected to the lens module 89; the image transmission device 94 is used to transmit the raw images captured by the lens module 89 directly in uncompressed form.
In addition, the internal cavity of the housing 90 is partitioned into a lamp chamber 100 and a lens chamber 101; the plurality of LED lamps 91 are mounted in the lamp chamber 100, and the lens module 89 is mounted in the lens chamber 101.
As shown in Fig. 9A, the camera includes a top cover, an upper cover, and a base; the upper cover lies between the top cover and the base; the upper cover and the top cover fit together to form the lamp chamber 100 shown in Fig. 8, and the upper cover and the base fit together to form the lens chamber 101 shown in Fig. 8.
As shown in Fig. 9A, the lens protection glass 30 and the lens 31 form the lens module 89 of Fig. 7 or Fig. 8; the LED lamp windows 32 are the lamp windows 93 of Fig. 7 or Fig. 8, with one LED lamp 33 corresponding to one LED lamp window 32, and the plurality of LED lamps forming an editable LED array; the image-transmission board 34 is the image transmission device 94 of Fig. 8; the connecting wire 35 may be used to fasten the camera to the body of the movable robot 1; the cable-slot clamping plate 36 may be used to connect the upper cover 37 and the base 38; the camera board 39 may be used to fix the lens 31; the 5.8 GHz antenna 40 may be used to wirelessly transmit image information, the position information and state information of the movable robot 1, and the like to the server; the power board 41 may be used to supply power to the camera; and the fan 42 may be used to cool the LED lamps 33 so that they do not burn out under prolonged emission, which helps identify and localize the movable robot 1 precisely.
In this embodiment, from the image information of the light source carried by the movable robot 1, the color and shape of the light source in the image information and the position of the light source relative to the image information are determined. The robot can be identified from the color and shape of the light source in the image information, and the position of the movable robot 1 in the field can be determined from the position of the light source relative to the image information, which improves both the identification accuracy and the localization accuracy of the movable robot 1.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the functional modules above is used as an example; in practical applications, the functions above may be assigned to different functional modules as needed, i.e., the internal structure of the apparatus may be divided into different functional modules to accomplish all or some of the functions described above. For the specific working process of the apparatus described above, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some or all of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (53)

  1. A movable-robot identification and localization method, comprising:
    acquiring image information of a light source carried by a movable robot; and
    identifying and localizing the movable robot according to the image information of the light source.
  2. The method according to claim 1, wherein the light source comprises at least one of the following: a plurality of LED lamps, a fluorescent lamp, and an infrared light source.
  3. The method according to claim 2, wherein the light source is a plurality of LED lamps, and the image information of the light source comprises at least one of the following: color information of the plurality of LED lamps, arrangement information of the plurality of LED lamps, and position information of the plurality of LED lamps.
  4. The method according to claim 3, wherein the identifying and localizing the movable robot according to the image information of the light source comprises:
    identifying the movable robot according to the color information and arrangement information of the plurality of LED lamps.
  5. The method according to claim 3, wherein the identifying and localizing the movable robot according to the image information of the light source comprises:
    localizing the movable robot according to the position information of the plurality of LED lamps.
  6. The method according to claim 4 or 5, wherein the plurality of LED lamps form an editable LED array.
  7. The method according to claim 6, wherein the acquiring image information of a light source carried by a movable robot comprises:
    acquiring first image information of the movable robot, wherein the first image information comprises the image information of the LED array;
    correspondingly, the identifying the movable robot according to the color information and arrangement information of the plurality of LED lamps comprises:
    identifying the movable robot according to the color information and arrangement information of the LED array corresponding to the first image information.
  8. The method according to claim 7, wherein the localizing the movable robot according to the position information of the plurality of LED lamps comprises:
    determining the position of the movable robot in the field corresponding to the first image information according to the position information of the LED array in the first image information.
  9. The method according to claim 8, further comprising, after the determining the position of the movable robot in the field corresponding to the first image information according to the position information of the LED array in the first image information:
    determining the orientation of the movable robot in the field corresponding to the first image information according to the position information of each LED lamp of the LED array in the first image information.
  10. The method according to claim 8 or 9, wherein the field is divided into a plurality of sub-fields, and above each sub-field there is a corresponding first camera;
    the acquiring first image information of the movable robot comprises:
    acquiring, through the first camera above each sub-field, the first image information of the movable robot in that sub-field.
  11. The method according to any one of claims 1-5, 8 and 9, further comprising:
    acquiring second image information captured by a second camera carried by the movable robot;
    determining surrounding-environment information of the movable robot according to the second image information; and
    controlling a movement direction of the movable robot according to the surrounding-environment information of the movable robot.
  12. The method according to claim 11, wherein the second image information is uncompressed image information.
  13. The method according to claim 12, further comprising:
    receiving electrical-parameter information of a power supply and/or remaining-life information sent by the movable robot.
  14. The method according to claim 13, further comprising:
    displaying an environment image of the movable robot, position information of the movable robot, and state information of the movable robot.
  15. The method according to claim 14, wherein the position information of the movable robot and the state information of the movable robot are displayed embedded in the environment image of the movable robot.
  16. The method according to claim 15, wherein the position information of the movable robot comprises at least one of the following: localization information of the movable robot, and movement-trajectory information of the movable robot.
  17. The method according to claim 16, wherein the state information of the movable robot comprises at least one of the following: identification information of the movable robot, orientation information of the movable robot, electrical-parameter information of the power supply of the movable robot, and remaining-life information of the movable robot.
  18. A movable-robot identification and localization system, comprising one or more processors working individually or jointly, the processor being configured to:
    acquire image information of a light source carried by a movable robot; and
    identify and localize the movable robot according to the image information of the light source.
  19. The movable-robot identification and localization system according to claim 18, further comprising:
    an image sensor communicatively connected to the processor, the image sensor being configured to capture the image information of the light source and transmit the image information of the light source to the processor.
  20. The movable-robot identification and localization system according to claim 19, wherein the light source comprises at least one of the following: a plurality of LED lamps, a fluorescent lamp, and an infrared light source.
  21. The movable-robot identification and localization system according to claim 20, wherein the light source is a plurality of LED lamps, and the image information of the light source comprises at least one of the following: color information of the plurality of LED lamps, arrangement information of the plurality of LED lamps, and position information of the plurality of LED lamps.
  22. The movable-robot identification and localization system according to claim 21, wherein the processor is specifically configured to identify the movable robot according to the color information and arrangement information of the plurality of LED lamps.
  23. The movable-robot identification and localization system according to claim 21, wherein the processor is specifically configured to localize the movable robot according to the position information of the plurality of LED lamps.
  24. The movable-robot identification and localization system according to claim 22 or 23, wherein the plurality of LED lamps form an editable LED array.
  25. The movable-robot identification and localization system according to claim 24, wherein the image sensor is specifically configured to capture first image information of the movable robot, the first image information comprising the image information of the LED array;
    correspondingly, the processor specifically identifies the movable robot according to the color information and arrangement information of the LED array corresponding to the first image information.
  26. The movable-robot identification and localization system according to claim 25, wherein the processor further specifically determines the position of the movable robot in the field corresponding to the first image information according to the position information of the LED array in the first image information.
  27. The movable-robot identification and localization system according to claim 26, wherein the processor is further configured to determine the orientation of the movable robot in the field corresponding to the first image information according to the position information of each LED lamp of the LED array in the first image information.
  28. The movable-robot identification and localization system according to claim 26 or 27, wherein the field is divided into a plurality of sub-fields, and above each sub-field there is a corresponding first camera;
    correspondingly, the processor acquires, through the first camera above each sub-field, the first image information of the movable robot in that sub-field.
  29. The movable-robot identification and localization system according to any one of claims 18-23, 26 and 27, further comprising:
    a wireless receiving device communicatively connected to the processor, the wireless receiving device being configured to receive second image information captured and sent by a second camera carried by the movable robot;
    the processor being further configured to determine surrounding-environment information of the movable robot according to the second image information, and to generate, according to the surrounding-environment information of the movable robot, a control command for controlling a movement direction of the movable robot.
  30. The movable-robot identification and localization system according to claim 29, further comprising:
    a wireless transmitting device communicatively connected to the processor, the wireless transmitting device being configured to send the control command to the movable robot, so that the movable robot changes its movement direction according to the control command.
  31. The movable-robot identification and localization system according to claim 30, wherein the second image information is uncompressed image information.
  32. The movable-robot identification and localization system according to claim 31, wherein the wireless receiving device is further configured to receive electrical-parameter information of a power supply and/or remaining-life information sent by the movable robot.
  33. The movable-robot identification and localization system according to claim 32, further comprising:
    a display communicatively connected to the processor, the display being configured to display an environment image of the movable robot, position information of the movable robot, and state information of the movable robot.
  34. The movable-robot identification and localization system according to claim 33, wherein the position information of the movable robot and the state information of the movable robot are displayed embedded in the environment image of the movable robot.
  35. The movable-robot identification and localization system according to claim 34, wherein the position information of the movable robot comprises at least one of the following: localization information of the movable robot, and movement-trajectory information of the movable robot.
  36. The movable-robot identification and localization system according to claim 35, wherein the state information of the movable robot comprises at least one of the following: identification information of the movable robot, orientation information of the movable robot, electrical-parameter information of the power supply of the movable robot, and remaining-life information of the movable robot.
  37. A camera provided with a lens module, the camera further comprising:
    a housing, an outer surface of which is provided with lamp windows;
    a plurality of LED lamps mounted inside the housing, the light emitted by the LED lamps passing through the lamp windows; and
    a controller electrically connected to the plurality of LED lamps;
    wherein the controller drives the LED lamps to emit light and controls a working state of the plurality of LED lamps.
  38. The camera according to claim 37, wherein the working state comprises at least one of the following: an emission color of the LED lamps, and an arrangement of the lit LED lamps.
  39. The camera according to claim 38, wherein the LED lamps are RGB LED lamps, and the controller controls the emission color of the RGB LED lamps.
  40. The camera according to claim 39, further comprising:
    an image transmission device connected to the lens module, the image transmission device being configured to transmit raw images captured by the lens module directly in uncompressed form.
  41. The camera according to claim 40, wherein an internal cavity of the housing is partitioned into a lamp chamber and a lens chamber, the plurality of LED lamps are mounted in the lamp chamber, and the lens module is mounted in the lens chamber.
  42. The camera according to claim 41, wherein the housing comprises a top cover, an upper cover, and a base, the upper cover lying between the top cover and the base, the upper cover and the top cover fitting together to form the lamp chamber, and the upper cover and the base fitting together to form the lens chamber.
  43. The camera according to any one of claims 37-42, wherein one LED lamp corresponds to one lamp window.
  44. The camera according to any one of claims 37-42, wherein the plurality of LED lamps form an editable LED array.
  45. A movable robot, comprising:
    a body;
    a moving device connected to the body and configured to provide power for moving the body; and
    a camera mounted on a top of the body, the camera being provided with a lens module and further comprising:
    a housing, an outer surface of which is provided with lamp windows;
    a plurality of LED lamps mounted inside the housing, the light emitted by the LED lamps passing through the lamp windows; and
    a controller electrically connected to the plurality of LED lamps;
    wherein the controller drives the LED lamps to emit light and controls a working state of the plurality of LED lamps.
  46. The movable robot according to claim 45, wherein the working state comprises at least one of the following: an emission color of the LED lamps, and an arrangement of the lit LED lamps.
  47. The movable robot according to claim 46, wherein the LED lamps are RGB LED lamps, and the controller controls the emission color of the RGB LED lamps.
  48. The movable robot according to claim 47, wherein the camera further comprises an image transmission device connected to the lens module, the image transmission device being configured to transmit raw images captured by the lens module directly in uncompressed form.
  49. The movable robot according to claim 48, wherein an internal cavity of the housing is partitioned into a lamp chamber and a lens chamber, the plurality of LED lamps are mounted in the lamp chamber, and the lens module is mounted in the lens chamber.
  50. The movable robot according to claim 49, wherein the housing comprises a top cover, an upper cover, and a base, the upper cover lying between the top cover and the base, the upper cover and the top cover fitting together to form the lamp chamber, and the upper cover and the base fitting together to form the lens chamber.
  51. The movable robot according to any one of claims 45-50, wherein one LED lamp corresponds to one lamp window.
  52. The movable robot according to any one of claims 45-50, wherein the plurality of LED lamps form an editable LED array.
  53. The movable robot according to any one of claims 45-50, wherein the movable robot is a remote-controlled chassis vehicle.
PCT/CN2016/085140 2016-06-07 2016-06-07 Movable-robot identification and localization method, apparatus and system, and movable robot WO2017210866A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680002465.8A CN107076557A (zh) 2016-06-07 2016-06-07 Movable-robot identification and localization method, apparatus and system, and movable robot
PCT/CN2016/085140 WO2017210866A1 (zh) 2016-06-07 2016-06-07 Movable-robot identification and localization method, apparatus and system, and movable robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/085140 WO2017210866A1 (zh) 2016-06-07 2016-06-07 Movable-robot identification and localization method, apparatus and system, and movable robot

Publications (1)

Publication Number Publication Date
WO2017210866A1 true WO2017210866A1 (zh) 2017-12-14

Family

ID=59623424

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/085140 WO2017210866A1 (zh) 2016-06-07 2016-06-07 Movable-robot identification and localization method, apparatus and system, and movable robot

Country Status (2)

Country Link
CN (1) CN107076557A (zh)
WO (1) WO2017210866A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109008806A (zh) * 2018-06-25 2018-12-18 东莞市光劲光电有限公司 Sweeping-robot localization system and method based on LED smart-lamp positioning

Families Citing this family (6)

Publication number Priority date Publication date Assignee Title
CN107967437B (zh) * 2017-11-17 2021-04-20 深圳市易成自动驾驶技术有限公司 Image processing method and apparatus, and computer-readable storage medium
CN107958144A (zh) * 2017-12-18 2018-04-24 王军 Unmanned aerial vehicle identity recognition system, recognition method, and control apparatus
WO2019153345A1 (zh) * 2018-02-12 2019-08-15 深圳前海达闼云端智能科技有限公司 Environment information determination method and apparatus, robot, and storage medium
JP6981332B2 (ja) * 2018-03-23 2021-12-15 トヨタ自動車株式会社 Mobile body
CN110352117A (zh) * 2018-04-25 2019-10-18 深圳市大疆创新科技有限公司 Smart competition field and system, system server, robot, and control method
CN111844041B (zh) * 2020-07-23 2021-11-09 上海商汤临港智能科技有限公司 Positioning assistance apparatus, robot, and visual positioning system

Citations (5)

Publication number Priority date Publication date Assignee Title
US20090048772A1 (en) * 2007-08-13 2009-02-19 Samsung Electronics Co., Ltd. Method and apparatus for searching target location
CN103970134A (zh) * 2014-04-16 2014-08-06 江苏科技大学 Multi-mobile-robot cooperative experiment platform and its visual segmentation and localization method
CN105527960A (zh) * 2015-12-18 2016-04-27 燕山大学 Leader-follower-based mobile robot formation control method
CN105573316A (zh) * 2015-12-01 2016-05-11 武汉科技大学 Autonomously forming mobile swarm robot
CN205660739U (zh) * 2016-06-07 2016-10-26 深圳市大疆创新科技有限公司 Camera for a movable robot, and movable robot

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
US6385506B1 (en) * 1999-03-24 2002-05-07 Sony Corporation Robot
CN2613661Y (zh) * 2003-03-17 2004-04-28 上海金星报警器材厂 LED guidance sign lamp
CN101537618B (zh) * 2008-12-19 2010-11-17 北京理工大学 Vision system for a ball-picking robot in a stadium
CN102306026B (zh) * 2010-08-02 2013-02-20 重庆大学 Perception system for a soccer robot
CN102542294A (zh) * 2011-12-29 2012-07-04 河海大学常州校区 Centralized soccer-robot identification system and method based on dual-vision information fusion
CN102661796B (zh) * 2012-04-17 2014-11-05 中北大学 Active photoelectric identification method using a MEMS infrared source array
CN103413450B (zh) * 2013-08-12 2016-01-20 成都谱视科技有限公司 Positioning apparatus
CN103837147B (zh) * 2014-03-13 2017-01-18 北京理工大学 Active infrared dot-matrix artificial landmark and agent localization system and method
CN105352483A (zh) * 2015-12-24 2016-02-24 吉林大学 LED-array-based vehicle-body pose parameter detection system

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US20090048772A1 (en) * 2007-08-13 2009-02-19 Samsung Electronics Co., Ltd. Method and apparatus for searching target location
CN103970134A (zh) * 2014-04-16 2014-08-06 江苏科技大学 Multi-mobile-robot cooperative experiment platform and its visual segmentation and localization method
CN105573316A (zh) * 2015-12-01 2016-05-11 武汉科技大学 Autonomously forming mobile swarm robot
CN105527960A (zh) * 2015-12-18 2016-04-27 燕山大学 Leader-follower-based mobile robot formation control method
CN205660739U (zh) * 2016-06-07 2016-10-26 深圳市大疆创新科技有限公司 Camera for a movable robot, and movable robot

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN109008806A (zh) * 2018-06-25 2018-12-18 东莞市光劲光电有限公司 Sweeping-robot localization system and method based on LED smart-lamp positioning
CN109008806B (zh) * 2018-06-25 2023-06-20 东莞市光劲光电有限公司 Sweeping-robot localization system and method based on LED smart-lamp positioning

Also Published As

Publication number Publication date
CN107076557A (zh) 2017-08-18

CN216448707U (zh) 激光电子靶及打靶系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16904319

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16904319

Country of ref document: EP

Kind code of ref document: A1