CN116710975A - Method for providing navigation data for controlling a robot, method and device for manufacturing at least one predefined point-symmetrical area

Publication number: CN116710975A
Application number: CN202180090220.6A
Authority: CN (China)
Prior art keywords: symmetry, pattern, image, center, point
Legal status: Pending
Other languages: Chinese (zh)
Inventor: S·西蒙
Original and current assignee: Robert Bosch GmbH
Application filed by Robert Bosch GmbH

Classifications

    • G06T 7/529 - Image analysis: depth or shape recovery from texture
    • G06T 7/33 - Image analysis: image registration using feature-based methods
    • G06T 7/68 - Image analysis: analysis of geometric attributes of symmetry
    • G01B 11/16 - Optical measurement of deformation in a solid, e.g. optical strain gauge
    • G06T 2207/20021 - Dividing image into blocks, subimages or windows
    • G06T 2207/20112 - Image segmentation details
    • G06T 2207/20164 - Salient point detection; corner detection
    • G06T 2207/30108 - Industrial image inspection
    • G06T 2207/30124 - Fabrics; textile; paper
    • G06T 2207/30164 - Workpiece; machine component
    • G06T 2207/30204 - Marker
    • G06V 10/225 - Image preprocessing by selection of a specific region based on a marking or identifier characterising the area
    • G06V 10/245 - Aligning, centring, orientation detection or correction of the image by locating a pattern; special marks for positioning

Abstract

The invention relates to a method for providing navigation data (135) for controlling a robot. The method comprises a step of reading in image data (105), provided by means of a camera (102), from the camera (102), the image data (105) representing a camera image of at least one predefined point-symmetric region (110); a step of determining at least one center of symmetry (112) of the at least one point-symmetric region (110) using the image data (105) and a determination rule (128); and a step of comparing the position of the center of symmetry (112) in the camera image with a predefined position of a reference center of symmetry in a reference image (115) in order to determine a positional deviation (131) between the center of symmetry (112) and the reference center of symmetry. Optionally, the positional deviation (131) is additionally used to determine a displacement vector (133) of at least a subset of pixels of the camera image with respect to corresponding pixels of the reference image (115). The navigation data (135) are provided using the positional deviation (131) and/or the displacement vector (133).

Description

Method for providing navigation data for controlling a robot, method and device for manufacturing at least one predefined point-symmetrical area
Technical Field
The invention is based on a device or a method of the kind according to the independent claims. Computer programs are also the subject of the present invention.
Background
In the field of visual robotic control (visual servoing), it may be particularly important to efficiently and accurately determine the pose of a robot or the position and orientation of a robot based on image data.
The post-published DE 10 2020 202 160 A1 discloses a method for determining symmetry properties in image data and a method for controlling a function.
Disclosure of Invention
Against this background, a method according to the main claim, as well as a device using the method, and a corresponding computer program are proposed by the solutions presented herein. Advantageous extensions and improvements of the device described in the independent claims can be made by the measures listed in the dependent claims.
According to an embodiment, the following fact in particular may be exploited: points or objects in the world are marked, or are to be marked, by means of point-symmetric regions, so that a system comprising an imaging sensor and the suitable method proposed here can detect and localize these point-symmetric regions with high accuracy in order to perform specific technical functions robustly and with local precision, optionally without humans or other living beings perceiving such markings as a disturbance.
For example, it may happen that a symmetric region is not completely imaged into the camera image, for instance because the symmetric region is partially occluded by an object, because it partially protrudes beyond the image, or because the pattern has been cropped. Advantageously, the localization accuracy of the center of point symmetry can nevertheless be maintained, since partial occlusion does not distort its position: the remaining point-symmetric pairs still vote for the correct center of symmetry. Partial occlusion can only reduce the strength of the peak in the voting matrix or the like, but the location of the center of symmetry is preserved and can still be determined accurately and simply. This is a particular advantage of exploiting point symmetry.
A further advantage when finding regions or patterns based on point symmetry results in particular from the fact that point symmetry is invariant with respect to rotation between the point-symmetric region and the camera or image recording, and largely invariant with respect to the viewing angle. For example, point symmetry in a plane is invariant with respect to affine imaging, and the imaging of arbitrarily oriented planes by a real camera is at least locally always well approximated by affine imaging. If, for example, a circular point-symmetric region is observed at an oblique angle, the circular shape becomes an ellipse in which the point-symmetric property and the center of point symmetry are preserved. The at least one point-symmetric region therefore does not necessarily have to be observed from a frontal view; even a very oblique view causes no difficulties, and the achievable accuracy is maintained. This invariance, in particular with respect to rotation and viewing angle, makes it possible to dispense with precautions for suitably aligning the camera with the symmetric region, or vice versa. Instead, it can be sufficient that a corresponding point-symmetric region is at least partially captured in the camera image so that it can be detected. The relative positional relationship or arrangement between the point-symmetric region and the camera is in this case of little or no importance.
A method for providing navigation data for controlling a robot is proposed, wherein the method has the steps of:
reading in image data provided by means of a camera from an interface to the camera, wherein the image data represent camera images of at least one predefined even and/or odd point symmetric region in the environment of the camera;
determining at least one symmetry center of at least one even and/or odd point symmetry region using the image data and a determination rule;
comparing the position of the at least one center of symmetry in the camera image with a predefined position of at least one reference center of symmetry in a reference image with respect to a reference coordinate system to determine a positional deviation between the center of symmetry and the reference center of symmetry; and/or
determining, using the positional deviation, displacement information of at least a subset of pixels of the camera image relative to corresponding pixels of the reference image, wherein the positional deviation and/or the displacement information is used to provide the navigation data.
The method may be implemented, for example, in software or hardware or in a hybrid form of software and hardware, for example in a control unit or device. In the reading step, image data from a plurality of cameras may also be read in, the image data then representing a plurality of camera images of the at least one region. Correspondingly, a plurality of reference images may be used in the comparing step. The reference image may also be replaced by reference data that correspond, at least in part, to the information obtainable from the reference image. Working with reference data can be advantageous, in particular in the sense of reduced effort, when the information extractable from the reference image is already available in a more readily usable form as reference data. The reference data may represent the reference image in a compressed form or representation, for example as a descriptor image, a signature image and/or a list of the coordinates and types of all existing centers of symmetry. The at least one predefined point-symmetric region can be manufactured by executing a variant of the manufacturing method described below. The determination rule may be similar or correspond to the procedure disclosed in the applicant's post-published DE 10 2020 202 160 A1. The reference image may represent the at least one predefined point-symmetric region. The optional step of finding the displacement information may be performed using optical flow, in particular dense optical flow. Controlling the robot may mean controlling or adjusting a pose, position, location or orientation of the robot with respect to a reference point or reference object that is marked with the at least one predefined point-symmetric region. A control signal may be used to actuate at least one actuator of the robot in order to control it. The displacement information may represent a displacement vector or absolute coordinates.
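Purely as an illustration, the chain of steps can be expressed in code form as follows; the detector callback standing in for the determination rule, the assumed one-to-one ordering of detected and reference centers, and all names are illustrative assumptions rather than part of the claimed method.

```python
import numpy as np

def provide_navigation_data(camera_image: np.ndarray,
                            reference_centers: np.ndarray,
                            detect_symmetry_centers) -> dict:
    """Sketch of the providing method: determine centers, compare, provide."""
    # determining step: apply the determination rule (supplied as a callback)
    centers = detect_symmetry_centers(camera_image)   # (N, 2) pixel coordinates
    # comparing step: positional deviation against the reference positions,
    # assuming detected and reference centers are matched one-to-one
    deviation = centers - reference_centers           # (N, 2) per-center offsets
    # navigation data from the positional deviation (displacement information
    # could optionally be added here, e.g. via dense optical flow)
    return {"positional_deviation": deviation.mean(axis=0)}
```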
According to an embodiment, the determination rule used in the determining step may be structured such that a signature is generated for a plurality of pixels of at least one section of the camera image, in order to obtain a plurality of signatures. Each signature is generated using a descriptor with a plurality of different filters, each filter being of at least one symmetry type, and each signature having one sign per filter of the descriptor. The determination rule may further be structured such that, for a signature, at least one mirror signature is determined for at least one symmetry type of the filters. The determination rule may also be structured such that it is checked whether, for a pixel with a signature, at least one further pixel exists in a search area in the surroundings of that pixel whose signature corresponds to the at least one mirror signature, in order to determine, when such a further pixel exists, the pixel coordinates of at least one symmetric signature pair from the pixel and the further pixel. In addition, the determination rule may be structured such that the pixel coordinates of the at least one symmetric signature pair are evaluated in order to identify the at least one center of symmetry. The descriptor describes the image content in a local environment around a pixel or reference pixel in a compact form. The signature may represent the value of a descriptor describing the pixel, for example in binary form. The at least one mirror signature may thus be determined using a plurality of computed signature images, for example one signature image with normal filters, one with even point-mirrored filters and one with odd point-mirrored filters. Additionally or alternatively, at least one reflector may be applied to the signs of one of the signatures in order to derive the at least one mirror signature. Each reflector may have a rule, specific to the symmetry type and to the filter of the descriptor, for modifying the signs. The search area may depend on at least one of the reflectors used. Such an embodiment offers the advantage of enabling efficient and accurate detection of symmetry properties in image data; symmetry detection in the image can be achieved with minimal effort.
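The following simplified sketch illustrates the principle with an assumed four-filter signature and the even-symmetry reflector (negated filter taps): pixels whose normal signature matches another pixel's mirror signature form a pair and vote for their midpoint in a voting matrix. The filter taps, the 4-bit signature width and the plain-Python lookup table are assumptions for readability; a real implementation would use far more filters and optimized data structures.

```python
import numpy as np
from collections import defaultdict

# Four assumed filter taps; each signature bit compares a pixel with one
# shifted neighbour (border wrap-around of np.roll ignored for brevity).
OFFSETS = np.array([(-3, 1), (-1, -4), (2, 3), (4, -2)])

def signature_image(img: np.ndarray, offsets: np.ndarray) -> np.ndarray:
    sig = np.zeros(img.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        sig |= (img > shifted).astype(np.uint8) << bit   # one sign per filter
    return sig

def find_even_center(img: np.ndarray, radius: int = 32):
    sig = signature_image(img, OFFSETS)    # normal signatures
    mir = signature_image(img, -OFFSETS)   # mirror signatures: negated taps act
                                           # as the even-symmetry reflector; the
                                           # odd reflector would additionally
                                           # flip all sign bits: (~mir) & 0x0F
    buckets = defaultdict(list)            # lookup table: signature -> pixels
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            buckets[int(sig[y, x])].append((y, x))
    votes = np.zeros((h, w), dtype=np.int32)              # voting matrix
    for y in range(h):
        for x in range(w):
            for qy, qx in buckets[int(mir[y, x])]:        # candidate partners
                if (qy, qx) != (y, x) and abs(qy - y) <= radius \
                        and abs(qx - x) <= radius:
                    votes[(y + qy) // 2, (x + qx) // 2] += 1  # midpoint vote
    return np.unravel_index(np.argmax(votes), votes.shape), votes
```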
In this case, in the determining step, a transformation rule for transforming the pixel coordinates of the center of symmetry and/or of the at least one predefined even and/or odd point-symmetric region may be determined for each determined center of symmetry, using the pixel coordinates of each symmetric signature pair that contributed to correctly identifying that center of symmetry. The transformation rule may be applied to the pixel coordinates of the center of symmetry and/or of the at least one predefined even and/or odd point-symmetric region in order to correct the perspective distortion of the camera image. Such an embodiment offers the advantage that a reliable and accurate reconstruction of the correct grid or correct topology of a plurality of point-symmetric regions can be achieved.
In the determining step, the symmetry type of the at least one center of symmetry may also be determined, the symmetry type representing even point symmetry and/or odd point symmetry. Additionally or alternatively, in the comparing step, the symmetry type of the at least one center of symmetry in the camera image may be compared with a predefined symmetry type of the at least one reference center of symmetry in the reference image, in order to check the consistency between the at least one center of symmetry and the at least one reference center of symmetry. Odd point symmetry can be created by point mirroring combined with an inversion of gray or color values. By using and distinguishing two different point symmetries, the information content of point-symmetric regions and patterns can be increased.
In this case, the image data read in the reading-in step may represent a camera image of at least one pattern composed of a plurality of predefined even and/or odd point-symmetrical regions. Here, in the determining step, a geometric arrangement of symmetry centers of the at least one pattern may be determined, a geometric sequence of symmetry types of the symmetry centers may be determined, and additionally or alternatively the sequence may be used to determine the pattern from a plurality of predefined patterns. The arrangement and/or the sequence may represent an identification code of a pattern. Such an embodiment provides the advantage that the reliability of identifying the center of symmetry can be increased and that further information can be obtained by identifying a specific pattern. Reliable identification of the center of symmetry can also be achieved for different distances between the camera and the pattern.
In this case, in the determining step, the implicit additional information of the at least one pattern or the readout rules for reading out explicit additional information in the camera image are determined using the arrangement of the symmetry center of the at least one pattern and additionally or alternatively using the sequence of symmetry types of the symmetry center. The arrangement and additionally or alternatively the sequence may represent the additional information in encoded form. The additional information may be related to controlling the robot. Such an embodiment provides the advantage that additional information may be conveyed by the topology of the at least one pattern.
In the comparing step, the reference image may also be selected from a plurality of stored reference images, or generated using a stored generation rule, depending on the determined arrangement, the determined sequence and/or the determined pattern. In this way, the correct reference image can be reliably identified. Alternatively, when there is a link between the identified pattern and its generation rule, the memory requirement for reference images can be minimized, since only the generation rule needs to be stored.
Furthermore, the determining step and/or the comparing step may be performed jointly for all centers of symmetry, independently of their symmetry type, or separately for centers of symmetry of the same symmetry type. Joint execution allows low memory and time requirements for accurately and reliably identifying the centers of symmetry, while separate execution in particular minimizes confusion with randomly occurring symmetries in the image.
A method for controlling a robot is also proposed, wherein the method has the steps of:
evaluating the navigation data provided according to an embodiment of the above method in order to generate a control signal dependent on the navigation data; and
outputting the control signal to an interface to the robot in order to control the robot.
The method may be implemented, for example, in software or hardware or in a hybrid form of software and hardware, for example, in a control device or device. The method for controlling can be advantageously performed in combination with the embodiments of the method for providing described above.
Furthermore, a method for producing at least one predefined even and/or odd point-symmetrical region for use in an embodiment of the above method is proposed, wherein the method has the following steps:
generating design data representing a graphical representation of the at least one predefined even and/or odd point symmetric region; and
generating the at least one predefined even and/or odd point-symmetric region on, at or in a display medium using the design data, in order to manufacture the at least one predefined even and/or odd point-symmetric region.
The method may be implemented, for example, in software or hardware or in a hybrid form of software and hardware, for example, in a control device or device. By performing the manufacturing method, at least one predefined even and/or odd point symmetric region may be manufactured, which may be used within the scope of the embodiments of the method described above.
According to one embodiment, in the generating step, design data may be generated that represent a graphical representation of the at least one predefined even and/or odd point-symmetric region as a circle, ellipse, square, rectangle, pentagon, hexagon, polygon or torus. The at least one predefined even and/or odd point-symmetric region may have a regular or quasi-random content pattern. Additionally or alternatively, a first half of the at least one predefined even and/or odd point-symmetric region can be arbitrarily predefined, and the second half constructed by point mirroring, optionally with an inversion of the gray values and/or color values. Additionally or alternatively, in the generating step, the at least one predefined even and/or odd point-symmetric region may be produced by an additive manufacturing process, separation (cutting), coating, forming, primary shaping, or optical display. Additionally or alternatively, the display medium may comprise glass, stone, ceramic, plastic, rubber, metal, concrete, gypsum, paper, cardboard, food or an optical display device. In this way, the at least one predefined even and/or odd point-symmetric region can be manufactured in a precisely suitable manner depending on the specific application and the boundary conditions prevailing there.
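A minimal sketch of this construction, assuming NumPy and illustrative parameters: the upper half of a disc is filled with quasi-random gray values and the lower half is produced by point mirroring, with an optional gray-value inversion yielding odd instead of even point symmetry.

```python
import numpy as np

def make_region(diameter: int = 64, odd: bool = False, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)                  # reproducible random source
    tile = rng.integers(0, 256, (diameter, diameter), dtype=np.uint8)
    mirrored = tile[::-1, ::-1]                        # 180-degree point mirroring
    if odd:
        mirrored = 255 - mirrored                      # gray-value inversion -> odd
    half = diameter // 2
    tile[half:, :] = mirrored[half:, :]                # replace lower half by mirror
    yy, xx = np.mgrid[:diameter, :diameter]            # circular region mask
    c = (diameter - 1) / 2.0
    inside = (yy - c) ** 2 + (xx - c) ** 2 <= (diameter / 2.0) ** 2
    return np.where(inside, tile, 128).astype(np.uint8)
```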
In the generating step, design data may also be generated that represent a graphical representation of at least one pattern composed of a plurality of predefined even and/or odd point-symmetric regions. At least a subset of the even and/or odd point-symmetric regions may be aligned on a regular or irregular grid, may directly adjoin one another and/or be separated from at least one adjacent even and/or odd point-symmetric region by a gap, may be identical to or different from one another in their size and/or their content pattern, and/or may be arranged in a common plane or in different planes. Additionally or alternatively, in the generating step, design data representing a graphical representation of at least one pattern with hierarchical symmetry may be generated. In this way, different patterns with specific information content and/or patterns with hierarchical symmetry for different distances from the pattern can be produced; a sketch of such a composition follows.
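Building on the make_region() helper from the previous sketch (an assumption, not the patent's own generator), a grid pattern whose sequence of even/odd symmetry types encodes an identification code could be composed as follows; the grid size and cell spacing are illustrative.

```python
import numpy as np

def make_pattern(code_bits, cols: int = 7, cell: int = 64, seed: int = 42) -> np.ndarray:
    rows = (len(code_bits) + cols - 1) // cols
    canvas = np.full((rows * cell, cols * cell), 128, dtype=np.uint8)
    for i, bit in enumerate(code_bits):
        r, c = divmod(i, cols)
        # symmetry type (even/odd) of each cell carries one code bit
        canvas[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell] = \
            make_region(diameter=cell, odd=bool(bit), seed=seed + i)
    return canvas

# e.g. a 7 x 7 grid as in the first partial illustration of fig. 6:
# pattern = make_pattern([i % 2 for i in range(49)])
```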
The symmetry hidden in such a pattern is difficult to perceive, in particular for humans, even when the presence of the corresponding markings is known. This also makes it possible to conceal such markings. Concealment may be significant or desirable for various reasons, for example for aesthetic reasons, because technical markings should not or are not desired to be seen, because attention should not be drawn to markings that are irrelevant to humans, or because the markings are to be kept secret. Aesthetic reasons play an important role especially in the design field: in the interior of a vehicle, on the vehicle skin, on aesthetically designed objects, or in interior architecture and building architecture, conspicuous technical markings are hardly accepted, if at all. If, however, the technical marking is hidden, for example in a textile pattern, in a plastic or ceramic relief, in a hologram or on a printed surface, as is possible according to an embodiment, it can be attractive and useful at the same time, for example by providing one or more reference points for the camera, from which the relative camera pose can be determined. Depending on the application, the concealment aspect may also be irrelevant or of little relevance; the robustness of the technique still argues for the use of patterns designed in this way. In particular, patterns with random or pseudo-random character offer many possibilities for finding well-localized pairs of symmetry points. Depending on the embodiment, this can be exploited in particular to improve the signal-to-noise ratio of the response measured at the center of symmetry, and thus the robustness in the sense of error-free detection and accurate localization of the center of symmetry. A pattern may in particular comprise one or more point-symmetric regions with odd or even point symmetry. These regions may be designed, for example, as circles, hexagons, squares, ellipses, polygons or other shapes. The point-symmetric regions may be of the same type or of different shapes and sizes, and may adjoin one another without gaps or be spaced apart.
The approach presented here furthermore provides a device configured to carry out, actuate or implement the steps of a variant of the method presented here in corresponding units. The object underlying the invention can also be achieved quickly and efficiently by this embodiment variant of the invention in the form of a device.
To this end, the device may have at least one computing unit for processing signals or data, at least one memory unit for storing signals or data, at least one interface to a sensor or an actuator, at least one communication interface for reading in sensor signals from the sensor or for reading in or outputting data or control signals to the actuator and/or for reading in or outputting data embedded in a communication protocol. The computing unit may be, for example, a signal processor, a microcontroller, etc., wherein the memory unit may be a flash memory, an EEPROM or a magnetic memory unit. The communication interface may be configured to read in or output data wirelessly and/or wiredly, wherein the communication interface, which may read in or output wired data, may read in the data electrically or optically, for example, from a corresponding data transmission line or may output the data electrically or optically into a corresponding data transmission line.
In the present case, a device is understood to mean an electrical device that processes a sensor signal and outputs a control signal and/or a data signal as a function of the sensor signal. The device may have an interface that may be constructed as hardware and/or software. In the case of a hardware configuration, the interface may be, for example, part of a so-called system ASIC, which contains the various functions of the device. However, the interface may also be a separate integrated circuit or at least partly consist of discrete components. In the case of a software design, the interface may be a software module which is present on the microcontroller together with other software modules, for example.
A computer program product or a computer program having a program code which can be stored on a machine-readable carrier or a storage medium, such as a semiconductor memory, a hard disk memory or an optical memory, for performing, implementing and/or manipulating the steps of a method according to one of the embodiments described above is also advantageous, in particular when the program product or the program is run on a computer or a device. The method may be implemented here as a hardware accelerator on a SoC or ASIC.
Drawings
Embodiments of the solutions presented herein are illustrated in the accompanying drawings and explained in more detail in the following description.
FIG. 1 shows a schematic view of an embodiment of a device for providing, an embodiment of a device for controlling and a camera;
FIG. 2 shows a schematic diagram of an embodiment of an apparatus for manufacturing;
FIG. 3 shows a flow chart of an embodiment of a method for providing;
FIG. 4 shows a flow chart of an embodiment of a method for controlling;
FIG. 5 shows a flow chart of an embodiment of a method for manufacturing;
FIG. 6 shows a schematic diagram of a display medium having a pattern of predefined point symmetric regions, according to an embodiment;
FIG. 7 shows a schematic diagram of a display medium having a pattern of predefined point symmetric regions, according to an embodiment;
FIG. 8 shows a schematic view of a display medium having a pattern from FIG. 7, with a graphic highlighting of the pattern or predefined point-symmetric region;
FIG. 9 shows a schematic diagram of a predefined point symmetric region according to an embodiment;
FIG. 10 shows a schematic diagram of a pattern of predefined point symmetric regions, according to an embodiment;
FIG. 11 illustrates a schematic diagram of the use of a lookup table according to an embodiment;
FIG. 12 shows a schematic diagram of a voting matrix according to an embodiment;
FIG. 13 shows a schematic diagram of exemplary patterns arranged in the form of a cube, according to an embodiment, in connection with the correct identification of a grid;
FIG. 14 shows a schematic view of the pattern shown in the first partial illustration of FIG. 6 in an oblique view;
FIG. 15 shows a pattern from the first partial illustration of FIG. 14, wherein predefined point symmetric regions are highlighted;
FIG. 16 shows a schematic diagram of the pattern of FIG. 15 after viewing angle correction, in accordance with an embodiment;
FIG. 17 shows a schematic diagram of an embodiment with a hierarchically symmetrical pattern;
FIG. 18 shows a schematic diagram of an embodiment with a hierarchically symmetrical pattern;
FIG. 19 shows a schematic diagram of an embodiment with a hierarchically symmetrical pattern;
FIG. 20 shows a schematic diagram of a pattern according to an embodiment;
FIG. 21 shows a schematic diagram of an application of the control device of FIG. 1 and/or the method for controlling of FIG. 4;
FIG. 22 shows a schematic diagram of various display media having predefined point symmetric regions;
FIG. 23 shows a camera image of a conveyor belt as a display medium, the conveyor belt having an embodiment of a pattern of predefined point symmetric regions and objects placed on the conveyor belt; and
FIG. 24 shows the camera image of FIG. 23 after processing using the method for providing of FIG. 3.
Detailed Description
In the following description of advantageous embodiments of the present invention, the same or similar reference numerals are used for elements shown in different drawings and having similar effects, wherein repeated descriptions of these elements are omitted.
Fig. 1 shows a schematic diagram of an embodiment of a device 120 for providing, an embodiment of a device 140 for controlling and, merely by way of example, a camera 102. In the illustration of fig. 1, the providing device 120 and the control device 140 are shown separately from, or arranged externally to, the camera 102. The providing device 120 and the control device 140 are connected to the camera 102 in a manner capable of transmitting data. According to another embodiment, the providing device 120 and/or the control device 140 may also be part of the camera 102 and/or combined with each other.
The camera 102 is configured to record a camera image of the environment of the camera 102. In the environment of the camera 102, merely by way of example, one predefined even and/or odd point-symmetric region 110 having a center of symmetry 112 is arranged. The camera 102 is further configured to provide or generate image data 105 representing the camera image, the camera image also showing the predefined even and/or odd point-symmetric region 110.
The providing device 120 is configured to provide navigation data 135 for controlling a robot. The camera 102, the providing device 120 and/or the control device 140 may be implemented as part of the robot or separately from it. For this purpose, the providing device 120 comprises a reading-in means 124, a determining means 126, an executing means 130 and, optionally, a deriving means 132. The reading-in means 124 is configured to read in the image data 105 via an input interface 122 of the providing device 120 to the camera 102, and to forward the image data 105 representing the camera image to the determining means 126.
The determining means 126 of the providing device 120 is configured to determine the center of symmetry 112 of the at least one point-symmetric region 110 using the image data 105 and the determination rule 128. The determination rule 128 will be discussed in more detail below; it should be pointed out here that it is similar or corresponds to the procedure disclosed in the applicant's post-published DE 10 2020 202 160 A1. The determining means 126 is further configured to forward the determined at least one center of symmetry 112 to the executing means 130.
The execution means 130 is configured to compare the position of the at least one center of symmetry 112 in the camera image with a predefined position of the at least one reference center of symmetry in the reference image 115 with respect to the reference coordinate system to determine a positional deviation 131 between the center of symmetry 112 and the reference center of symmetry. The execution means 130 is further configured to read in or receive the reference image 115 or the reference data 115 from the storage means 150. The storage 150 may be implemented as part of the providing device 120 or separate from the providing device 120. Furthermore, according to one embodiment, the execution means 130 is configured to forward the positional deviation 131 to the determination means 132.
According to one embodiment, the deriving means 132 is configured to subsequently use the positional deviation 131 to derive displacement information 133 of at least a subset of pixels of the camera image relative to corresponding pixels of the reference image 115.
The providing device 120 is configured to provide navigation data 135 using the positional deviation 131 and/or the displacement information 133. More precisely, the providing device 120 is configured to provide the navigation data 135 to the control device 140 via the output interface 138 of the providing device 120.
The control device 140 is configured to control the robot. To this end, the control device 140 comprises an evaluation means 144 and an output means 146. The control device 140 is configured to receive or read in navigation data 135 from the providing device 120 via an input interface 142 of the control device 140. The evaluation means 144 are configured to evaluate the navigation data 135 provided by the providing device 120 to generate a control signal 145 dependent on the navigation data 135. The evaluation means 144 are further configured to forward the control signal 145 to the output means 146. The output device 146 is configured to output a control signal 145 to an output interface 148 to the robot to control the robot.
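As a hedged illustration of the evaluation step, the sketch below turns the positional deviation contained in the navigation data into a simple proportional correction command; the gain, the dictionary layout and the actuator convention are assumptions, not prescribed by the patent.

```python
import numpy as np

def control_signal(navigation_data: dict, gain: float = 0.1) -> np.ndarray:
    deviation = np.asarray(navigation_data["positional_deviation"], dtype=float)
    # P-controller: command a motion opposing the measured deviation
    return -gain * deviation

# the returned vector would then be written to the actuator interface of the robot
```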
In particular, the determination rule 128 is structured such that a signature is generated for a plurality of pixels of at least one section of the camera image in order to obtain a plurality of signatures. Each signature is generated using a descriptor with a plurality of different filters, each filter being of at least one symmetry type, and each signature having one sign per filter of the descriptor. The determination rule 128 may also be structured such that at least one reflector is applied to the signs of one of the signatures in order to determine, for that signature, at least one mirror signature for at least one symmetry type of the filters; each reflector comprises a rule, specific to the symmetry type and to the filter of the descriptor, for modifying the signs. The determination rule is further structured such that it is checked whether, for a pixel with a signature, at least one further pixel is present in a search area in the surroundings of that pixel, the search area depending on the at least one reflector applied and the further pixel having a signature corresponding to the at least one mirror signature, in order to determine, when such a further pixel is present, the pixel coordinates of at least one symmetric signature pair from the pixel and the further pixel. In addition, the determination rule is structured such that the pixel coordinates of the at least one symmetric signature pair are evaluated in order to identify the at least one center of symmetry.
According to one embodiment, the determining means 126 is configured to determine, for each determined center of symmetry 112, a transformation rule for transforming the pixel coordinates of the center of symmetry 112 and/or of the point-symmetric region 110, using the pixel coordinates of each symmetric signature pair that contributed to correctly identifying that center of symmetry 112. The transformation rule is applied to the pixel coordinates of the center of symmetry 112 and/or of the point-symmetric region 110 in order to correct the perspective distortion of the camera image. It is furthermore advantageous to determine the transformation rule on the basis of a plurality of, in particular adjacent, point-symmetric regions 110, since the result is then more robust, more accurate and less affected by noise, in particular if these regions lie in a common plane. Applying the transformation is particularly advantageous when considering the arrangement of a plurality of centers of symmetry 112.
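One way to realize such a transformation rule, sketched here with OpenCV's projective fit as an implementation choice (the patent does not prescribe a particular solver): the detected centers of symmetry are matched against their known reference positions, and RANSAC tolerates occasional misidentified centers.

```python
import numpy as np
import cv2

def estimate_view_correction(centers_cam: np.ndarray,
                             centers_ref: np.ndarray) -> np.ndarray:
    # centers_*: (N, 2) arrays of corresponding pixel coordinates, N >= 4;
    # a least-squares affine fit (cv2.estimateAffine2D) would work likewise
    H, inliers = cv2.findHomography(centers_cam.astype(np.float32),
                                    centers_ref.astype(np.float32),
                                    cv2.RANSAC, 3.0)
    return H   # maps camera-image coordinates into reference coordinates
```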
According to one embodiment, the determining means 126 is further configured to determine a symmetry type of the at least one symmetry center 112. The symmetry type represents even point symmetry and additionally or alternatively represents odd point symmetry. Additionally or alternatively, the execution means 130 is in this case configured to compare the symmetry type of the at least one symmetry center 112 in the camera image with a predefined symmetry type of the at least one reference symmetry center in the reference image 115 to check for consistency between the at least one symmetry center 112 and the at least one reference symmetry center.
In particular, the image data 105 in this case represent a camera image of at least one pattern consisting of a plurality of predefined point-symmetrical regions 110. Here, the determining means 126 are configured to determine a geometrical arrangement of the symmetry center 112 of the at least one pattern, to determine a geometrical sequence of symmetry types of the symmetry center 112, and/or to use said sequence to determine a correct pattern from a plurality of predefined patterns, which is represented by the image data 105. The arrangement and/or the sequence may represent an identification code of the pattern. According to an embodiment, the determining means 126 are in this case configured to determine the implicit additional information of the at least one pattern or the readout rules for reading out explicit additional information in the camera image using the arrangement of the symmetry center 112 of the at least one pattern and/or the sequence of symmetry types of the symmetry center 112. The arrangement and/or the sequence represent the additional information in encoded form. The additional information is related to controlling the robot. Additionally or alternatively, the execution means 130 is in this case configured to select the reference image 115 from a plurality of stored reference images according to the determined arrangement, the determined sequence and/or the determined pattern or to generate the reference image 115 using stored generation rules.
Fig. 2 shows a schematic diagram of an embodiment of an apparatus 200 for manufacturing. The apparatus 200 for manufacturing is configured to manufacture at least one predefined even and/or odd point-symmetric region 110 for use by the providing device of fig. 1 or a similar device and/or the control device of fig. 1 or a similar device. To this end, the apparatus 200 for manufacturing comprises a generating means 202 and a generating means 206. The generating means 202 is configured to generate design data 204, which represent a graphical representation of the at least one predefined even and/or odd point-symmetric region 110. The generating means 206 is configured to generate the at least one predefined even and/or odd point-symmetric region 110 on, at or in a display medium using the design data 204, in order to manufacture the at least one predefined even and/or odd point-symmetric region 110.
According to one embodiment, the generating means 202 is configured to generate the design data 204 as a graphical representation of the at least one predefined even and/or odd point-symmetric region 110 in the form of a circle, ellipse, square, rectangle, pentagon, hexagon, polygon or torus, wherein the at least one predefined even and/or odd point-symmetric region 110 has a regular or quasi-random content pattern, and/or wherein a first half of the at least one predefined even and/or odd point-symmetric region 110 is arbitrarily predefined and the second half is constructed by point mirroring and/or an inversion of gray values and/or color values. Additionally or alternatively, the generating means 206 is configured to produce the at least one predefined even and/or odd point-symmetric region 110 by an additive manufacturing process, separation (cutting), coating, forming, primary shaping, or optical display. Additionally or alternatively, the display medium comprises glass, stone, ceramic, plastic, rubber, metal, concrete, gypsum, paper, cardboard, food or an optical display device.
According to one embodiment, the generating means 202 is configured to generate design data 204 representing a graphical representation of at least one pattern of a plurality of predefined even and/or odd point symmetric regions 110, wherein at least a subset of the point symmetric regions 110 are aligned on a regular or irregular grid, directly adjacent to each other and/or separated from at least one adjacent point symmetric region 110 by a gap portion, are identical to each other or different from each other in their size and/or their content pattern, and/or are arranged in a common plane or in different planes. Additionally or alternatively, the generating means 202 is configured to generate design data 204 representing a graphical representation of at least one pattern having hierarchical symmetry.
Fig. 3 shows a flow chart of an embodiment of a method 300 for providing navigation data to control a robot. The method 300 for providing may in this case be performed using the providing device in fig. 1 or a similar device. The method 300 for providing comprises a reading step 324, a determining step 326, an executing step 330 and optionally a solving step 332.
In a reading step 324, image data provided by means of the camera are read in from an interface to the camera, the image data representing a camera image of at least one predefined even and/or odd point-symmetric region in the environment of the camera. The image data and the determination rule are then used in a determining step 326 to determine at least one center of symmetry of the at least one point-symmetric region. Subsequently, in an executing step 330, the position of the at least one center of symmetry in the camera image is compared with a predefined position of at least one reference center of symmetry in a reference image with respect to a reference coordinate system, in order to determine a positional deviation between the center of symmetry and the reference center of symmetry. According to an embodiment, the positional deviation is then used in a solving step 332 to determine displacement information of at least a subset of pixels of the camera image relative to corresponding pixels of the reference image. The positional deviation and/or the determined displacement information is used to provide the navigation data.
According to one embodiment, the image data read in the reading step 324 represents a camera image of at least one pattern of a plurality of predefined point symmetric regions. Here, in a determining step 326, a geometric arrangement of symmetry centers of at least one pattern is determined, a geometric sequence of symmetry types of symmetry centers is determined, and/or the pattern is determined from a plurality of predefined patterns using the sequence. The arrangement and/or the sequence represents an identification code of the pattern. Optionally, the determining step 326 and/or the performing step 330 are performed together for all symmetry centers independently of the symmetry type of the symmetry center or individually for symmetry centers of the same symmetry type according to the symmetry type of the symmetry center.
Fig. 4 shows a flow chart of an embodiment of a method 400 for controlling a robot. The method 400 for controlling may be performed using the control device of fig. 1 or a similar device. Further, the method 400 for controlling may be performed in conjunction with the method for providing of fig. 3 or the like. The method 400 for controlling includes an evaluation step 444 and an output step 446.
In an evaluation step 444, the navigation data provided according to the method for providing of fig. 3 or a similar method is evaluated to generate a control signal dependent on the navigation data. Subsequently, in an output step 446, control signals are output to the interface to the robot to control the robot.
Fig. 5 shows a flow chart of an embodiment of a method 500 for manufacturing. The method 500 for manufacturing may be performed to manufacture at least one predefined point symmetric region for use with the method for providing of fig. 3 or the like and/or for use with the method for controlling of fig. 4 or the like. The method 500 for manufacturing may also be performed in conjunction with or using the apparatus for manufacturing of fig. 2 or similar apparatus. The method 500 for manufacturing includes a generating step 502 and a generating step 506.
In a generating step 502, design data representing a graphical representation of at least one predefined point symmetric region is generated. Subsequently, in a generating step 506, at least one predefined point-symmetric region is generated on, at or in the display medium using the design data to produce at least one predefined point-symmetric region.
Fig. 6 shows a schematic diagram of a display medium 600 with a pattern 610 of predefined point-symmetric regions 110A and 110B, according to an embodiment. Each of the predefined point-symmetric regions 110A and 110B corresponds to or is similar to the predefined point-symmetric region in fig. 1. The first partial illustration A shows a pattern 610 consisting of, by way of example only, 49 predefined point-symmetric regions 110A and 110B, and the second partial illustration B shows a pattern 610 consisting of, by way of example only, eight predefined point-symmetric regions 110A and 110B. The first predefined point-symmetric regions 110A have odd point symmetry as their symmetry type, and the second predefined point-symmetric regions 110B have even point symmetry. In each case, a noise-like image pattern containing the corresponding pattern 610 is printed on the display medium 600.
The use of symmetry in the machine-vision field according to an embodiment can be illustrated with reference to fig. 6: the symmetry can be designed to be imperceptible or barely perceptible to humans, while at the same time being robust, locally accurate and detectable with minimal computational effort. The point symmetries are more or less hidden in the pattern 610, and an observer hardly recognizes them. Only through the graphical highlighting of the predefined point-symmetric regions 110A and 110B in fig. 6 can a human observer identify them in the noise-like image pattern on the display medium 600. The first partial illustration A contains 49 exemplary circular symmetric regions 110A and 110B, of which, by way of example, 25 first regions 110A have odd point symmetry and 24 second regions 110B have even point symmetry. In the second partial illustration B, the symmetric regions 110A and 110B are chosen larger than in the first partial illustration A, with, by way of example, five having odd point symmetry and three having even point symmetry; they are therefore particularly suitable for larger camera distances or lower image resolutions. The circular symmetric regions 110A and 110B are located on a display medium 600 designed as a plate; in the case of odd or negative point symmetry, the point mirroring maps light to dark and vice versa, whereas in the case of even or positive point symmetry no such inversion occurs. If multiple patterns 610 are needed, they can be designed to be distinguishable. This can be achieved by the arrangement of the centers of symmetry of the regions 110A and 110B, as shown in fig. 6, where the first partial illustration A and the second partial illustration B are easily distinguishable, or by the sequence of negative or odd and positive or even point symmetries of the regions 110A and 110B within the respective pattern 610.
Fig. 7 shows a schematic diagram of a display medium 600 with a pattern 610 of predefined point-symmetric regions according to an embodiment. The pattern 610 corresponds to or is similar to one of the patterns from fig. 6, but is shown in the illustration of fig. 7 without graphical highlighting. Ten display media 600 similar to those in fig. 6 are shown by way of example only in fig. 7.
Fig. 8 shows a schematic diagram of a display medium 600 having a pattern 610 from fig. 7, with the pattern or the predefined point-symmetric regions 110A and 110B graphically highlighted. By way of example only, a pattern 610 having predefined point-symmetric regions 110A and 110B is arranged, and graphically highlighted, on each of the ten display media 600.
Accordingly, figs. 7 and 8 show, by way of example only, ten patterns 610 optimized for distinguishability. Each pattern 610 has an individual arrangement of odd and even point-symmetric regions 110A and 110B; the pattern 610 is thus encoded by this arrangement. The encodings are chosen, mutually coordinated and/or optimized by training such that the ten patterns 610 remain clearly identifiable and distinguishable even when rotated, mirrored or only partially captured by the camera. In the patterns 610 of figs. 7 and 8, the point-symmetric regions 110A and 110B in the four corners of each display medium 600 are deliberately designed to be slightly more pronounced. This has no bearing on the function itself, but offers practical advantages when manually assembling the display media 600 with the patterns 610. Within the scope of the manufacturing method already described, the display media 600 with the patterns 610 can be arranged arbitrarily, for example three-dimensionally, or planarly in a row or as a surface. The centers of point symmetry of the patterns 610 can be found correctly and precisely within the scope of the providing method already described and/or by means of the providing device already described. The patterns 610 can, for example, be printed on solid plates of any size, which may optionally be placed in a partially orthogonal arrangement relative to one another. Even if the imaging of the pattern 610 by the camera is blurred, the centers of symmetry can still be detected well enough to achieve the described functionality; the detection of centers of point symmetry is thus robust to blurred imaging. This extends the range of application to situations in which one works with a shallow depth of field, for example in low-light scenes, or in which the focus or autofocus setting of the camera is incorrect or perfectly sharp imaging cannot be achieved, for example in liquid, turbid or moving media, in the edge region of the lens, or during relative motion between the pattern 610 and the camera (motion blur). Even though point symmetries also occur naturally, and in particular in artificially designed environments, any false detections based on them differ in their spatial distribution from the detections based on the correct pattern 610, so that the two groups can easily be separated or distinguished from each other.
To demonstrate that the providing method described above is also applicable to non-planar and even elastic surfaces in motion, the patterns 610 of figs. 7 and 8 can, for example, be printed on paper and assembled into a flexible box. The providing method works without difficulty even on non-flat or elastic surfaces, for example made of paper, which makes it possible to determine the movement of these surfaces. In contrast to many materials, paper does not permit shear; however, point symmetry is also invariant under shear, so shearing would not cause problems either.
In particular, the position of a center of symmetry in the camera image can be determined precisely. In various applications, however, it may also be of interest to extend such accurate measurement to the entire surface of the pattern 610, i.e. to determine, for each point or pixel of the pattern 610, where that point or pixel is located in the camera image. This then allows, for example, the smallest deviations between the actually observed pattern 610 and the ideal pattern known from its construction (ground truth) to be determined. It is of interest, for example, to print the pattern 610 onto a non-smooth or non-rigid surface, thereby creating variable folds or indentations in the pattern 610 whose exact shape is to be captured. Patterns with random character are particularly well suited for finding corresponding points from a first image to a second image. The first and second images may be recorded at different times from different perspectives with the same camera, or with two cameras.
Of particular interest here is the case in which the first image is a real image from a camera and the second image is an artificially generated (stored) image of a given pattern, also referred to as a reference image, which is placed into the second image, e.g. scaled, rotated, affinely mapped or projected, for example on the basis of the found centers of symmetry, so that it comes as close as possible to the real (first) image. For the reference image, processing steps required for the first image from the camera, such as image preprocessing steps, are skipped or omitted as appropriate. Known methods, such as optical flow or disparity estimation, can then be applied, for example, to find a correspondence in the reference image for each pixel in the camera image, or vice versa. This yields a two-step process: in a first step, the found centers of symmetry and, if applicable, the contained code are used to register or coarsely align the real image with the known pattern. This then serves as an initialization for accurately determining, in a second step, the minimal deviations in the sense of local displacements between the registered real image and the pattern, for example using an optical flow method, if desired for each point or pixel of the image or pattern 610. The smaller the search area, the less computational effort is required for the second step; thanks to the good initialization from the first step, the computational effort here is typically small. Since both steps require little computational effort, a high pixel throughput, defined as the product of the frame rate [images/second] and the image size [pixels/image], is achieved on commonly used computing platforms. If no correspondence is found locally, this can generally be explained by an object blocking the line of sight to the pattern 610, from which the shape or contour of the occluding object can be deduced.
The reference image should be provided for the two-step process described above. This can be solved by keeping the associated reference image in a memory for all patterns 610 in question. The memory effort involved can be reduced by storing only the parameters required for recalculating or generating the reference image when needed. For example, the pattern 610 may be generated according to simple rules by means of a quasi-random number generator. The term "quasi" here means that the random number generator actually works according to deterministic rules, so that its result is reproducible, which is advantageous here. A rule specifies here, for example, what diameter the symmetric regions 110A and 110B have, how the mirroring is to be performed, and how the pattern 610 is composed of a plurality of patterns with different degrees of detail in a weighted manner, for example such that the pattern is detected well at short, medium and long distances. It is then sufficient to store only the initialization data (seed) of the quasi-random number generator and, if necessary, the selection of rules for constructing the pattern 610. By means of this formation rule, the reference pattern can be generated again identically whenever required (and deleted again afterwards).
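A minimal sketch of this storage scheme, assuming Python and numpy as the implementation environment: only the seed (plus a rule selection, omitted here) is stored, and the reference pattern is regenerated identically on demand.

    import numpy as np

    def regenerate_reference(seed, size=256):
        # The quasi-random generator is deterministic: the same seed always
        # reproduces exactly the same pattern, so the image itself need not
        # be kept in memory.
        rng = np.random.default_rng(seed)
        return rng.random((size, size))

    ref_a = regenerate_reference(seed=1234)
    ref_b = regenerate_reference(seed=1234)
    assert np.array_equal(ref_a, ref_b)  # rebuild when needed, delete afterwards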
In summary, the two-step process can be represented, for example, as follows. In a first step, the centers of symmetry are found together with their signs. Here, the sign distinguishes between odd and even symmetry. By comparing the sign sequences, it can be determined which of a plurality of patterns is involved. The sign sequence of a pattern 610 may also be referred to as a code. The code can be described compactly and requires at most 64 bits for a pattern 610 having, for example, 8 x 8 centers of symmetry. For the comparison, all existing or considered codes should be stored. From this set, the code that least contradicts the observation is sought. This result is generally unambiguous. Even if the camera can only capture a portion of the pattern 610, e.g., due to occlusion, such a search is still possible, because in this example with 8 x 8 centers of symmetry the code allows a very large number of up to 2^64 possibilities, while the number of patterns 610 actually produced will be much smaller, giving a high degree of redundancy. For each stored code, the information required to generate the reference image, such as parameters and rule selection, should also be stored. The reference image is generated for the second step, for example, on demand, i.e., when needed, and if appropriate only temporarily.
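The code comparison of the first step can be sketched as follows (illustrative assumptions: codes are held as 64-bit integers, and a mask marks which centers of symmetry were actually observed, so that occluded parts of the pattern simply drop out of the comparison):

    def best_matching_code(observed_bits, observed_mask, stored_codes):
        # Find the stored code that least contradicts the observation,
        # counting disagreements only at observed bit positions.
        best_id, best_errors = None, 65
        for code_id, code in stored_codes.items():
            errors = bin((code ^ observed_bits) & observed_mask).count("1")
            if errors < best_errors:
                best_id, best_errors = code_id, errors
        return best_id, best_errors  # a clear minimum identifies the pattern

    stored = {"pattern_A": 0x0123456789ABCDEF, "pattern_B": 0xFEDCBA9876543210}
    # Here only the upper 32 centers of symmetry were observed:
    match, errors = best_matching_code(0x0123456789ABCDEF,
                                       0xFFFFFFFF00000000, stored)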
Based on the positions of the centers of symmetry found in the first step, given in camera image coordinates, and the known positions in the reference image corresponding to them, a transformation rule can be calculated that maps these coordinates onto one another as well as possible, for example a projective or affine mapping optimized in the sense of a least squares method. By means of such a transformation and suitable filtering of the image data, the two images can be transformed (warped) into a common coordinate system, for example into the coordinate system of the camera image, into that of the reference image, or into any third coordinate system. A more accurate comparison of the two images thus aligned with one another is then carried out, for example using an optical flow method: for each pixel of the first image (preferably taking its environment into account), the best corresponding pixel, with environment, of the second image is sought. The relative displacement of the corresponding positions can be expressed as displacement information, in particular as absolute coordinates or displacement vectors. Such displacement vectors can be determined with sub-pixel accuracy, so that the correspondences typically do not lie on the pixel grid but between pixel grid positions. This information allows a highly accurate analysis over the entire surface of the pattern 610 captured in the camera image, for example, in the case of an elastic pattern, to analyze the deformation or distortion of the pattern 610 or of its carrier or display medium 600, or, in the case of a rigid pattern, to analyze imaging aberrations in the optical path.
If the searched correspondence is not found in the expected area, a local occlusion of the pattern 610 may be inferred. The reason for the occlusion may be, for example, an object located on the pattern 610, or a second pattern that partially occludes the first pattern. Valuable information, such as a mask or outline of the object, can also be obtained from the occlusion analysis.
Fig. 9 shows a schematic diagram of predefined point-symmetric regions 110A and 110B according to an embodiment. Each of the predefined point-symmetric regions 110A and 110B corresponds or is similar to a predefined point-symmetric region from one of the figures described above. The first partial illustration A shows a second, even point-symmetric region 110B including its center of symmetry 112, and the second partial illustration B shows a first, odd point-symmetric region 110A including its center of symmetry 112. The predefined point-symmetric regions 110A and 110B here are regions formed from gray levels.
The use of point symmetry has the following advantages over other forms of symmetry: point symmetry is preserved when the pattern and/or the at least one predefined point-symmetric region is rotated about the viewing axis, and it is likewise preserved when the pattern and/or the at least one predefined point-symmetric region is tilted, i.e., viewed at an oblique angle. Rotation and tilting therefore pose no problems for the detection of odd and even point symmetries, since these are preserved in the process. The above-described providing method is thus also applicable at oblique viewing angles onto the pattern or the at least one predefined point-symmetric region. In the case of even point symmetry, for example, the gray or color values are preserved under point mirroring.
In the first partial illustration A of fig. 9, for each gray value g, the same partner gray value g_PG = g is found point-symmetrically to the center of symmetry 112. The second partial illustration B of fig. 9 shows odd point symmetry, in which each gray value is inverted: for example, white becomes black and vice versa, and light gray becomes dark gray and vice versa. With gray values g normalized to the interval 0 ≤ g ≤ 1, the point-mirrored gray value g_PU is formed from the original gray value g in the half of the region 110A shown in the upper part of the illustration in fig. 9 in the simplest possible manner as g_PU = 1 - g. Nonlinearities, such as a gamma correction, can also be integrated into the inversion, for example to compensate for other nonlinearities in image display and image recording. The formation of a suitable odd or even point-symmetric pattern is correspondingly simple: for example, the half of the respective region 110A or 110B shown in the upper part of the illustration in fig. 9 is set arbitrarily or generated randomly. The half shown in the lower part of the illustration in fig. 9 is then derived from it by point mirroring, with the gray values inverted for odd point symmetry and not inverted for even point symmetry.
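A minimal sketch of this construction and of its verification, assuming gray values in [0, 1] stored as a numpy array:

    import numpy as np

    def make_region(upper_half, odd=False):
        lower = upper_half[::-1, ::-1]     # point mirroring about the center
        return np.vstack([upper_half, 1.0 - lower if odd else lower])

    def symmetry_error(region, odd=False):
        mirrored = region[::-1, ::-1]
        expected = 1.0 - mirrored if odd else mirrored  # g_PU = 1 - g or g_PG = g
        return float(np.max(np.abs(region - expected)))  # 0.0 for a perfect pattern

    rng = np.random.default_rng(1)
    half = rng.random((32, 64))            # freely chosen upper half
    assert symmetry_error(make_region(half), odd=False) == 0.0
    assert symmetry_error(make_region(half, odd=True), odd=True) == 0.0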
Such a consideration or generation can also be extended to color patterns and/or predefined point-symmetric regions. In the case of odd point symmetry, the point-mirrored RGB values can be formed by inverting the respective original RGB values, which again is the simplest possibility: r_PU = 1 - r (red), g_PU = 1 - g (where g here stands for green), b_PU = 1 - b (blue). Thus, for example, a dark purple is imaged as light green and a blue as orange. A color pattern can carry more information than a monochrome pattern, which can be advantageous. A prerequisite for exploiting this advantage is that the color information of the original image (i.e., the color image of the camera or other imaging sensor) is also incorporated into the descriptors.
The specific implementation of the pattern 610 and/or the at least one predefined point-symmetric region 110 or 110A and/or 110B shall also be discussed below with reference to the above-mentioned figures.
With respect to the arrangement of the pattern 610 and/or the at least one predefined point-symmetric region 110 or 110A and/or 110B, for example as shown in fig. 6, the point-symmetric regions 110 or 110A and/or 110B may, for example, be circular and in turn be arranged largely in a regular grid within the pattern 610. The surfaces between the circular regions 110 or 110A and/or 110B may then remain unused. There are alternatives to this: for example, the regions 110 or 110A and/or 110B may be square and joined to one another without gaps so that the entire surface is used, or the symmetric regions 110 or 110A and/or 110B may be regular hexagonal surfaces, which likewise adjoin one another without gaps so that the entire surface is used.
In this connection, fig. 10 shows a schematic diagram of a pattern 610 of predefined point-symmetric regions 110A and 110B according to an embodiment. The predefined point-symmetric regions 110A and 110B here correspond or are similar to the predefined point-symmetric regions in figs. 1, 6 and/or 8. The regions 110A and 110B in fig. 10 are all circular and arranged on a hexagonal grid, where the distance between grid points or centers of symmetry may correspond to the circle diameter. In this way, the unused area 1010 between the regions 110A and 110B in the pattern 610 is minimized.
Other arrangements and shapes, such as rectangles, polygons, etc., are also possible and may be combined with one another in shape and/or size. For example, pentagons and hexagons can alternate as on a conventional football. The shapes may also be arranged differently, e.g., rotated, with asymmetric regions between them if desired. The center of symmetry may even lie outside the point-symmetric region itself, for example when a ring shape is used. Nor is it necessary for all point-symmetric regions to lie in a common plane; instead, they may lie on different surfaces arranged in space, and these surfaces may also be non-planar.
The pattern 610 and/or the at least one predefined point-symmetric region 110 or 110A and/or 110B can be formed in many ways; only a few examples are described below. One option is random or quasi-random patterns, such as noise patterns. By introducing low spatial frequency components, these patterns are formed such that they are still perceived as noise patterns of sufficiently high contrast even at medium and large distances from the camera. So-called white noise, i.e., uncorrelated gray values, is not suitable for this. Also possible are aesthetic, possibly regular, patterns, such as floral patterns, tendril patterns (leaves, branches, flowers), ornamental patterns, mosaics, mathematical patterns, traditional patterns, onion patterns, patterns composed of logo symbols (hearts, etc.), imitations of random patterns from nature (for example fields, woodland, lawns, pebble beaches, sand, bulk materials (gravel, salt, rice, seeds), marble, marbles, concrete, brick, slate, asphalt surfaces, starry sky, water surfaces, felt, hammer finish paint, rusted iron sheet, sheepskin, scattered particles, etc.), or photographs of scenes with arbitrary content. In order to produce from such a source pattern a point-symmetric region and/or pattern suitable for the purposes mentioned here, one half of the respective surface is predefined arbitrarily, and the second half is constructed by point mirroring, with the gray or color values inverted if odd symmetry is desired. See fig. 9 for a simple example of this.
There are countless possibilities with regard to the material, surface and fabrication of the pattern 610 and/or the at least one predefined point-symmetric region 110 or 110A and/or 110B. The following list is not exhaustive: black-and-white, grayscale or multicolor printing on a wide variety of materials; printing on or behind glass or transparent film; printing on or behind frosted glass or translucent film; embossing in stone, glass, plastic or rubber; embossing in fired materials such as crockery, terracotta or ceramics; embossing in metal, concrete or plaster; embossing on plastic or paper/cardboard; etching in glass, metal or ceramic surfaces; milling in wood, cardboard, metal, stone, etc.; burning into surfaces of wood or paper; photographic exposure of paper or other materials; short-lived, perishable or water-soluble patterns for temporary applications in plant material, ash, sand, wood, paper, on fruit or food skins, etc.; display as a hologram; display on monitors or displays (which may also change over time if desired); display on LCD films or other display films (which may also change over time if desired); and so on.
With regard to the relief-based fabrication possibilities, as in the case of milling, embossing, stamping, etc., it should be noted that the area should be perceived by the camera as having odd and/or even point symmetry. It may be necessary to take into account already at the design stage, for example, the later illumination (e.g., oblique incidence of light onto the relief) as well as nonlinearities and other disturbances in the optical imaging. What matters is not whether the 3D shape or relief itself has an even and/or odd type of point symmetry, but that the image recorded by the camera exhibits this symmetry. The direction of light incidence or illumination and the reflection of the light at the surface are also relevant here and should be taken into account in the design. With respect to image recording and illumination, the recording technique should be designed to be suitable for capturing the pattern 610 and/or the at least one predefined point-symmetric region 110 or 110A and/or 110B. In particular, in the case of fast relative movement between the pattern 610 and/or the region(s) 110 or 110A and/or 110B and the camera, it is advisable to use suitable illumination (e.g., a flash, a strobe, or a bright LED lamp) so that the exposure time, and thus the motion blur in the image, can be kept small. For various applications, it makes sense to apply the pattern 610 and/or the region(s) 110 or 110A and/or 110B to a transparent or translucent surface. This allows the pattern 610 and/or the region(s) 110 or 110A and/or 110B to be illuminated from one side and viewed from the other. In this way, disturbing reflections of the light source on the display medium can be effectively avoided. For the arrangement of the pattern 610 and/or the region(s) 110 or 110A and/or 110B, the light source and the camera, there is in principle the freedom to choose the front or the back of the carrier or display medium in each case. The risk of contamination of the pattern 610 and/or the region(s) 110 or 110A and/or 110B or of the camera, or of wear of the pattern, can also play a role in this choice: applying the pattern 610 and/or the region(s) 110 or 110A and/or 110B and the camera on the back side can be attractive, for example, because they are better protected there from dust or water, or because the pattern 610 and/or the region(s) 110 or 110A and/or 110B are protected there from mechanical wear.
A method that is also used in the embodiments is disclosed in the subsequently published DE 10 2020 202 160, with which symmetric regions or patterns can be found in an image reliably and with little computational effort. In this method, the original image, i.e., the color or gray-value image of the camera or other imaging sensor, is converted into an image of descriptors, each of which is formed based on the local environment in the original image. A descriptor here is an alternative representation of the local image content, prepared in a form that is simpler to process. Simpler to process means in particular: it contains information about the environment of the point, not just about the point itself; it is largely invariant with respect to brightness or illumination and their changes; and it has low sensitivity to sensor noise. The descriptor image may have the same resolution as the original image, so that there is approximately one descriptor for each pixel of the original image. Alternatively or additionally, other resolutions are also possible.
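The document does not fix a concrete descriptor here; as an illustrative stand-in with the properties listed above (environment-based, largely illumination-invariant, cheap to compute), a census-like descriptor can serve. It is not the descriptor of DE 10 2020 202 160, merely a sketch with comparable characteristics:

    import numpy as np

    def census_descriptor(img):
        # One 8-bit descriptor per pixel: each bit records whether the
        # corresponding neighbor is brighter than the center pixel. Invariant
        # to monotonic brightness changes; image borders wrap around here for
        # simplicity.
        desc = np.zeros(img.shape, dtype=np.uint8)
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]
        for bit, (dy, dx) in enumerate(offsets):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            desc |= (shifted > img).astype(np.uint8) << bit
        return desc  # same resolution as the original image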
From a respective descriptor, or from a plurality of adjacent descriptors, a signature is formed, which is represented in the computing unit as a binary word and which characterizes the local environment of the pixel of the original image as distinctively as possible. The signature may also be identical to the descriptor or to a part of it. The signature is used as an address for access to a look-up table. If the signature consists of N bits, a table of size 2^N (i.e., 2 to the power of N) can thus be addressed. Advantageously, the word length N of the signature should not be chosen too large, since the storage requirement of the table grows exponentially with N: for example, 8 ≤ N ≤ 32. The signature or the descriptor is structured such that signature symmetries can be determined using simple operations, for example a bitwise XOR (exclusive OR) of a portion of the bits. Example: S_P = s ^ R_P, where s is a signature of length N bits and R_P is the point-symmetry (P) reflector (R) matched to it; the ^ symbol denotes the bitwise exclusive-OR operation. The signature S_P thus represents the point-symmetric counterpart of the signature s. This relationship also holds in the opposite direction.
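A small numerical sketch of this mechanism, with the bit width and reflector values chosen as assumptions such that the example figures used for fig. 11 below are reproduced (the actual reflectors follow from the descriptor construction):

    N = 12                     # signature word length in bits (8 <= N <= 32)
    R_PG = 2048                # assumed reflector for even point symmetry
    R_PU = 2047                # assumed reflector for odd point symmetry

    s = 2412                   # signature of the current pixel
    S_PG = s ^ R_PG            # even point-mirrored counterpart -> 364
    S_PU = s ^ R_PU            # odd point-mirrored counterpart  -> 3731
    assert S_PG == 364 and S_PU == 3731
    assert S_PG ^ R_PG == s    # XOR is an involution: the relationship also
                               # holds in the opposite direction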
If the construction of the descriptor or signature is fixed, the reflector is thus also automatically fixed (and constant). By applying it to an arbitrary signature, that signature can be converted into its symmetric counterpart. An algorithm exists that, within an optionally limited search window, can find one or more pixels whose signatures are symmetric to the signature at the current pixel. The center of symmetry then lies in the middle of the connecting line between the positions of the two pixels. There, or as close to it as possible, a voting weight is output and collected in a voting matrix (voting map). In the voting matrix, the output voting weights accumulate at the positions of the centers of symmetry sought. These centers of symmetry can be found, for example, by traversing the voting matrix in search of accumulation points. This applies to point symmetry, horizontal axis symmetry, vertical axis symmetry and, if desired, other symmetries, such as mirror symmetry about further axes and rotational symmetries. A more accurate localization with sub-pixel accuracy can be achieved if, when evaluating the voting matrix to determine the accumulation points and to localize the respective center of symmetry precisely, the local environment is also taken into account.
Fig. 15 of DE 10 2020 202 160 shows an algorithm that can find point-symmetric correspondences for the currently observed signature. However, only even point symmetry is considered in that document.
According to an embodiment, the algorithm is extended to odd point symmetry. It is particularly advantageous here that odd and even symmetry can be determined simultaneously in only one common pass. This saves time, because the signature image only needs to be traversed once instead of twice, and it reduces latency. If only one traversal (rather than two) is required, processing in streaming mode can deliver the results of the symmetry search with much lower latency. Here, processing starts as soon as the first pixel data arrive from the camera, and the processing steps are executed in close succession: a signature is calculated as soon as the necessary image data from the local environment of the current pixel are available, and the symmetry search is carried out immediately for the signature just formed. As soon as portions of the voting matrix are complete (which is the case when they are not and will no longer be part of the search area), they can be evaluated immediately and any symmetry found (a strong center of symmetry) can be output immediately. This procedure results in a very low latency, which typically corresponds to only a few image lines, depending on the height of the search area. Low latency is very important when a fast reaction is required, e.g., in a control loop in which an actuator influences the relative pose between the symmetric object and the camera. Memory can also be saved: the voting matrix (voting map) can be used jointly for the two symmetry forms, even and odd point symmetry, with the two symmetry forms or types entering the vote with different signs, e.g., the voting weight being subtracted for odd point symmetry and added for even point symmetry. This is explained in more detail below. Saving memory in turn saves energy. The low-latency implementation described above also means that only a small amount of intermediate data needs to be stored compared to the whole image. Getting by with very little memory is particularly important for cost-critical embedded systems and likewise reduces the energy requirement.
Fig. 11 shows a schematic diagram of the use of a look-up table 1150 according to an embodiment. The look-up table 1150 can be used by the determination device of the apparatus for providing of fig. 1 or the like. In other words, fig. 11 is a snapshot of the algorithmic process during the search for point-symmetric correspondences, in connection with the apparatus for providing of fig. 1 or the like and/or the method for providing of fig. 3 or the like. The illustration in fig. 11 is also similar to fig. 15 of the subsequently published DE 102020202160, with fig. 11 here additionally comprising the extension to both even and odd point symmetries.
The look-up table 1150 may also be referred to here as an entry table. A pixel grid 1100 is shown in which a signature s with the exemplary value 2412 has been generated for the currently observed or processed pixel. In other words, fig. 11 shows a snapshot during the formation of the links between pixels or pixel coordinates having the same signature. For the sake of clarity, only two of the many possible chains are shown, namely the one for the signature S_PG = 364 and the one for the signature S_PU = 3731. In the pixel grid 1100, a reference to the position of the previous pixel having the same signature value is stored for each pixel. This produces, for each signature value, a chain of positions with identical signatures; the signature value itself therefore does not need to be stored. For each signature value, the associated entry position into the pixel grid 1100 is stored in the look-up table 1150 or entry table, which has N table fields, N here corresponding to the number of possible signature values. The stored value may also be "invalid". The contents of the look-up table 1150 or entry table and of the link image change dynamically.
The pixels in the pixel grid 1100 are processed, for example, row by row, starting at the top left of fig. 11 as indicated by the arrow; processing has currently advanced to the pixel with the signature s = 2412. Links between pixel positions having the same signature s are stored only for the first image region 1101. For the second image region 1102 in the lower part of the image, the links and signatures are not yet known at the point in time shown, and for the third image region 1103 in the upper part of the image, links are no longer needed, for example due to the limitation of the search area, so that the link memory for the pixels in the third image region 1103 can be released again.
For the signature s just formed, the even point-mirrored signature S_PG = 364 is formed by applying the reflector R_PG. The index PG stands for point symmetry, even; the index PU, which stands for point symmetry, odd, is also used below. This value is used as an address into the look-up table 1150, where the entry for the chain of pixel positions having the same signature value S_PG = 364 is found. At the point in time shown, the look-up table 1150 comprises two elements: the entry pixel position for the respective signature and a reference to that position, shown by the curved arrow. Other possible contents of the look-up table 1150 are not shown for the sake of clarity. The chain for the signature value S_PG = 364 comprises three pixel positions, shown here only by way of example. Two of them lie in the search area 1104; the search area 1104 may also have shapes different from the one shown, for example rectangular or circular. When traversing unidirectionally along the chain, starting from the bottom, two point-symmetric correspondence candidates located within the search area 1104 are thus found. The third even point-symmetric correspondence, the first element of the chain, is not of interest here, because it lies outside the search area 1104 and is thus too far from the current pixel position. If the number of symmetry center candidates 1112 is not too large, a voting weight for the position of the corresponding center of symmetry can be output for each symmetry center candidate 1112. Each symmetry center candidate 1112 lies in the middle of the connecting line between the position of the signature s and the position of the corresponding even point-mirrored signature S_PG. If there is more than one symmetry center candidate 1112, the voting weights can be reduced accordingly; for example, the reciprocal of the number of symmetry center candidates can be used as the respective voting weight. Ambiguous symmetry center candidates are thus weighted less than unambiguous ones.
An odd point-mirrored signature will now be considered and used as well. In the snapshot shown in fig. 11, the odd point-mirrored signature S_PU = 3731 is formed from the signature s just formed by applying a further reflector R_PU. Analogously to the procedure described above for the even point-mirrored signature, the same steps are carried out for the odd point-mirrored signature. The entry for the corresponding chain is found via the same look-up table 1150. Here, the look-up table 1150 points to the chain shown for odd point symmetry, for the signature 3731. The first two pixel positions along the chain again lead to the formation of symmetry center candidates 1112, because they lie in the search area 1104 and because the number of symmetry center candidates 1112 is not too large. The last pixel position along the chain lies in the third image region 1103. This region is no longer needed at all, because it can no longer enter the search area 1104, which slides downward row by row.
If the next reference within the chain points into the third image region 1103, the traversal along the chain can be terminated. The traversal is of course also terminated when the end of the chain is reached. In both cases, it makes sense to limit the number of symmetry center candidates 1112, i.e., if there are too many competing symmetry center candidates 1112, all of them are discarded. Furthermore, it is expedient to terminate the traversal along the chain early if, after a predefined maximum number of steps along the chain, neither its end nor the third image region 1103 has been reached. In this case, too, all symmetry center candidates 1112 found up to that point should be discarded.
The memory for the links in the third image region 1103 may already have been released again, so that link memory needs to be reserved only for the size of the first image region 1101. The link memory requirement is therefore generally low and here depends essentially only on one dimension of the search area 1104 (here its height) and one dimension of the signature image (here its width).
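The overall flow for fig. 11 can be condensed into the following sketch (simplified: dictionaries instead of a preallocated link image, no limit on chain length or candidate count, and a square search area; even votes enter with a positive sign and odd votes with a negative sign, as explained further below):

    import numpy as np

    def find_center_candidates(sig_img, R_PG, R_PU, radius):
        h, w = sig_img.shape
        entry = {}   # signature value -> most recent pixel with that value
        link = {}    # pixel -> previous pixel with the same signature
        votes = np.zeros((2 * h, 2 * w))   # voting matrix, half-pixel resolution
        for y in range(h):
            for x in range(w):
                s = int(sig_img[y, x])
                for refl, sign in ((R_PG, 1.0), (R_PU, -1.0)):
                    p = entry.get(s ^ refl)        # enter the mirrored chain
                    while p is not None:
                        py, px = p
                        if abs(py - y) <= radius and abs(px - x) <= radius:
                            votes[py + y, px + x] += sign  # doubled midpoint
                        p = link.get(p)            # follow the chain
                link[(y, x)] = entry.get(s)        # prepend pixel to its chain
                entry[s] = (y, x)
        return votes   # odd centers: strong minima, even centers: strong maxima

Storing each vote at the doubled midpoint coordinate (py + y, px + x) keeps half-pixel midpoints on an integer grid and thus sidesteps the four positional cases discussed next.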
A symmetry center candidate 1112 does not always fall exactly on a pixel position; there are three further possibilities, i.e., four possibilities in total:
1. the point or center of symmetry candidate 1112 falls on the pixel location.
2. The point or center of symmetry candidate 1112 falls midway between two horizontally directly adjacent pixel locations.
3. The point or center of symmetry candidate 1112 falls midway between two vertically directly adjacent pixel locations.
4. The point or center of symmetry candidate 1112 falls in the middle between four immediately adjacent pixel locations.
In ambiguous cases 2 to 4, it is advantageous to evenly distribute the voting weights to be output to the participating pixel positions. The output voting weights are input into a voting matrix and added or accumulated therein.
Here, not only positive but also negative voting weights are used. In particular, even symmetry is given a different sign (here positive) than odd symmetry (here negative). This leads to a clear result: in image regions without symmetry, which in practice usually make up the majority, the positive and negative voting weight outputs are approximately balanced and thus approximately cancel each other in the voting matrix, so that on average values near zero are found there. In contrast, strong extrema are found in the voting matrix in odd or even symmetric regions: in this embodiment, negative minima in the case of odd point symmetry and positive maxima in the case of even point symmetry.
According to the embodiment shown here, the same resources are used for odd and even point symmetry, namely the look-up table 1150 or entry table, the link image and the voting matrix, which saves in particular memory, and both symmetry forms or types are handled in one common traversal, which saves time and intermediate memory.
Fig. 12 shows a schematic diagram 1200 of a voting matrix according to an embodiment. The diagram 1200 is a 3D plot of the voting matrix for a camera image processed by means of the apparatus for providing of fig. 1 or the like, in which camera image the pattern from the second partial illustration of fig. 6 was recorded by the camera. In the voting matrix or diagram 1200, three maxima 1210B and five minima 1210A can be identified by way of example, corresponding to the three even and five odd point-symmetric regions of the pattern from the second partial illustration of fig. 6. Outside these extrema, the values in the voting matrix are close to zero. The extrema can therefore be determined very simply, and the positions of the centers of symmetry in the camera image can be determined unambiguously and precisely.
Fig. 12 shows that these extrema are very pronounced and can therefore be detected simply and unambiguously by the apparatus for providing of fig. 1 or the like and/or the method for providing of fig. 3 or the like. The information about the symmetry type (i.e., odd or even) is contained in the sign. If the local environment of the respective extremum is also taken into account when evaluating the voting matrix, the position of the center of symmetry can be determined with sub-pixel accuracy; corresponding methods are known to the person skilled in the art. If the pattern is suitably constructed, the odd and even point symmetries do not compete with each other: an image region then has, if any, either an odd or an even point-symmetric form. Even if odd and even point-symmetric regions lie close to each other in the camera image, it is ensured that their centers of symmetry remain spatially separated or distinguishable. The joint processing of the negative and positive symmetries then yields advantages in terms of resources and speed.
According to an embodiment, separate processing of odd and even point symmetry can also be provided. It then makes sense to split the votes before they are entered into the voting matrix: instead of one common signed voting matrix, two unsigned voting matrices are provided, with the voting weights of the negative symmetry entered into the first and those of the positive symmetry into the second. A potentially interesting advantage arises in this case: it becomes possible to construct patterns that have both odd and even point symmetry, whose centers of symmetry partially coincide, and to take them into account in the detection algorithm. Such a mixed symmetry form is very unusual, and precisely this unusualness makes it highly unlikely to be confused with randomly occurring structures in the image. The two voting matrices are then searched for maxima that are present at the same position in both matrices. A further possible advantage of processing odd and even point symmetry separately is that it can be parallelized more easily and thus, if necessary, executed faster. This saves latency, because using two voting matrices avoids access conflicts when entering the voting weights.
Fig. 13 shows a schematic diagram of patterns 610, arranged here by way of example on a cube, in connection with the identification of the correct grid 1311. The patterns 610 shown in fig. 13 are, for example, patterns from fig. 7 or fig. 8, three of which are arranged here in the form of a cube. In the patterns 610, the detected or identified centers of symmetry 112A and 112B of the respective predefined point-symmetric regions are shown; optionally, the sign and value of the associated extremum in the voting matrix can also be known. Here, a first center of symmetry 112A is assigned to a predefined point-symmetric region with odd point symmetry, and a second center of symmetry 112B to a predefined point-symmetric region with even point symmetry. For one of the patterns 610, the correct grid 1311 is drawn in, on which the predefined point-symmetric regions, and thus the centers of symmetry 112A and 112B, are aligned. For the other two patterns 610, the correct grid is still to be found; in fig. 13, an incorrect result of the grid search is indicated by a first marker 1313 and a correct result by a second marker 1314.
Finding the correct associated grid is a task fraught with ambiguity. After detecting the odd/even-coded centers of symmetry 112A and 112B, the next step is typically to group them and to determine which pattern 610 the group belongs to, since it is not always known in advance which pattern 610, and how many patterns 610, are contained in the image. Part of this task may be to find the grid 1311 on which the centers of symmetry 112A and 112B lie. Instead of a square grid 1311, other topologies are also conceivable for the arrangement of the centers of symmetry 112A and 112B, such as a concentric circular arrangement; see, for example, the second partial illustration in fig. 6. The square grid 1311 is considered below as representative.
The task of determining the correct grid position for all patterns 610 based solely on the positions of the centers of symmetry 112A and 112B in fig. 13 is in some cases an ambiguous problem. For the pattern 610 in fig. 13 for which the correct grid 1311 is already drawn in, it is not difficult (for a human observer) to indicate the correct grid 1311. For the other two patterns 610, however, which the camera captures from a considerably more oblique perspective, the result can evidently be ambiguous: there are several possible ways of placing a grid through the centers of symmetry 112A and 112B. The solution that initially appears most obvious when viewed locally, namely the one with an approximately vertical axis, is not the correct one here, as can be seen from the first marker 1313; instead, the second marker 1314 lies correctly on the grid. This shows that a naive procedure, e.g., searching for the nearest neighbors of each center of symmetry, can lead to incorrect solutions at oblique viewing angles. Extremely oblique viewing angles are excluded in practice, because the centers of symmetry 112A and 112B can then no longer be found at all.
Fig. 14 shows a schematic view of the pattern 610 shown in the first partial illustration of fig. 6 at an oblique viewing angle. In a first part of the illustration a, a display medium 600 having a pattern 610 of predefined point symmetric regions 110A and 110B is shown in fig. 14. The second part of the diagram B in fig. 14 shows the symmetry centers 112A and 112B of the pattern 610 identified or detected by means of the device for providing of fig. 1 or the like and/or the method for providing of fig. 3 or the like. The centers of symmetry 112A and 112B have been detected and at least their locations are available.
Fig. 15 shows a pattern 610 from the first part illustration of fig. 14, in which a predefined point-symmetrical region 110B is highlighted. Here, the predefined even point symmetric region 110B is only exemplarily graphically highlighted to illustrate the distortion of the pattern 610 or the regions 110A and 110B due to the oblique viewing angle. The circular predefined point symmetric regions 110A and 110B, which are exemplary herein, are distorted into ellipses by oblique perspective.
The reconstruction of the correct mesh or topology of pattern 610 is discussed below with particular reference to fig. 14 and 15 and with general reference to the above-described figures.
Under an oblique viewing angle, each circular region 110A and 110B, from which the votes for the respective center of symmetry 112A and 112B originate, becomes an ellipse. By tracing back the votes that contributed to a respective center of symmetry 112A, 112B (e.g., the even point-symmetric center 112B highlighted in fig. 15), the shape and orientation of the respective ellipse can be deduced. The direction and ratio of the principal axes of the ellipse reveal how the ellipse can be stretched or rectified to turn it back into a circle. Consider the exemplarily highlighted predefined even point-symmetric region 110B of the pattern 610, which contributes to the highlighted center of symmetry 112B: depending on the design, this region 110B is circular or nearly circular, e.g., hexagonal. Under an oblique viewing angle, this circle becomes an ellipse. In the voting for the center of symmetry 112B, the pairs of symmetric points that help form the extremum in the voting matrix lie within this ellipse.
According to one embodiment, it is traced back from where in the camera image the point pairs originate that led to the formation of sufficiently strong extrema. Further processing steps are carried out for this purpose. It is first assumed that the voting has taken place and sufficiently strong centers of symmetry have been found; the starting point is thus the situation shown in the second partial illustration B of fig. 14. The voting process is then run through again in modified form. The already existing voting matrix is not formed anew, however; instead, for each pair of symmetric points contributing to the voting matrix, it is checked whether this contribution supports one of the found centers of symmetry 112A, 112B and thus already contributed in the first pass. If this is the case, the two positions of the point pair are stored or immediately processed further. Advantageously, the index of the center of symmetry 112A, 112B to which the point pair contributed is also stored or used. In this way, all contributions to a successful center of symmetry can be determined afterwards and stored (intermediately) or used further.
The start of the further processing steps need not wait until the end of the first processing step, i.e., the formation of the voting matrix and the finding of the centers of symmetry; it can begin earlier and use intermediate results of the first processing step that are already complete, i.e., centers of symmetry 112A, 112B already found. From the information formed in this way, all image positions that contributed to a given center of symmetry 112A, 112B can then be read out. These positions lie essentially within the ellipse, apart from a few outliers, as illustrated by way of example in fig. 15 for the center of symmetry 112B.
Methods for determining the parameters of the ellipse are known to the person skilled in the art. For example, a principal axis transformation can be performed on the set of all points contributing to a center of symmetry 112A, 112B in order to determine the orientation of the principal axes and the two diameters of the ellipse. This can even be done without intermediate storage of the contributing image positions: instead, these image positions can be processed further immediately as they become known. Alternatively, an elliptical envelope around the point set can be determined which encloses as large a portion of the point set as possible as tightly as possible (possible outliers being excluded).
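A sketch of such a principal axis transformation on the set of contributing image positions, assuming numpy (the absolute scale of the axes depends on how the points are distributed; only the axis ratio and the orientation are needed for the subsequent rectification):

    import numpy as np

    def ellipse_from_contributors(points):
        pts = np.asarray(points, dtype=float)   # (n, 2) contributing positions
        center = pts.mean(axis=0)
        cov = np.cov((pts - center).T)          # 2 x 2 covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
        axes = 2.0 * np.sqrt(eigvals)           # (minor, major), up to scale
        angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])  # major-axis direction
        return center, axes, angle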
Alternatively, instead of storing a set of points in the sense of a list, an index image equivalent to an index matrix can be created. The index image serves the same purpose, i.e., determining the parameters of all ellipses, but stores the information in a different form. Ideally, the index image has the same size as the signature image and is set up to store an index, namely the index assigned to a found center of symmetry 112A, 112B. A special index value, e.g., 0, is reserved to indicate that no entry exists yet. If, while traversing the further processing steps, a symmetric point pair or signature pair is found that contributes to the i-th index, the index i is entered at the two associated positions of the respective signatures. At the end of the traversal, an index image is thus obtained in which each index assigned to a center of symmetry 112A, 112B appears multiple times, these indices forming elliptical regions: apart from a few outliers, each elliptical region then contains only entries with a uniform index, and the index 0 at unused positions. The index image can then easily be evaluated to determine the parameters of the individual ellipses. Furthermore, the index image does not have to be stored in its entirety: as soon as the data in a section of the index image no longer change, that section can already be evaluated and its memory released again. This also results in lower latency, so that intermediate results can be provided earlier.
The two-dimensional arrangement of detected centers of symmetry (see fig. 14) can then be corrected with the known ellipse parameters such that the centers of symmetry subsequently lie on the grid of the pattern 610, which here, only by way of example, is at least approximately square.
Fig. 16 shows a schematic diagram of the pattern 610 of fig. 15 after viewing-angle correction, according to an embodiment. In other words, for illustration purposes, fig. 16 shows the pattern 610 after the pattern of fig. 15 has been stretched by the ratio of the two principal axis lengths in the direction of the minor axis of the found ellipses, for example of the highlighted distorted region 110B. The correct grid 1311 can then be found in a simple manner: compared to fig. 15, the ellipses are rectified such that the original circular shape of the regions 110B is restored. It is then straightforward to determine the grid 1311 on which the centers of symmetry 112A and 112B lie, and to determine the adjacencies between the centers of symmetry 112A and 112B without error. Fig. 16 serves illustration purposes only; in practice, it is not necessary to warp the image. Since the information about the positions of the centers of symmetry 112A and 112B is already available in compressed form, it makes sense to process only these data further and to transform their coordinates, the transformation rule being formed from the determined ellipse parameters such that the ellipses become circles.
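Transforming only the center coordinates (rather than warping the image) can be sketched as follows, reusing the ellipse parameters from the sketch above: rotate into the ellipse frame, stretch the minor axis by the ratio of the principal axis lengths, and rotate back.

    import numpy as np

    def rectify_centers(centers, ellipse_center, axes, angle):
        c, s = np.cos(angle), np.sin(angle)
        R = np.array([[c, s], [-s, c]])            # world -> ellipse frame
        S = np.diag([1.0, axes[1] / axes[0]])      # stretch minor to major
        pts = np.asarray(centers, dtype=float) - ellipse_center
        return (R.T @ S @ R @ pts.T).T + ellipse_center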
When the camera image is recorded with a telephoto lens, a single global transformation per planar section is sufficient to determine the grid 1311. When the camera image is recorded with a wide-angle lens (e.g., a fisheye lens), it may be necessary to work with local transformations, at least in some image regions. The transformation rule described above can thus be applied globally and/or locally. In the global variant, all symmetry center positions are transformed using the same common transformation rule. This is reasonable and sufficient in many cases. The common transformation rule can be formed from a joint consideration of all ellipses. If the centers of symmetry 112A and 112B are spatially located on several surfaces, the ellipses can be divided into groups according to their parameters; ellipses belonging to one surface then have very similar parameters, in particular if the surface is flat. A global transformation rule can then be determined and applied for each group. This procedure is suitable for telephoto focal lengths. Local transformations are appropriate when several circular regions are imaged by the camera as ellipses of different shape or orientation, which applies in particular to wide-angle cameras or lenses with strong distortion.
After applying the transformation, the symmetry center positions belonging to the same surface lie at least approximately on a common grid 1311. The next task is to assign the centers of symmetry 112A and 112B to grid positions. This can be done iteratively, for example in small steps: for a center of symmetry 112A, 112B, up to four nearest neighbors at approximately equal distances are sought (see also the markers in fig. 13). From these neighbors, the traversal continues to more distant neighbors until all captured centers of symmetry 112A and 112B belonging to the pattern 610 have been assigned to the common grid 1311 or excluded from it; a sketch of this assignment follows below. If centers of symmetry are encountered during this search whose distances do not match the grid 1311 currently considered, they are not included, since they may be outliers or centers of symmetry belonging to other surfaces. The iterative search can be repeated for further surfaces, so that in the end each center of symmetry 112A, 112B is assigned to a surface, apart from outliers. For these surfaces, the pattern 610 can then be identified, preferably on the basis of the binary code associated with the centers of symmetry 112A and 112B, which is contained in the signs of the extrema.
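The grid assignment itself can be sketched as follows (illustrative assumptions: the two grid basis vectors vec_u and vec_v and an origin have already been estimated, e.g., from the nearest neighbors of one center; centers that do not land near integer grid coordinates are treated as outliers or as belonging to another surface):

    import numpy as np

    def assign_to_grid(centers, origin, vec_u, vec_v, tol=0.25):
        B = np.column_stack([vec_u, vec_v])    # grid basis as a 2 x 2 matrix
        assigned = {}
        for p in np.asarray(centers, dtype=float):
            ij = np.linalg.solve(B, p - origin)        # fractional grid coords
            if np.max(np.abs(ij - np.round(ij))) < tol:
                assigned[tuple(np.round(ij).astype(int))] = tuple(p)
        return assigned   # grid index (i, j) -> image position of the center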
Fig. 17 shows a schematic diagram of a hierarchically symmetric pattern 1710 according to an embodiment. The pattern 1710 corresponds or is similar to the patterns in the figures described above. More precisely, by way of example only, the pattern 1710 has a two-level hierarchical structure composed of four predefined point-symmetric regions 110A and 110B. According to the embodiment shown here, the pattern 1710 comprises, merely by way of example, two predefined odd point-symmetric regions 110A and two predefined even point-symmetric regions 110B. As a whole, the pattern 1710 has an odd point-symmetric structure. The regions 110B, which are even point-symmetric in themselves, and the regions 110A, which are odd point-symmetric in themselves, lie on the first hierarchical level; the overall arrangement, the odd point-symmetric pattern 610B, lies on the second hierarchical level. The center of symmetry 112 of the second hierarchical level is represented by a quarter circle.
Fig. 18 shows a schematic diagram of a hierarchically symmetric pattern 1810 according to an embodiment. The pattern 1810 in fig. 18 is similar to the pattern of fig. 17. More specifically, fig. 18 shows another example of a two-level hierarchical structure composed of predefined point-symmetric regions 110B. On the first hierarchical level, the predefined point-symmetric regions 110B are each point-symmetric in themselves. On the second hierarchical level, there is odd point symmetry at the level of the pattern 1810, with the center of symmetry 112 at the center of the six-part hexagon shown for illustration. The odd symmetry manifests itself here as an inversion of the predefined point-symmetric regions 110B; for example, dark symbols on a light background are mirrored to light symbols on a dark background.
Fig. 19 shows a schematic diagram of a hierarchically symmetric pattern 610 according to an embodiment. In this case, the pattern 610 is built from the patterns 1710 and 1810 of figs. 17 and 18 and/or inverted and/or point-mirrored versions thereof. By way of example only, the pattern 610 has a three-level hierarchical structure composed of two patterns 1710 of fig. 17 and two patterns 1810 of fig. 18. The patterns 1710 and 1810 are odd and are therefore inverted in point-mirrored fashion about the center of symmetry 112 of the pattern 610, which lies at the center of the six-part hexagon shown for illustration. For example, the pattern 1710 shown at the bottom right of fig. 19 is the inverted version of the pattern 1710 at the top left. The layering principle can be continued arbitrarily, i.e., a fourth level, a fifth level, and so on can be constructed.
Patterns with hierarchical symmetry are discussed further below with reference to figs. 17, 18 and 19. The symmetric patterns 610, 1710, 1810 can be constructed in several stages such that, for example, smaller regions that are symmetric in themselves exist at a first hierarchical level, and their joint consideration yields a symmetry at the next higher hierarchical level. Figs. 17 and 18 each show by way of example how the two-level hierarchical patterns 1710 and 1810 are constructed, and fig. 19 builds a three-level hierarchical pattern 610 from them. The example of fig. 19 thus contains three hierarchical levels. The third hierarchical level extends over the entire surface of the pattern 610 (the area outlined by the dashed line) and includes the center of symmetry 112. On the second hierarchical level, there are four patterns 1710 and 1810 (each framed by a solid line), each with a center of symmetry in its middle (not explicitly shown here). According to the embodiment shown here, there are accordingly 16 predefined point-symmetric regions on the first hierarchical level, each with its own center of symmetry. The symmetry of the third hierarchical level can be seen from a great distance; on approach, the four symmetries of the second hierarchical level also become visible; and at short distances, or if the pattern 610 is captured with sufficient resolution, the symmetries of the first hierarchical level become visible as well. In this way, for example, visual servoing can be implemented over a large range of distances, e.g., guiding a robot in the direction of the pattern 610 or in any other direction. If finer or lower hierarchical levels can already be captured, it is generally not necessary to capture the coarser or higher ones; nor must all symmetries of the respective hierarchical levels be capturable simultaneously, since at very short distances, for example, it is no longer possible to capture the entire pattern 610 in the camera image at all. Obviously, even and odd symmetry can be selected and combined partly freely. Additional information can also be embedded in this choice, in particular one bit for the selection between odd and even symmetry in each case, so that such additional information can be conveyed to the capturing system. "Partly freely" means here that the remainder of the symmetry forms at the respective hierarchical level follows inevitably from the next higher hierarchical level. In other words, in fig. 18, for example, the patterns "X" and "O" can be chosen freely for the top row; the second row then follows inevitably, here by inversion, since odd point symmetry was chosen at the next hierarchical level.
Fig. 20 shows a schematic diagram of a pattern 610 according to an embodiment. In a first partial illustration A, fig. 20 shows a pattern 610, which by way of example is one of the patterns from fig. 8. The first partial illustration A of fig. 20 is an example of implicit additional information, here merely by way of example 8 x 8 = 64 bits, which is derived from the symmetry type of the predefined point-symmetric regions 110A and 110B of the pattern 610, i.e., from the sign of the point symmetry associated with them. In a second partial illustration B, fig. 20 shows a pattern 610 which, merely by way of example, is built from four predefined point-symmetric regions 110A and 110B, here for example one predefined odd point-symmetric region 110A and three predefined even point-symmetric regions 110B on a square grid. Furthermore, a code matrix 2010 for explicit additional information is arranged within the pattern 610 in this case. Merely by way of example, the implicit additional information from the first partial illustration A is here contained explicitly in the code matrix 2010. The predefined region 110A with odd point symmetry marks the starting row of the 8 x 8 matrix, thereby unambiguously establishing the readout order.
The transmission of implicit or explicit additional information is discussed in more detail below with reference to fig. 20.
It may be useful or necessary to convey additional information to a recipient, e.g., a computer, an autonomous robot, etc., by means of the pattern 610. The additional information can be more or less extensive. Illustrative examples include: parking space, charging station, position 52°07'01.9"N 9°53'57.4"E facing southwest, turn left, speed limit 20 km/h, charging station for a lawn mower, and so on. There are various options for the transfer by means of an imaging sensor or a camera. In particular, a distinction can be made between implicitly contained and explicitly contained additional information; reference is made to the two examples in fig. 20, in which 64 bits of additional information are provided once implicitly and once explicitly. Implicit additional information means that it is contained in some way in the inherently symmetric patterns 610 themselves, whereas explicit additional information is typically designed and captured separately from these patterns 610.
One possibility for conveying implicit additional information is illustrated by the first partial illustration A of fig. 20: implicit additional information as a binary code. Since a choice between odd and even point symmetry is made for each of the symmetric regions 110A and 110B when the pattern 610 is constructed, one piece of binary information (corresponding to 1 bit) can be conveyed per region. If, in addition, regions having both odd and even point symmetry are permitted, the binary additional information becomes ternary, i.e., three cases instead of two.
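A minimal sketch of reading out such an implicit binary code, assuming the convention that even symmetry encodes a 0 bit and odd symmetry a 1 bit, and that the readout order of the centers has already been fixed:

    def decode_implicit_bits(symmetry_types):
        # symmetry_types: list of 'even'/'odd' in the agreed readout order
        bits = 0
        for t in symmetry_types:
            bits = (bits << 1) | (1 if t == 'odd' else 0)
        return bits

    code = decode_implicit_bits(['even', 'odd', 'odd', 'even'])  # -> 0b0110

For an 8 x 8 arrangement of centers of symmetry, this yields the 64-bit code referred to above.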
A further possibility for conveying additional information results from using non-uniform distances between the centers of symmetry of the regions 110A and 110B, i.e., implicit additional information based on the arrangement. In contrast to the arrangement shown in fig. 20, where the centers of symmetry lie on a square grid, the centers of symmetry would then be arranged irregularly, with the additional information or a part of it encoded in the arrangement. Example: if each center of symmetry is allowed to be shifted by a fixed distance left/right and up/down, 9 possible positions result, so that each center of symmetry can encode log2(9) ≈ 3.17 bits of additional information. An oblique viewing angle between the imaging sensor and the pattern 610 is not a problem in any of the possibilities mentioned. For example, a portion of the centers of symmetry (e.g., the outermost four in the corners) can be used to define a coordinate system or a regular base grid. The deviations or the binary/ternary code used for encoding are then related to this base grid.
The symmetric regions 110A and 110B used for implicit additional information must not be too small, so that sufficiently prominent extrema can still form in the voting matrix. If a larger amount of additional information, in particular static, location-based additional information, is to be conveyed to the recipient, e.g., a mobile robot, it is advantageous to encode the additional information explicitly.
The second part, diagram B, of fig. 20 shows how, in particular, static, location-based additional information can be conveyed explicitly to the recipient (e.g. a mobile robot): it may, for example, be agreed that at certain coordinates in the coordinate system defined by the centers of symmetry further information is present, for example in binary (black/white), gray-level or color coding. The process then consists of two steps: in a first step, the field in which the additional information is encoded, e.g. the code matrix 2010, is found based on the odd and even symmetries. In a second step, this field, and thus the information contained in it, is read out. An oblique viewing angle between the imaging sensor and the pattern 610 does not cause problems here, since for reading out the explicit additional information it is neither necessary that the basis vectors of the found coordinate system be perpendicular to each other nor that they have the same length. Alternatively, the image may also be rectified so that a Cartesian coordinate system is obtained. Optionally, a display may also be installed in the field together with the pattern 610, which can transmit time-varying information in addition to time-static information.
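A minimal sketch of the second step (reading out the field), assuming the first step has already supplied an origin and two, possibly non-orthogonal, basis vectors from the symmetry centers; the 8 × 8 cell layout and the binary threshold are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: sample an n x n code field in the (possibly skewed) grid
# coordinate system spanned by basis vectors u and v anchored at origin.

def read_code_matrix(image, origin, u, v, n=8):
    """image: 2D numpy array (grayscale); origin, u, v: 2D points/vectors."""
    bits = np.zeros((n, n), dtype=np.uint8)
    for row in range(n):
        for col in range(n):
            # sample at the cell centers, in fractional grid coordinates
            p = origin + (col + 0.5) / n * u + (row + 0.5) / n * v
            x, y = int(round(p[0])), int(round(p[1]))
            bits[row, col] = image[y, x] > 127   # binary black/white decision
    return bits

# toy usage with a synthetic image
img = np.zeros((100, 100), dtype=np.uint8)
img[40:50, 40:50] = 255
code = read_code_matrix(img, origin=np.array([20.0, 20.0]),
                        u=np.array([60.0, 5.0]), v=np.array([5.0, 60.0]))
print(code)
```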
Additional information may also be contained at high resolution in the pattern 610 itself, with implicit error detection. This yields a further possibility for transferring (in particular static, location-based) additional information via the pattern 610 itself: the additional information is contained in the black-and-white, color or gray-scale sequence of the pattern 610 itself. By the above classification, this additional information would be both implicit and explicit. Since the pattern 610, or at least parts of it, has symmetry, the additional information is automatically contained redundantly, typically in duplicate. This applies to both odd and even point symmetry. This fact can be used for error correction or error detection. If the pattern 610 is contaminated, e.g. by bird droppings, the resulting errors in the additional information can be detected with high reliability, since the same errors are very unlikely to occur at the associated symmetric positions.
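The redundancy check can be sketched as follows, assuming the detected region has been cropped so that its center of symmetry lies at the patch center; the tolerance and the synthetic test patch are assumptions.

```python
import numpy as np

# Sketch of redundancy-based error detection: in an even point-symmetric
# region every pixel equals its point-mirrored partner; in an odd region it
# equals the inverted partner. Mismatches indicate local damage (e.g. dirt).

def symmetry_violations(region, odd=False, tol=30):
    """Return a boolean mask of pixels that break the expected symmetry.
    region: 2D uint8 array whose center is the center of symmetry."""
    mirrored = region[::-1, ::-1]                 # 180-degree point mirroring
    expected = 255 - mirrored if odd else mirrored
    return np.abs(region.astype(int) - expected.astype(int)) > tol

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, (9, 9)).astype(np.uint8)
# symmetrize the random patch so it is exactly even point-symmetric
patch = ((patch.astype(int) + patch[::-1, ::-1].astype(int)) // 2).astype(np.uint8)
patch[1, 2] = (int(patch[1, 2]) + 128) % 256      # simulate contamination
print(np.argwhere(symmetry_violations(patch)))    # the damaged pixel + mirror
```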
Fig. 21 shows a schematic diagram of an application situation for the control device of fig. 1 and/or the control method of fig. 4. The application situation here is, merely by way of example, a docking maneuver of a robot embodied as a movable gangway 2160 of a supply vessel 2165 on the platform 2170 of an offshore wind power plant 2175. At least one camera 102 is disposed on the gangway 2160 and/or on the supply vessel 2165. At least one predefined point-symmetric region 110 and/or at least one pattern 610 of predefined point-symmetric regions 110 is arranged in the region of the platform 2170 on the offshore wind power plant 2175. The at least one predefined point-symmetric region 110 corresponds to or is similar to one of the predefined point-symmetric regions from one of the figures described above. The pattern 610 corresponds to or is similar to one of the patterns from one of the figures described above.
In other words, fig. 21 shows a situation in which a supply vessel 2165 with a movable gangway 2160 is to dock on a platform 2170 of an offshore wind power plant 2175, the movable gangway 2160 being movable by lifting, pivoting, pitching and/or telescoping, as indicated by the arrows. To this end, the depicted regions 110 and/or patterns 610 are applied on the platform 2170, while cameras 102 that view these regions 110 or patterns 610 are mounted on the gangway 2160 and on the supply vessel 2165. Using the control device of fig. 1 and/or the control method of fig. 4, the relative position and orientation between the platform 2170 and the gangway 2160 are continuously determined and adjusted, so that docking and undocking can be performed reliably and gently and, at the same time, the position can be maintained.
This yields the possibility of improving visual robot control, or visual servoing, with point-symmetric (hidden) regions 110 or patterns 610. Here, a mechanical actuator system (here the gangway 2160) is controlled based on the detection of a symmetric region 110 or pattern 610 in an image of the camera 102 or of another imaging sensor, wherein actuation of the actuator system results in relative movement between the pattern 610 and the camera 102 or sensor. In the offshore docking maneuver shown and explained with reference to fig. 21, the challenge is that the supply vessel 2165 is moved by waves and wind, while the platform 2170, which may itself be floating, is also moved by waves and wind, but with different dynamics. Nevertheless, a safe transfer of service personnel between the supply vessel 2165 and the platform 2170 should be possible, in as much weather as possible.
The depicted regions 110 and/or patterns 610 are applied on the side of the offshore wind power plant 2175, directly adjacent to and above the docking point or platform 2170. The pattern 610 is passive and therefore requires no electrical connection and no actuation. The light required for nighttime operation comes from the supply vessel 2165. At least one camera 102 and the associated image processing are located on the vessel side. For example, the camera 102 is mounted on the gangway 2160 and thus moves with it. However, the camera 102 may also be mounted further away from the docking point, for example at the rear of the telescoping mechanism, since, for example, the extension path of the telescope is known and can be taken into account when calculating the relative movement. For practical reasons, a plurality of cameras 102 may be used, mounted at different locations on the vessel, so that at least one camera 102 always has favorable viewing conditions (unobstructed view, favorable distance, favorable viewing angle, no glare) with respect to one or more patterns 610. For the same reason, a plurality of patterns 610 may be used, which can be designed to be distinguishable from one another, as explained for example with reference to figs. 7 and 8. With multiple patterns 610, triangulation may be performed, which makes the determination of the relative orientation easier or more accurate.
Since the distance between the camera 102 and the pattern 610 may vary greatly during such docking or undocking processes, a pattern 610 with hierarchical symmetry is used, for example, as explained with reference to figs. 17 to 19. Such patterns include both large-area point symmetries, which are detectable from a great distance and even in fog, rain or snow, and many small-area point symmetries for accurate operation at short distances.
In general, the relative movement between camera and pattern has six degrees of freedom: three rotational and three translational. Movement in these degrees of freedom results in a change of the image content: a relative rotation (as when the camera rotates about itself, i.e. about its center of projection) results in a distance-independent displacement and rotation of the image content, while a relative translation results in a distance-dependent change of the image content, in particular a change in the size and viewing angle of the imaged objects.
It is assumed that the symmetry arrangement in the observed pattern 610 is known, e.g. based on a reference pattern. For example, the patterns 610 shown in figs. 7 or 8 may be used. It is thus known, for example, that there are 8 × 8 point symmetries on a square grid with known spacing. The sequence of odd and even point symmetries of the reference pattern may likewise be known. Alternatively, this sequence can be determined from the observations, and the reference pattern that is present can then be determined, if necessary, by database comparison.
The six degrees of freedom sought may be determined from each observation of the pattern 610, i.e. with each new camera image. For this, the centers of symmetry of the regions 110 of the pattern 610 are first determined. These symmetry centers are then registered with the reference pattern, so that the assignment of each observed symmetry center to the corresponding reference symmetry center is known. It is unproblematic if a substantial portion (e.g. significantly more than half) of the symmetry centers cannot be observed (e.g. due to occlusion), because the symmetry arrangement in the respective pattern 610 has a sufficient degree of redundancy that registration, and even unambiguous identification, is still possible. Once the assignment of a sufficient number of symmetry centers is known, the six degrees of freedom can be determined from it. If the geometry of each rigid part of the arrangement is known, the six degrees of freedom determined between the camera 102 and the pattern 610 can be transformed into other reference frames, for example between the docking point of the gangway 2160 and the docking point of the platform 2170, or between a second sensor and the pattern 610.
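A hedged sketch of this step with OpenCV's standard PnP solver: the assignment of observed to reference centers is taken as already established by the registration described above, and the grid spacing, camera intrinsics and synthesized measurements are assumed values.

```python
import numpy as np
import cv2

# Sketch of the 6-DoF determination with a standard PnP solver, once each
# observed symmetry center has been assigned to its reference counterpart.
# In practice img_pts would come from the symmetry detector.

spacing = 0.10                                    # 10 cm grid pitch (assumed)
obj_pts = np.array([[c * spacing, r * spacing, 0.0]
                    for r in range(8) for c in range(8)], dtype=np.float32)

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                   # assumed camera intrinsics

rvec_true = np.array([[0.10], [-0.20], [0.05]])   # ground truth for the demo
tvec_true = np.array([[0.05], [0.02], [1.50]])
img_pts, _ = cv2.projectPoints(obj_pts, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
print(ok, rvec.ravel(), tvec.ravel())             # recovers all six DoF
```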
Alternatively, a stereo or multi-camera system may be used, so that multiple images of the same pattern 610 can be recorded from different perspectives and evaluated together. Stereo or multi-camera systems provide the advantage that the distance and surface orientation of the pattern 610 can be determined by triangulation, which can make the desired determination of the six degrees of freedom more accurate, simpler and/or faster.
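As an illustration of this triangulation benefit, a minimal two-camera sketch with assumed projection matrices and normalized image coordinates:

```python
import numpy as np
import cv2

# Minimal two-camera sketch: P1 and P2 are assumed 3x4 projection matrices
# (normalized cameras, 20 cm baseline); the matched center of symmetry is
# given in normalized image coordinates for both views.

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

pts1 = np.array([[0.10], [0.05]])                 # center seen by camera 1
pts2 = np.array([[0.00], [0.05]])                 # same center, camera 2

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()
print(X)                                          # ~[0.2, 0.1, 2.0]: 2 m away
```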
According to embodiments, the relative position between the camera 102 or imaging sensor and the pattern 610 and, if desired, the relative orientation can be determined quickly and accurately. This may be used to actuate a mechanical actuator system that changes this relative position or orientation, for example a hydraulic system that controls the gangway 2160 in all desired degrees of freedom. In particular, this can be embedded in a closed control loop.
The embodiment shown in fig. 21 represents only one of many conceivable applications. In general, the camera 102 may be located on a part that is moved by the actuator system while the pattern 610 is located on a non-moving part, or vice versa. Both sides may also move, as is the case with floating wind power plants and supply vessels. In this way, various actions can be performed in a controlled manner, such as purposefully approaching a particular point, aligning to a prescribed orientation and docking gently, the reverse undocking, orienting and moving away safely, moving a robot parallel to a surface, moving along a prescribed trajectory relative to one or more patterns 610 at a prescribed location-dependent speed and orientation, grasping an object carrying a pattern 610 by a robot while the object is moving, and the like.
Another application example is the visual docking of contact points, such as connecting electrical contacts to mating contacts, or pressure connections for gases, liquids, etc. to their counterparts or tank connections, for example for charging or refueling ground vehicles, aircraft, watercraft, robots or drones. Since contactless charging of electric and hybrid vehicles has not yet become established, and may never become established because of its power losses, charging via electrical contacts remains interesting and important. Manual plugging-in of the connector is undesirable because it is cumbersome and time-consuming, hands get dirty or cold, there is a tripping hazard, a risk of cable theft, and a risk of electric shock if the cable is damaged. Another possibility is connection via contacts on the ground, with a system similar to fig. 21, where it is arbitrary on which side the actuator system for making contact is located (on the ground or on the vehicle), and likewise on which side the camera or imaging sensor and the pattern 610 or at least one symmetric region 110 are located. In such applications, a single point-symmetric region 110 may suffice. For example, the contact point to be found can be placed at the center of symmetry of the region 110, since this center of symmetry is the uniquely marked place. In further applications, something that only a robot, but not a human, is meant to find can be hidden at the center of symmetry. Alternatively, the center of symmetry may mark the location where a flying drone is to land, without a human being able to identify this marked point.
Another application example is navigation in buildings and facilities by means of hidden symmetries, using at least one region 110 and/or pattern 610. In this case, hidden symmetry in the form of a pattern 610 can be utilized to achieve visual orientation and navigation, particularly in buildings and facilities above and below ground. This enables machines, in particular mobile robots, as well as people equipped with a camera-based aid such as a smartphone, to self-localize and navigate independently.
GPS-based systems are adequate for many positioning and navigation applications as long as there is visual contact with the satellites. In many places, however, this is not available, for example inside buildings, in parking garages, in tunnels, in mining or underwater. Especially in large, uniformly designed multi-story buildings, orientation and navigation can be very difficult for both people and machines. All currently known SLAM systems (SLAM = Simultaneous Localization and Mapping) can rely only on landmarks that happen to be present in the environment when building and using their maps, and may therefore work unreliably and/or imprecisely there. The use of such systems has therefore so far been limited to a few simple use cases and demonstrations. In practice, technical markers are used in virtually all applications where reliability, high availability and accuracy are required. Various options exist for this, such as beacons of different types (e.g. radio-frequency, electromagnetic or acousto-magnetic, or pulsed light) and various types of passive markers such as AprilTag, ArUco, AR tag, QR code, etc. All of these markers are visually obtrusive. Some are also difficult to install, for example beacons that need to be buried in the ground or that require maintenance.
The pattern 610 according to an embodiment offers imperceptibility, i.e. the pattern 610 is not perceived as a marker. Such patterns are designed to be barely perceptible to humans but particularly easy for machines to perceive using the provision method of fig. 3. Furthermore, the pattern 610 is passive and therefore requires no power supply, networking or maintenance. With the possibilities already described above for realizing the regions 110 and patterns 610 in terms of material and symmetric pattern content, various options result for making the useful patterns 610 attractive, or at least unobtrusive, and ensuring that they blend into their environment as well as possible. These possibilities can be exploited, for example, by interior designers and decorators, and in hospitals, offices, hotels, airports, train stations, etc.
For example, a multi-story building can be retrofitted with patterns 610 in preparation for the use of mobile robots, without this intervention altering or even impairing the appearance of hallways and rooms. For this purpose, existing elements of walls or ceiling panels can be modified or replaced, for example by elements of the same dimensions and colors but carrying, for example, relief, milled, embossed, drilled, punched, printed, lasered or etched patterns 610. When replacing an acoustic ceiling, for example, the hole pattern used in it is simply replaced by a hole pattern with the symmetry properties. This also makes possible new services, business models and expertise in equipping rooms, buildings and facilities with such patterns 610. Even in fields of application where the advantage of concealment may not matter, such as warehouses, tunnels, mining, underwater or offshore platforms, patterns 610 composed of regions 110 with the random features already described are advantageous in terms of robust detectability and accuracy. In this connection, a hierarchical pattern 610 corresponding or similar to the hierarchical patterns described with reference to figs. 17 to 19 can also be used advantageously.
If a large number of patterns 610 is required, for example in a large building such as a hospital, it may be advantageous to make the patterns 610 distinguishable or to equip them with additional information, as explained in particular with reference to fig. 20. If a large area or a long room is covered by only a few patterns 610, each of these patterns should be detectable over a large distance. For this, the solution with hierarchical patterns 610 described with reference to figs. 17 to 19 is suitable, with large-area point symmetries for detection from a great distance and small-area point symmetries for accurately determining the pose between robot and pattern 610 at short distances.
Fig. 22 shows a schematic diagram of various display media 600 with predefined point-symmetric regions 110, according to an embodiment. The centers of symmetry 112 of the predefined point-symmetric regions 110 are also drawn. On, at or in each display medium 600, at least one predefined point-symmetric region 110 is created. The display media 600 are landmark elements in this case. In fig. 22, useful or decorative garden objects are shown as examples of landmark elements. Point symmetries imperceptible to humans are hidden in the surfaces of the landmark elements; as landmarks, they support, for example, the positioning and navigation of robotic lawnmowers. The display media 600 are thus objects with hidden symmetric or predefined point-symmetric regions 110 for garden and house, to support mobile robots. Specific examples of display media 600 are plant pots, flowerpots, containers, stands, borders, stones, floor or wall panels, garden lights, rain barrels or decorative elements.
Symmetrically designed objects in gardens and houses can thus be used advantageously as display media 600 to support the self-localization and navigation of mobile robots, these objects being unobtrusive in the sense that their function is not apparent to humans. Optionally, surveillance cameras that may already be present, and are typically non-mobile, can also be included, enabling support in both directions between robot and surveillance camera. In this case, the self-localization and navigation of an autonomous mower as the robot can in particular be improved or supported according to embodiments.
Traditionally, boundary wires and, if necessary, guide wires are used, which are laid manually in the turf and can easily be damaged, for example when working the ground. Furthermore, the mowing robot may orient itself along the boundary wire so strongly or so frequently that it leaves unsightly tracks in the lawn. Alternative conventional solutions can also be unsatisfactory for a number of reasons. GPS positioning combined with lawn recognition may be inaccurate; in particular, the distinction between lawn and flower beds and the observance of boundaries with neighbors may be unreliable. Conventional solutions using battery-powered radio beacons, which have to be inserted at multiple locations in the garden, require maintenance and do not tolerate being displaced or rotated. Such a solution can hardly be expected to operate fault-free for years.
According to one embodiment, elements such as the landmark elements or display media 600 of fig. 22 are placed in the garden, these elements having two functions: a first function as a landmark that is difficult to perceive, and a second function, for example as a plant pot, flowerpot, container, stand, border, stone, floor or wall panel, garden light, rain barrel or decorative element. There are many possibilities for their design in terms of materials, surfaces, textures, structures and colors. Wall designs on the house or garden shed, and designs of railings, fences or privacy screens with point-symmetric regions 110 are also conceivable. The landmark elements or display media 600 should be anchored to the ground, building or fence as firmly or immovably as possible, so that accidental displacement is unlikely. Preferably, the robot, equipped with a camera or other imaging sensor system, should have at least one landmark element in its field of view at all times. Preferably, the robot has two or more elements in its field of view, so that triangulation is also possible, allowing the distance and relative position of the robot with respect to the landmark elements or display media 600 to be determined more accurately (see the sketch below). Instead of using many landmark elements or display media 600, the robot may also have multiple sensors, for example two cameras, or one camera and one laser scanner, in order to ensure accurate distance measurement to the respective landmark elements of the scene, and possibly other points, by means of stereo or time-of-flight measurement.
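A simple 2D illustration of such a triangulation, assuming the robot has already converted its measurements into world-frame directions from two landmark elements with known positions toward itself; all coordinates and names are invented for the example.

```python
import numpy as np

# Simple 2D illustration: intersect the rays from two landmarks with known
# positions p1, p2 toward the robot. b1, b2 are world-frame angles (rad) of
# the directions landmark -> robot, i.e. the measured bearings reversed.

def locate_robot(p1, b1, p2, b2):
    """Intersect the rays p1 + s*d1 and p2 + t*d2 to get the robot position."""
    d1 = np.array([np.cos(b1), np.sin(b1)])
    d2 = np.array([np.cos(b2), np.sin(b2)])
    A = np.column_stack([d1, -d2])               # solve p1 + s*d1 = p2 + t*d2
    s, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + s * d1

pos = locate_robot([0.0, 0.0], np.deg2rad(45.0), [10.0, 0.0], np.deg2rad(135.0))
print(pos)                                       # -> [5. 5.]
```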
The regions 110 or patterns of the landmark elements or display media 600 may be encoded (see also fig. 8) or provided with additional information (see also fig. 20) to make them easy for the robot to distinguish. If only a few landmark elements or display media 600 are used to cover a large area or a long extent, each of them should be detectable over a large range of distances. For this purpose, the hierarchical patterns proposed with reference to figs. 17 to 19 can be used, with large-area point symmetries for detection from a great distance and small-area point symmetries for precisely determining the pose between the robot and the landmark element or display medium 600 at short distances.
A practical application according to an embodiment can be summarized as follows: a user purchases three compatible landmark elements or display media 600 for their mowing robot, for example a plant pot, a decorative stone and a self-adhesive decorative film for a rain barrel, and places them at different locations in their garden. In addition to the charging station, the three landmark elements or display media 600 can thus be used. The user can determine their best placement, for example, on a website, optionally uploading a picture of the garden, drawing the garden or the surface to be mowed, and/or marking the garden on a map. The larger and more complex the garden, the more landmark elements or display media 600 are needed.
The user then performs a manually controlled teaching run with the robot. The user may control the robot by smartphone, tablet, remote control, game controller, gesture control or spoken commands. The teaching run serves to show the robot the boundaries of the surface to be mowed and how to get from one garden section to the next. The robot recognizes obstacles automatically, so they need not be given special consideration during the teaching run. The charging station, which is required in any case, is also provided with a point-symmetric region 110 and always serves, for example, as a landmark element. During the teaching run, the robot captures the landmark elements or display media 600 visible from each location, and later orients itself by them.
The robot can then drive through the garden autonomously and mow along its tracks. In doing so, it orients itself by the known landmark elements or display media 600 with their regions 110 or patterns. The robot optionally uses odometry, inertial navigation and/or GPS to bridge temporary loss or occlusion of its view of the landmark elements or display media 600. If too few landmark elements or display media 600 are visible during the teaching or mowing run, the robot outputs a suggestion (e.g. via a smartphone app) as to how many elements should be added and where they should be placed. If a landmark element or display medium 600 becomes obscured over time by plant growth, the robot gives a corresponding indication and requests that the line of sight be cleared. If a landmark element or display medium 600 is unintentionally displaced or rotated significantly, the robot recognizes that the relative relationship between the landmark elements or display media 600 has changed and asks the user to repeat the teaching for the affected parts of the garden.
If surveillance cameras are also present on the property, they can optionally be included in the system: the method for detecting the regions 110 of landmark elements or display media 600 also runs on the surveillance cameras. The generally fixed locations of the landmark elements or display media 600 are captured by the surveillance cameras and stored. From time to time, for example once per minute, these locations are re-acquired and compared with the stored locations. If a landmark position has changed, for example inadvertently, the surveillance camera system, networked with the robot, reports the change of position to the robot. Repeated teaching can then be dispensed with if appropriate. The surveillance camera system can also confirm that nothing has changed and that mowing can proceed normally. Conversely, a surveillance camera can itself benefit from the arrangement of landmark elements or display media 600: the robot builds a 2- or 3-dimensional digital map of its environment during its travels, in which the landmark elements or display media 600 have fixed locations. Each surveillance camera finds some of the landmark elements or display media 600 and can position itself relative to the map built by the robot, i.e. self-localize. This presupposes networking of robot and cameras. Without networking, a surveillance camera can still localize itself relative to the landmark elements or display media 600.
Networking can go even further: the map created by the robot is uploaded back to the server on which the planning was previously carried out. Planned and actual states can be compared with each other. The user obtains a digital image of their garden in which they can influence the control of the robot more precisely. Optionally, the images from the robot's camera can be incorporated into this digital image, so that the user can later inspect the state of their garden in detail in a virtual-reality view, for example across the seasons. In return, the manufacturer may obtain data about the gardens of voluntarily participating users and use this information to further develop its mowing robots and landmark elements or display media 600. The applications and modes of operation shown here for gardens can also be transferred to the interior of a house, in particular for vacuum-cleaning robots, floor-cleaning robots, window-cleaning robots, flying surveillance robots and non-mobile surveillance cameras.
Fig. 23 shows a camera image of a conveyor belt as display medium 600, the conveyor belt carrying an embodiment of a pattern 610 of predefined point-symmetric regions, and objects 2300 placed on the belt. The conveyor belt is thus provided with a pattern 610 similar or corresponding to one of the patterns described above. The pattern 610 is built up of predefined point-symmetric regions with odd and even symmetry. The centers of symmetry 112A and 112B of these predefined point-symmetric regions are shown in fig. 23. In other words, fig. 23 shows a camera image with a snapshot of the belt, viewed from above. On the belt, objects 2300 are placed, here for example various lamps with GU10 bases. Even though the objects 2300 obscure more than half of the conveyor belt, a portion of the pattern 610 large enough for its function is still visible. When the image is evaluated, the plotted point symmetry centers 112A and 112B are found. Based on the encoding of the pattern 610, the absolute position on the conveyor belt can be determined unambiguously in the conveyor belt coordinate system.
Fig. 24 shows the camera image of fig. 23 after processing using the provision method of fig. 3. Fig. 24 thus shows the contours identified in the camera image of fig. 23 as a result of the partial occlusion of the pattern 610 on the conveyor belt. Here, after the coordinate range of the conveyor belt contained in the camera image has been determined, the corresponding known reference pattern is compared with the camera image pixel by pixel. Wherever the camera image and the reference pattern deviate from each other, there is occlusion. These areas are highlighted in fig. 24. In this way, the exact contour of the objects 2300 and their locations can be detected.
Referring to figs. 23 and 24, it should be noted that a pattern 610 with incorporated symmetries can be used for moving belts, in particular conveyor belts, the pattern 610 being applied on the outside or inside of the belt and observed with at least one imaging sensor (e.g. a camera), in particular while the belt is in use and moving. If, for example, the surface of a conveyor belt used in industrial production is provided with a suitable pattern 610, this yields potential advantages for observing and controlling processes performed at least in part by robots. The centers of symmetry 112A and 112B form marked places on the conveyor belt that remain well-defined even under mechanical stretching of the belt, when the belt runs inaccurately (e.g. with lateral offset), or when the belt vibrates. These marked places can define a flexible coordinate system rigidly attached to the belt.
By knowing and using the relative arrangement of the centers of symmetry 112A and 112B with respect to one another, in particular taking into account the odd and even symmetries, the absolute coordinates of each belt section visible in the camera image can be determined in the belt coordinate system by evaluating a camera image containing sections of the conveyor belt. This can be used, for example, to place an object 2300 at a well-defined location on the conveyor belt, from which it can be removed again at a different location without the object 2300 having to be detected or identified there. To this end, the placement station, equipped with a camera and the corresponding evaluation, digitally provides, for example, the coordinates at which the object 2300 was placed. Additional information can be provided along with this, such as the object type, how the object 2300 is oriented on the conveyor belt, which side faces down, where the object can be grasped, and so on. At the next pick-up station, also equipped with a camera and evaluation, the digitally transmitted data is then available in time.
By observing the conveyor belt with a camera and evaluating continuously, the belt coordinates and belt speed at the grasping station can be known at all times. Under normal circumstances, the object 2300 is still located precisely at the belt coordinates at which it was previously placed. The pick-up mechanism can therefore be ready in time, or a robot can be controlled reliably to pick up the object 2300 from the conveyor belt accurately and without impact. The object 2300 itself does not need to be recognized or measured in the camera image; knowing the digitally transmitted data about its type, orientation and position in the belt coordinate system is sufficient for a reliable grasping function. Fig. 23 shows an evaluated image from a camera viewing, from above, a conveyor belt on which objects 2300 are positioned. It can be seen that, despite the partial occlusion, more than enough symmetry centers 112A and 112B are found to determine the corresponding location in the belt coordinate system.
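A minimal sketch of the belt-coordinate and speed determination, assuming the symmetry centers have been detected per frame and matched to the reference grid; the pixel-to-meter scale, grid pitch and all numbers are invented for illustration.

```python
import numpy as np

# Minimal sketch, assuming symmetry centers are detected per frame and
# matched to the belt's reference grid (PITCH along the belt axis).

PITCH = 0.05                                      # 5 cm between grid centers
SCALE = 0.001                                     # 1 px = 1 mm (assumed)

def belt_offset(detected_x_px, matched_indices):
    """Belt coordinate of the camera view: mean difference between the
    absolute grid positions and the detections converted to meters."""
    grid = np.asarray(matched_indices) * PITCH
    return float(np.mean(grid - np.asarray(detected_x_px) * SCALE))

# two frames 40 ms apart: the same three centers moved 80 px in the image
o1 = belt_offset([100, 150, 200], [10, 11, 12])
o2 = belt_offset([20, 70, 120], [10, 11, 12])
print(o1, o2, (o2 - o1) / 0.040)                  # offsets in m, speed ~2 m/s
```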
In a simple design, the evaluation unit or determination device of fig. 1 needs only very little information about the pattern 610 applied to the belt, namely at which coordinates the centers of symmetry 112A and 112B are located and their signs, i.e. odd (negative) or even (positive) point symmetry in each case. In another design, the evaluation unit or determination device of fig. 1 is additionally provided with the complete reference pattern, for example in the form of an image file, also referred to as the reference image, for comparison purposes. Alternatively, the reference pattern can be provided as a calculation rule, for example in the form of parameters of a quasi-random number generator that always delivers the same number sequence for the same initialization and is invoked here multiple times to create a reference pattern shared across several spatial frequency ranges, so that the reference pattern is suitable for short, medium and long distances at the same time. The reference image (or a portion of it) can thus be calculated on demand and occupies little storage space when not in use.
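One way such a calculation rule could look, assuming a seeded quasi-random generator and simple piecewise-constant octaves as the components in the different spatial frequency ranges; all parameters are illustrative.

```python
import numpy as np

# Sketch of a reference pattern as a calculation rule: a seeded generator
# reproduces the identical pattern on demand, and summing noise octaves of
# different cell sizes shares the pattern across several spatial frequency
# ranges (usable at short, medium and long range).

def reference_patch(seed, size=256, octaves=(4, 16, 64)):
    rng = np.random.default_rng(seed)            # same seed -> same pattern
    out = np.zeros((size, size))
    for cells in octaves:                        # coarse to fine detail
        coarse = rng.standard_normal((cells, cells))
        out += np.kron(coarse, np.ones((size // cells, size // cells)))
    out -= out.min()
    return (255 * out / out.max()).astype(np.uint8)

a = reference_patch(seed=42)
b = reference_patch(seed=42)
assert (a == b).all()                            # recomputable, no storage needed
```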
Using knowledge of the reference image, the camera image can then be compared, e.g. pixel by pixel, with the corresponding section of the reference image. A prerequisite is the appropriate scaling and rotation, so that the two images match in resolution and orientation. This additional image comparison step only becomes possible, or is significantly simplified, because the position in the belt coordinate system has already been determined based on the pattern 610 with its encoded point symmetries: any remaining uncertainty about this position is very small. In other words, the two images to be compared are already well registered to each other, apart from a small local residual uncertainty.
A high-precision evaluation can then be carried out based on an exact comparison of camera image and reference image, for example to determine local distortions of the belt due to its current loading, aging or poor belt tracking. With an empty, clean conveyor belt, nearly perfect agreement over the entire surface between the camera image (or camera image section) and the corresponding reference image section should then be obtained. If, however, an object 2300 locally occludes the pattern 610 on the conveyor belt, the agreement between camera image and reference image is no longer given at these locations or pixels. From this information, a mask of the object 2300 on the conveyor belt can be generated, or the exact contour of the object 2300 can be derived; see fig. 24.
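A sketch of the mask generation, assuming camera image and reference image are already registered as described; the threshold and the synthetic images are assumptions.

```python
import numpy as np

# Sketch of the occlusion mask: registered camera and reference image are
# compared pixel by pixel; pixels that disagree beyond a tolerance belong to
# objects covering the pattern.

def occlusion_mask(camera, reference, tol=25):
    """camera, reference: registered 2D uint8 arrays of equal shape."""
    diff = np.abs(camera.astype(np.int16) - reference.astype(np.int16))
    return diff > tol                            # True where the belt is covered

ref = np.tile(np.array([[0, 255], [255, 0]], dtype=np.uint8), (50, 50))
cam = ref.copy()
cam[30:60, 40:80] = 128                          # simulated object on the belt
mask = occlusion_mask(cam, ref)
print(mask.sum(), "occluded pixels")             # contour follows the object
```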
Further advantages are listed below in non-exhaustive form. Additional sensors and the associated processing steps commonly used for presence detection, positioning, measurement, classification or orientation identification, such as light barriers, laser scanners, time-of-flight sensors, ultrasonic sensors, rangefinders, mechanical switches and specialized camera-based algorithms, can be saved. In addition to the possibilities already mentioned of determining the absolute position in the belt coordinate system from the found symmetry centers 112A and 112B, and the speed from the time derivative of the position, it is also possible to determine belt stretching or compression, in particular stretching in the longitudinal direction. Stretching may be caused, for example, by aging or overloading of the belt, by stiff drums or other parts of the mechanism, by the loading of the belt, or by poorly synchronized drive units. Determining the stretch can help detect adverse effects early. Likewise, if the belt deviates laterally from its target track, this can be detected in time without having to observe the belt edges, which might be occluded, with the camera; otherwise such lateral deviations can have a negative effect, for example, on service life. Longitudinal or transverse vibrations of the belt can also be detected: longitudinal vibrations manifest as small periodic deviations in the determined longitudinal speed and may be caused by defective or sluggish bearings, possibly causing objects to slip on the conveyor belt.
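The stretch determination mentioned above can be sketched as a line fit between nominal and measured center positions along the belt; the numbers are invented.

```python
import numpy as np

# Sketch: longitudinal stretch shows up as a scale factor between the
# measured center positions and their nominal belt coordinates. A least
# squares line fit detected = a * nominal + b estimates it; a > 1 means
# the belt is stretched.

nominal = np.array([0.00, 0.05, 0.10, 0.15, 0.20])            # reference (m)
detected = np.array([0.000, 0.0506, 0.1011, 0.1516, 0.2021])  # measured (m)

a, b = np.polyfit(nominal, detected, 1)
print(f"stretch factor {a:.4f}")                 # ~1.01 -> about 1% elongation
```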
Retrofitting into existing systems is relatively simple: only the belt needs to be replaced by one carrying the pattern 610. An existing camera observing the belt can continue to be used if necessary. The software or embedded hardware used to evaluate its images can be replaced or supplemented by an algorithm for detecting the symmetry centers and their further processing, or by an embodiment of one of the methods described above. The belts may be made of different materials or constructions, such as rubber belts, rubber-fabric belts, or belts of plastic or metal links. The pattern 610 can be applied to the upper or lower side of the belt, the upper side being the side in contact with the transported objects 2300 or goods, while the lower side is more likely in contact with the transport equipment, transport rollers and the like. Applying the pattern 610 on the underside can be advantageous because the pattern then cannot be occluded by the transported objects 2300, and because pattern 610 and camera can be better protected against contamination, for example in dusty environments. A pattern on the upper side can offer the advantage of higher accuracy, also because the objects 2300 rest on the upper side and are visible in the same camera image as the pattern 610.
If an embodiment includes an "and/or" link between a first feature and a second feature, this is to be understood as meaning that the embodiment according to one specific form has both the first feature and the second feature, and according to another specific form has either only the first feature or only the second feature.

Claims (15)

1. A method (300) for providing navigation data (135) for controlling a robot (2160), wherein the method (300) has the steps of:
reading in (324) image data (105) provided by means of a camera (102) from an interface (122) to the camera (102), wherein the image data (105) represent a camera image of at least one predefined even and/or odd point-symmetrical region (110; 110A, 110B) in the environment of the camera (102);
determining (326) at least one center of symmetry (112; 112A, 112B) of the at least one even and/or odd point-symmetric region (110; 110A, 110B) using the image data (105) and a determination rule (128);
comparing (330) the position of the at least one center of symmetry (112; 112A, 112B) in the camera image with a predefined position of at least one reference center of symmetry in a reference image (115) with respect to a reference coordinate system to determine a positional deviation (131) between the center of symmetry (112; 112A, 112B) and the reference center of symmetry; and/or
obtaining (332) displacement information (133) of at least a subset of pixels of the camera image relative to corresponding pixels of the reference image (115) using the positional deviation (131), wherein the navigation data (135) is provided using the positional deviation (131) and/or the displacement information (133).
2. The method (300) of claim 1, wherein the determination rule (128) used in the determining step (326) is structured such that
a signature (S) is generated for each of a plurality of pixels of at least one section of the camera image in order to obtain a plurality of signatures (S), wherein each of the signatures (S) is generated using a descriptor with a plurality of different filters, wherein each filter has at least one symmetry type, and wherein each of the signatures (S) has a sign for each filter of the descriptor,
for each signature (S), at least one mirror signature (S_PG, S_PU) for at least one symmetry type of the filters is determined,
it is checked whether a pixel with the signature (S) has, in a search area (1104) in its surroundings, at least one further pixel whose signature corresponds to the at least one mirror signature (S_PG, S_PU), in order to find, when such a pixel is present, the pixel coordinates of at least one symmetric signature pair formed by the pixel and the at least one further pixel,
and the pixel coordinates of the at least one symmetric signature pair are evaluated to identify the at least one center of symmetry (112; 112A, 112B),
and/or wherein at least one reflector (R_PG, R_PU) is applied to the signs of one of the signatures (S) to determine the at least one mirror signature (S_PG, S_PU), wherein each reflector (R_PG, R_PU) has a rule, specific to the symmetry type and dependent on the filters of the descriptor, for modifying the signs, and wherein the search area (1104) depends on at least one of the applied reflectors (R_PG, R_PU).
3. The method (300) according to claim 2, wherein in the determining step (326), for each determined center of symmetry (112; 112A, 112B), the pixel coordinates of each symmetric signature pair that contributed to the correct identification of that center of symmetry (112; 112A, 112B) are used to determine a transformation rule for transforming the pixel coordinates of the center of symmetry (112; 112A, 112B) and/or of the at least one even and/or odd point-symmetric region (110; 110A, 110B), wherein the transformation rule is applied to the pixel coordinates of the center of symmetry (112; 112A, 112B) and/or of the at least one even and/or odd point-symmetric region (110; 110A, 110B) to correct a distorted view of the camera image.
4. The method (300) according to any of the preceding claims, wherein a symmetry type of the at least one center of symmetry (112; 112A, 112B) is determined in the determining step (326), wherein the symmetry type represents even point symmetry and/or odd point symmetry, and/or wherein in the comparing step (330) the symmetry type of the at least one center of symmetry (112; 112A, 112B) in the camera image is compared with a predefined symmetry type of at least one reference center of symmetry in the reference image (115) to check the consistency between the at least one center of symmetry (112; 112A, 112B) and the at least one reference center of symmetry.
5. The method (300) according to claim 4, wherein the image data (105) read in the reading step (324) represent a camera image of at least one pattern (610; 1710, 1810) composed of a plurality of predefined even and/or odd point-symmetric regions (110; 110A, 110B), wherein in the determining step (326) a geometric arrangement of the symmetry centers (112; 112A, 112B) of the at least one pattern (610; 1710, 1810) is determined, a geometric sequence of the symmetry types of the symmetry centers (112; 112A, 112B) is determined, and/or the pattern (610; 1710, 1810) is determined from a plurality of predefined patterns using the sequence, wherein the arrangement and/or the sequence represents an identification code of the pattern (610; 1710, 1810).
6. The method (300) according to claim 5, wherein in the determining step (326) the arrangement of the symmetry centers (112; 112A, 112B) of the at least one pattern (610; 1710, 1810) and/or the sequence of the symmetry types of the symmetry centers (112; 112A, 112B) is used to determine implicit additional information of the at least one pattern (610; 1710, 1810) or a readout rule for reading out explicit additional information in the camera image, wherein the arrangement and/or the sequence represents the additional information in encoded form, and wherein the additional information is relevant to controlling the robot (2160).
7. The method (300) according to any one of claims 5 to 6, wherein in the step of comparing (330), the reference image (115) is selected from a plurality of stored reference images according to the determined arrangement, the determined sequence and/or the determined pattern (610; 1710, 1810) or the reference image (115) is generated using stored generation rules.
8. The method (300) according to any one of claims 5 to 7, wherein the determining step (326) and/or the comparing step (330) are performed jointly for all symmetry centers (112; 112A, 112B) independently of the symmetry type of the symmetry centers (112; 112A, 112B), or the determining step (326) and/or the comparing step (330) are performed separately for symmetry centers (112; 112A, 112B) of the same symmetry type depending on the symmetry type of the symmetry centers (112; 112A, 112B).
9. A method (400) for controlling a robot (2160), wherein the method (400) has the steps of:
evaluating (444) navigation data (135) provided by the method (300) according to any of the preceding claims to generate a control signal (145) dependent on the navigation data (135); and
the control signal (145) is output to an interface (148) to the robot (2160) to control the robot (2160).
10. A method (500) for manufacturing at least one predefined even and/or odd point-symmetric region (110; 110A, 110B) for use in a method (300; 400) according to any of the preceding claims, wherein the method (500) has the steps of:
generating (502) design data (204) representing a graphical representation of the at least one predefined even and/or odd point-symmetric region (110; 110A, 110B); and
generating (506) the at least one predefined even and/or odd point-symmetric region (110; 110A, 110B) on, at or in a display medium (600) using the design data (204), in order to manufacture the at least one predefined even and/or odd point-symmetric region (110; 110A, 110B).
11. The method (500) according to claim 10, wherein in the generating step (502) design data (204) is generated which represents the graphical representation of the at least one predefined even and/or odd point-symmetric region (110; 110A, 110B) as a circle, ellipse, square, rectangle, pentagon, hexagon, polygon or annulus, wherein the at least one predefined even and/or odd point-symmetric region (110; 110A, 110B) has a regular or quasi-random content pattern, and/or wherein one half of the at least one predefined even and/or odd point-symmetric region (110; 110A, 110B) is arbitrarily predefined and the second half is constructed from it by point mirroring and/or inversion of the gray values and/or color values, and/or wherein in the generating step (506) the at least one predefined even and/or odd point-symmetric region (110; 110A, 110B) is produced by an additive manufacturing process, separating, coating, forming, primary shaping or an optical display, and/or wherein the display medium (600) comprises glass, stone, rubber, plastic, paper, metal, cardboard or concrete.
12. The method (500) according to any one of claims 10 to 11, wherein in the generating step (502) design data (204) is generated which represents a graphical representation of at least one pattern (610; 1710, 1810) composed of a plurality of predefined even and/or odd point-symmetric regions (110; 110A, 110B), wherein at least a subset of the point-symmetric regions (110; 110A, 110B) are aligned on a regular or irregular grid (1311), directly adjoin one another and/or are separated from at least one adjacent even and/or odd point-symmetric region (110; 110A, 110B) by a gap, are identical to or different from one another in their size and/or their content pattern, and/or are arranged in a common plane or in different planes, and/or wherein in the generating step (502) design data (204) is generated which represents a graphical representation of at least one pattern (610; 1710, 1810) with hierarchical symmetry.
13. An apparatus (120; 140; 200) arranged to perform and/or manipulate the steps of the method (300; 400; 500) according to any of the preceding claims in a corresponding unit (124, 126, 130, 132;144, 146;202, 206).
14. Computer program arranged to perform and/or manipulate the steps of a method (300; 400; 500) according to any of claims 1 to 12.
15. A machine readable storage medium having stored thereon the computer program of claim 14.
CN202180090220.6A 2020-11-12 2021-10-15 Method for providing navigation data for controlling a robot, method and device for manufacturing at least one predefined point-symmetrical area Pending CN116710975A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE102020214249.1 2020-11-12
DE102020214249.1A DE102020214249A1 (en) 2020-11-12 2020-11-12 Method for providing navigation data for controlling a robot, method for controlling a robot, method for producing at least one predefined point-symmetrical area and device
PCT/EP2021/078611 WO2022100960A1 (en) 2020-11-12 2021-10-15 Method for providing navigation data in order to control a robot, method for controlling a robot, method for producing at least one predefined point-symmetric region, and device

Publications (1)

Publication Number Publication Date
CN116710975A true CN116710975A (en) 2023-09-05

Family

ID=78212127

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180090220.6A Pending CN116710975A (en) 2020-11-12 2021-10-15 Method for providing navigation data for controlling a robot, method and device for manufacturing at least one predefined point-symmetrical area

Country Status (3)

Country Link
CN (1) CN116710975A (en)
DE (1) DE102020214249A1 (en)
WO (1) WO2022100960A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115781765B (en) * 2023-02-02 2023-07-25 科大讯飞股份有限公司 Robot fault diagnosis method, device, storage medium and equipment

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2778525B2 (en) 1995-05-31 1998-07-23 日本電気株式会社 Polygon figure shaping device
KR100485696B1 (en) * 2003-02-07 2005-04-28 삼성광주전자 주식회사 Location mark detecting method for a robot cleaner and a robot cleaner using the same method
JP4572947B2 (en) 2008-03-31 2010-11-04 ブラザー工業株式会社 Image generating apparatus and printing apparatus
JP5364528B2 (en) 2009-10-05 2013-12-11 株式会社日立ハイテクノロジーズ Pattern matching method, pattern matching program, electronic computer, electronic device inspection device
JP5491162B2 (en) 2009-12-24 2014-05-14 浜松ホトニクス株式会社 Image pattern matching apparatus and method
JP2012254518A (en) 2011-05-16 2012-12-27 Seiko Epson Corp Robot control system, robot system and program
JP5623473B2 (en) 2012-08-01 2014-11-12 キヤノン株式会社 Image processing apparatus, image processing method, and program.
US10533846B2 (en) 2015-06-09 2020-01-14 Mitsubishi Electric Corporation Image generation device, image generating method, and pattern light generation device
US10386847B1 (en) 2016-02-19 2019-08-20 AI Incorporated System and method for guiding heading of a mobile robotic device
KR102203439B1 (en) * 2018-01-17 2021-01-14 엘지전자 주식회사 a Moving robot and Controlling method for the moving robot
DE102020202160A1 (en) 2020-02-19 2021-08-19 Robert Bosch Gesellschaft mit beschränkter Haftung Method for determining a symmetry property in image data, method for controlling a function and device

Also Published As

Publication number Publication date
DE102020214249A1 (en) 2022-05-12
WO2022100960A1 (en) 2022-05-19

Similar Documents

Publication Publication Date Title
US9989969B2 (en) Visual localization within LIDAR maps
US11120560B2 (en) System and method for real-time location tracking of a drone
US6778171B1 (en) Real world/virtual world correlation system using 3D graphics pipeline
CN110345937A (en) Appearance localization method and system are determined in a kind of navigation based on two dimensional code
CN107024199A (en) Surveyed and drawn by mobile delivery vehicle
CN102419178A (en) Mobile robot positioning system and method based on infrared road sign
CN106872993A (en) Portable distance-measuring device and the method for catching relative position
CN101702233B (en) Three-dimension locating method based on three-point collineation marker in video frame
AU2007355942A2 (en) Arrangement and method for providing a three dimensional map representation of an area
US10949980B2 (en) System and method for reverse optical tracking of a moving object
Zhang et al. High-precision localization using ground texture
US11398085B2 (en) Systems, methods, and media for directly recovering planar surfaces in a scene using structured light
Yamamoto et al. Optical sensing for robot perception and localization
RU2697942C1 (en) Method and system for reverse optical tracking of a mobile object
CN116710975A (en) Method for providing navigation data for controlling a robot, method and device for manufacturing at least one predefined point-symmetrical area
Joshi et al. High definition, inexpensive, underwater mapping
CN112074706A (en) Accurate positioning system
US20240013437A1 (en) Method for providing calibration data for calibrating a camera, method for calibrating a camera, method for producing at least one predefined point-symmetric region, and device
CN110580721A (en) Continuous area positioning system and method based on global identification map and visual image identification
TWI812865B (en) Device, method, storage medium and electronic apparatus for relative positioning
CN113008135B (en) Method, apparatus, electronic device and medium for determining a position of a target point in space
CN116783628A (en) Method for providing monitoring data for detecting a movable object, method and device for manufacturing at least one predefined point-symmetrical area
JP7472946B2 (en) Location acquisition device, location acquisition method, and program
Herrmann et al. Robust human-identifiable markers for absolute relocalization of underwater robots in marine data science applications
Stepanov et al. VISUAL-INERTIAL SENSOR FUSION TO ACCURACY INCREASE OF AUTONOMOUS UNDERWATER VEHICLES POSITIONING.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination