CN116783628A - Method for providing monitoring data for detecting a movable object, method and device for manufacturing at least one predefined point-symmetrical area - Google Patents


Info

Publication number: CN116783628A
Application number: CN202180090216.XA
Authority: CN (China)
Prior art keywords: symmetry, pattern, point, camera, predefined
Inventor: S. Simon
Assignee (current and original): Robert Bosch GmbH
Other languages: Chinese (zh)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06V: Image or Video Recognition or Understanding
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/54: Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06V: Image or Video Recognition or Understanding
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V 10/225: Image preprocessing by selection of a specific region, based on a marking or identifier characterising the area

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a method for providing monitoring data (135) for detecting a movable object (100). The method comprises a step of reading in image data (105), provided by means of a camera (102), from the camera (102). The image data (105) represent a camera image of the environment of the camera (102), in which at least one predefined even and/or odd point-symmetric region (110) is arranged in the field of view of the camera (102); from the perspective of the camera (102), this at least one predefined region (110) can be at least partially occluded by the movable object (100). The method further comprises a step of determining the presence of at least one center of symmetry (112) of the at least one even and/or odd point-symmetric region (110) using the image data (105) and a determination rule (128), in order to determine an occlusion state of the at least one predefined even and/or odd point-symmetric region (110) by the movable object (100), and a step of providing the monitoring data (135) as a function of the occlusion state.

Description

Method for providing monitoring data for detecting a movable object, method and device for manufacturing at least one predefined point-symmetrical area
Technical Field
The invention is based on a device or a method of the generic kind defined in the independent claims. Computer programs are also subject matter of the present invention.
Background
Conventional light gratings use a narrow beam of light; interruption of the beam by an object entering it is detected by a sensor. The light source and the sensor may be arranged on opposite sides of the area to be monitored. Another conventional design is the reflection grating, in which the light source and the sensor are arranged on the same side, for example in a common housing, and a retro-reflector arranged on the opposite side reflects the light back in the direction from which it came. Both classical forms require, in particular, their own light source and are therefore active systems in this sense. The light beam used may be visible, or may be made visible, for example by means of fog or, in the case of an infrared grating, by means of an infrared camera.
The post-published DE 10 2020 202 160 A1 discloses a method for determining symmetry properties in image data and a method for controlling a function.
Disclosure of Invention
Against this background, the approach presented here proposes a method according to the main claim, a device that uses this method, and a corresponding computer program. Advantageous refinements and improvements of the device specified in the independent claim are possible through the measures listed in the dependent claims.
According to an embodiment, the following fact in particular may be exploited: points or objects in the world are, or will be, marked by means of point-symmetric regions, so that a system with an imaging sensor and the suitable method proposed here can detect and locate these point-symmetric regions with high accuracy and thus perform specific technical functions robustly and locally, optionally without humans or other living beings perceiving such markings as a disturbance.
For example, a symmetric region may not be fully imaged in the camera image: it may be partially occluded by an object, it may partially protrude beyond the image, or the pattern may have been cropped. Advantageously, the positioning accuracy of the point-symmetry center can still be maintained, since partial occlusion does not distort its position: the remaining point-symmetric pairs still vote for the correct center of symmetry. Partial occlusion merely reduces the strength of the peak in the voting matrix or the like; the location of the center of symmetry is preserved and can still be determined accurately and simply. This is a particular advantage of exploiting point symmetry.
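This robustness can be illustrated with a minimal voting sketch. It is a simplified illustration, not the patented procedure: each symmetric pixel pair votes for its midpoint, and the pair list and grid size are assumed for the example.

```python
import numpy as np

def vote_for_centers(pairs, shape):
    """Accumulate votes: each symmetric pixel pair votes for its midpoint."""
    votes = np.zeros(shape, dtype=int)
    for (x1, y1), (x2, y2) in pairs:
        cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
        votes[cy, cx] += 1
    return votes

# Pairs all symmetric about the center (5, 5)
pairs = [((5 - d, 5), (5 + d, 5)) for d in range(1, 5)]
full = vote_for_centers(pairs, (11, 11))

# Occlude half of the pairs: the peak weakens but stays in the same place
occluded = vote_for_centers(pairs[:2], (11, 11))
assert np.unravel_index(full.argmax(), full.shape) == (5, 5)
assert np.unravel_index(occluded.argmax(), occluded.shape) == (5, 5)
assert occluded.max() < full.max()
```

The asserts show the key property: removing pairs lowers the peak height in the voting matrix but not the location of its maximum.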
A further advantage when searching for point-symmetry-based regions or patterns results in particular from the fact that point symmetry is invariant with respect to rotation between the point-symmetric region and the camera or image recording, and largely invariant with respect to the viewing angle. For example, point symmetry in a plane is invariant under affine imaging, and the imaging of an arbitrarily oriented plane by a real camera can at least locally always be well approximated by an affine mapping. If, for example, a circular point-symmetric region is observed at an oblique angle, the circle becomes an ellipse in which the point-symmetry property and the center of symmetry are preserved. The at least one point-symmetric region therefore does not have to be observed frontally; even a very oblique view causes no difficulties, and the achievable accuracy is maintained. This invariance, in particular with respect to rotation and viewing angle, makes it possible to dispense with precautions for aligning the camera with the symmetric region, or vice versa. Instead, it is sufficient that a corresponding point-symmetric region is at least partially captured in the camera image so that it can be detected. The relative position and orientation between the point-symmetric region and the camera is in this case unimportant or hardly important.
The invention proposes a method for providing monitoring data for detecting a movable object, wherein the method has the following steps:
reading in image data, provided by means of a camera, from an interface to the camera, wherein the image data represent a camera image of an environment of the camera, wherein at least one predefined even and/or odd point-symmetric region in the environment is arranged in the field of view of the camera, and wherein the at least one predefined point-symmetric region can be at least partially occluded by the movable object from the perspective of the camera; and
determining the presence of at least one center of symmetry of the at least one even and/or odd point symmetric region in the camera image using the image data and a determination rule to determine an occlusion state of the movable object for the at least one predefined even and/or odd point symmetric region, wherein the monitoring data is provided in accordance with the occlusion state.
The method may be implemented, for example, in software or hardware, or in a mixed form of software and hardware, for example in a control unit or device. The at least one predefined point-symmetric region can be manufactured by executing a variant of the method for manufacturing described below. In the reading-in step, image data may also be read in from a plurality of cameras, in which case these image data represent a plurality of camera images of the at least one predefined even and/or odd point-symmetric region. The determination rule may be similar or correspond to the procedure disclosed in the applicant's post-published DE 10 2020 202 160.
According to an embodiment, the determination rule used in the determining step may be configured such that a signature is generated for each of a plurality of pixels of at least one section of the camera image, to obtain a plurality of signatures. Each signature may be generated using a descriptor with a plurality of different filters, where each filter has at least one symmetry type and each signature has a sign for each filter of the descriptor. The determination rule may further be structured such that, for a signature, at least one mirror signature is determined for at least one symmetry type of the filters. The determination rule may also be structured such that it is checked whether a pixel with a signature has, in a search area in its surroundings, at least one further pixel whose signature corresponds to at least one mirror signature, in order to determine, when such a further pixel is present, the pixel coordinates of at least one symmetric signature pair from the pixel and the further pixel. Additionally, the determination rule may be configured such that the pixel coordinates of the at least one symmetric signature pair are evaluated to identify the at least one center of symmetry. The descriptor describes the image content in a local environment around a pixel or reference pixel in compact form; the signature represents the value of this descriptor for the pixel, for example in binary form. The at least one mirror signature may thus be determined using a plurality of computed signature images, for example one signature image with normal filters, one with even point-mirrored filters, and one with odd point-mirrored filters. Additionally or alternatively, at least one reflector may be applied to the signs of one of the signatures to determine the at least one mirror signature. In this case, each reflector may have a rule, specific to the symmetry type and dependent on the descriptor's filters, for modifying the signs. The search area may depend on the at least one reflector used. Such an embodiment offers the advantage of enabling efficient and accurate detection of symmetry properties in image data; symmetry detection in the image can be achieved with minimal effort.
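The signature-and-reflector mechanism can be sketched as follows. This is a minimal illustration under assumptions: a hypothetical four-filter descriptor built from pairwise gray-value comparisons, with the reflector realized as a bitwise inversion of the sign bits. The actual descriptor, filters and reflectors of the determination rule may differ.

```python
import numpy as np

# Hypothetical descriptor: each "filter" compares the neighbourhood value at
# offset +d against -d and contributes one sign bit to the signature.
OFFSETS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def signature(img, y, x):
    bits = 0
    for i, (dy, dx) in enumerate(OFFSETS):
        if img[y + dy, x + dx] > img[y - dy, x - dx]:
            bits |= 1 << i
    return bits

def mirror_signature(sig, n_bits=len(OFFSETS)):
    # Reflector for even point symmetry with these comparison filters:
    # at the point-mirrored pixel every comparison flips, so every sign
    # bit is inverted.
    return sig ^ ((1 << n_bits) - 1)

# Even point-symmetric test image: a + rot180(a) is symmetric about (4, 4).
rng = np.random.default_rng(0)
a = rng.random((9, 9))
img = a + a[::-1, ::-1]

p, q = (2, 3), (6, 5)   # q is the point mirror of p about the center (4, 4)
assert signature(img, *q) == mirror_signature(signature(img, *p))
```

The assert verifies the property the determination rule exploits: the signature at a point-mirrored pixel equals the mirror signature of the original pixel, so symmetric signature pairs can be found by lookup.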
In this case, in the determining step, for each determined center of symmetry, a transformation rule for transforming the pixel coordinates of the center of symmetry and/or of the at least one predefined even and/or odd point-symmetric region may be determined using the pixel coordinates of those symmetric signature pairs that contributed to correctly identifying that center of symmetry. The transformation rule may be applied to the pixel coordinates of the center of symmetry and/or of the at least one predefined even and/or odd point-symmetric region in order to correct for the distorted viewing angle of the camera image. Such an embodiment offers the advantage that a reliable and accurate reconstruction of the correct grid or correct topology of a plurality of point-symmetric regions can be achieved.
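How such a transformation rule might be obtained can be sketched as a least-squares affine fit between undistorted grid coordinates and observed pixel coordinates, consistent with the affine-imaging approximation discussed above. The point sets and the transform here are illustrative assumptions, not the patented procedure.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points onto dst points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # 3x2 parameter matrix
    return M

def apply_affine(M, pts):
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Ground truth: an oblique view modelled as shear + anisotropic scale + shift.
true_M = np.array([[1.2, 0.3], [-0.2, 0.8], [5.0, -3.0]])
grid = [(x, y) for x in range(4) for y in range(3)]   # undistorted grid
observed = apply_affine(true_M, grid)                 # centers as seen by camera

M = fit_affine(grid, observed)                        # recover the mapping
assert np.allclose(apply_affine(M, grid), observed)
```

Inverting the fitted mapping would then "rectify" the observed centers back onto the regular grid, which is the sense in which the viewing-angle distortion is corrected.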
Furthermore, the method may comprise a step of comparing at least one center of symmetry from the camera image with at least one reference center of symmetry from reference data, in terms of intensity, intensity variation over time, and/or local intensity variation, to determine an intensity-related deviation between the center of symmetry and the reference center of symmetry. In this case, the monitoring data are provided as a function of the deviation. The reference data may represent information about a reference center of symmetry or a reference image; the reference data or reference image may be selected from a plurality of stored reference data or reference images, or generated using stored generation rules. The deviation may be determined with respect to the detected intensity or symmetry response of the respective center of symmetry, which may be measured as the weight or, after smoothing, the height of the peak at the convergence point. In particular, the deviation may be determined from the change of one or more responses over time.
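A minimal sketch of deriving an occlusion state from such an intensity deviation follows; the response values, threshold and center labels are assumptions for illustration.

```python
def occlusion_state(responses, reference, threshold=0.5):
    """Flag a reference center as occluded when its measured symmetry
    response drops below a fraction of the stored reference response."""
    return {c: responses.get(c, 0.0) < threshold * reference[c]
            for c in reference}

ref = {"A": 10.0, "B": 8.0}        # reference peak heights per center
measured = {"A": 9.5, "B": 1.0}    # an object blocks center B
state = occlusion_state(measured, ref)
assert state == {"A": False, "B": True}
```

A center that is absent from the measured responses is treated as fully occluded (response 0.0), which matches the intuition that occlusion weakens or removes the voting peak.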
The type of symmetry of the at least one center of symmetry may also be determined in the determining step, where the symmetry type represents even and/or odd point symmetry. Additionally or alternatively, in the comparing step, the symmetry type of the at least one center of symmetry in the camera image may be compared with a predefined symmetry type of the at least one reference center of symmetry from the reference data, to check for consistency between the at least one center of symmetry and the at least one reference center of symmetry. Odd point symmetry is created by point mirroring with inversion of the gray or color values. By using and distinguishing two different point symmetries, the information content of the point-symmetric regions and patterns can be increased.
The image data read in the reading-in step may also represent a camera image of at least one pattern composed of a plurality of predefined even and/or odd point-symmetric regions. In the determining step, the geometric arrangement of the symmetry centers of the at least one pattern and/or the geometric sequence of their symmetry types may be determined, and the arrangement and/or the sequence may be used to identify the pattern among a plurality of predefined patterns. The arrangement and/or the sequence may represent an identification code of the pattern. Such an embodiment offers the advantage that the reliability of identifying the centers of symmetry can be increased and that further information can be obtained by identifying a specific pattern. Reliable identification of the centers of symmetry can also be achieved for different distances between the camera and the pattern.
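Reading the sequence of symmetry types as an identification code can be sketched as follows; the even/odd-to-bit mapping and the pattern table are illustrative assumptions, not part of the patent.

```python
def decode_pattern_id(types):
    """Read a sequence of symmetry types (e.g. row-major over the grid)
    as a binary code: even -> 0, odd -> 1."""
    code = 0
    for t in types:
        code = (code << 1) | (1 if t == "odd" else 0)
    return code

# Hypothetical lookup from code to pattern meaning
KNOWN_PATTERNS = {0b0110: "charging pad", 0b1011: "door area"}

seq = ["even", "odd", "odd", "even"]   # as determined from the camera image
assert decode_pattern_id(seq) == 0b0110
assert KNOWN_PATTERNS[decode_pattern_id(seq)] == "charging pad"
```

With n centers, 2^n distinct codes are available, which is the sense in which two symmetry types increase the information content of a pattern.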
Furthermore, in the determining step, implicit additional information of the at least one pattern, or a readout rule for reading out explicit additional information from the camera image, may be determined using the arrangement of the symmetry centers of the at least one pattern and, additionally or alternatively, using the sequence of symmetry types of the symmetry centers. The arrangement and/or the sequence may represent the additional information in encoded form. The additional information may be relevant to detecting the movable object. Such an embodiment offers the advantage that additional information can be conveyed by the topology of the at least one pattern.
Furthermore, the determining step and, additionally or alternatively, the comparing step may be performed jointly for all symmetry centers, independently of their symmetry type, or separately for symmetry centers of the same symmetry type. Joint execution achieves low memory and time requirements for accurately and reliably identifying the centers of symmetry; separate execution, in particular, minimizes confusion with randomly occurring patterns in the image.
A method for detecting a movable object is also proposed, wherein the method has the steps of:
evaluating the monitoring data provided according to an embodiment of the above method to generate a detection signal dependent on the monitoring data; and
outputting the detection signal to an interface to a processing unit that performs a grating function, in order to perform the detection of the movable object.
The method may be implemented, for example, in software or hardware or in a hybrid form of software and hardware, for example, in a control device or device. The method for detecting can be advantageously performed in combination with the embodiments of the method for providing described above.
Furthermore, a method for producing at least one predefined even and/or odd point-symmetrical region for use in an embodiment of the above method is proposed, wherein the method has the following steps:
generating design data representing a graphical representation of the at least one predefined even and/or odd point symmetric region; and
the at least one predefined even and/or odd point symmetric region is generated on, at or in the display medium using the design data to fabricate the at least one predefined even and/or odd point symmetric region.
The method may be implemented, for example, in software or hardware or in a hybrid form of software and hardware, for example, in a control device or device. By performing the manufacturing method, at least one predefined even and/or odd point symmetric region may be manufactured, which may be used within the scope of the embodiments of the method described above.
According to one embodiment, design data representing a graphical representation of the at least one predefined even and/or odd point-symmetric region as a circle, ellipse, square, rectangle, pentagon, hexagon, polygon or annulus (torus) may be generated in the generating step. In this case, the at least one predefined even and/or odd point-symmetric region may have a regular or quasi-random content pattern. Additionally or alternatively, the first half of the at least one predefined even and/or odd point-symmetric region can be arbitrarily predefined, and the second half constructed by point mirroring, optionally combined with an inversion of the gray values and/or color values. Additionally or alternatively, in the generating step, the at least one predefined even and/or odd point-symmetric region may be produced by an additive manufacturing process, cutting, coating, forming, primary shaping, or optical display. Additionally or alternatively, the display medium may comprise glass, stone, ceramic, plastic, rubber, metal, concrete, gypsum, paper, cardboard, food, or an optical display device. In this way, the at least one predefined even and/or odd point-symmetric region can be manufactured in a precisely suitable manner, depending on the specific application and the boundary conditions prevailing there.
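The construction of a region from an arbitrary first half by point mirroring, optionally with gray-value inversion, can be sketched as follows (a minimal sketch; the 8-bit gray range and rectangular array layout are assumptions):

```python
import numpy as np

def make_point_symmetric(half, odd=False):
    """Build a pattern from an arbitrary top half: the bottom half is the
    180-degree rotation of the top half, with gray values additionally
    inverted for odd point symmetry."""
    mirrored = half[::-1, ::-1]          # point mirroring = rotate by 180 deg
    if odd:
        mirrored = 255 - mirrored        # gray-value inversion
    return np.vstack([half, mirrored])

rng = np.random.default_rng(1)
half = rng.integers(0, 256, size=(8, 16), dtype=np.uint8)  # quasi-random half

even = make_point_symmetric(half)
odd = make_point_symmetric(half, odd=True)

# Even: the pattern equals its own 180-degree rotation
assert np.array_equal(even, even[::-1, ::-1])
# Odd: the 180-degree rotation equals the gray-value-inverted pattern
assert np.array_equal(255 - odd, odd[::-1, ::-1])
```

The two asserts state exactly the defining properties of even and odd point symmetry used throughout the description.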
Design data representing a graphical representation of at least one pattern of a plurality of predefined even and/or odd point symmetric regions may also be generated in the generating step. In this case, at least a subset of the even and/or odd point symmetric regions may be aligned on a regular or irregular grid, directly adjoined to each other and additionally or alternatively separated from at least one adjacent even and/or odd point symmetric region by a gap portion, may be identical to each other or different from each other in their size and/or their content pattern, and additionally or alternatively be arranged in a common plane or in different planes. Additionally or alternatively, in the generating step, design data representing a graphical representation of the at least one pattern having hierarchical symmetry may be generated. In this way, different patterns with specific information content and additionally or alternatively patterns with hierarchical symmetry can be produced for different distances from the patterns.
Even if the presence of corresponding markings is known, it is difficult, in particular for humans, to perceive the symmetry hidden in a pattern. This also makes it possible to conceal such markings. That may be significant or desirable for various reasons: for aesthetic reasons, because technical markings should not be visible; because attention should not be drawn to markings that are irrelevant to humans; or because the markings are to be kept secret. Aesthetic reasons play an important role especially in the design field. In the interior of a vehicle, on the vehicle skin, on aesthetically designed objects, or in interior or building architecture, for example, conspicuous technical markings are hardly accepted. However, if the technical marking is hidden, for example in a textile pattern, in a plastic or ceramic relief, in a hologram, or on a printed surface, as is possible according to an embodiment, it can be attractive and useful at the same time, for example providing one or more reference points for a camera, from which the relative camera pose can be determined. Depending on the application, the hiding aspect may also be irrelevant or hardly relevant; the robustness of the technique still applies to such designed patterns. In particular, patterns with a random or quasi-random character offer many possibilities for finding well-localized pairs of symmetric points. Depending on the embodiment, this can be exploited to improve the signal-to-noise ratio of the response measured at the center of symmetry, and thus the robustness, in the sense of error-free detection and accurate localization of the center of symmetry.
The pattern may in particular comprise one or more point-symmetric regions with odd or even point symmetry. These regions may be designed, for example, as circles, hexagons, squares, ellipses, polygons or other shapes. The point-symmetric regions may be of the same type or differ in shape and size, and may adjoin one another without gaps or be spaced apart from one another.
The approach presented here also creates a device that is configured to perform, control or implement, in corresponding units, the steps of a variant of the method presented here. The object underlying the invention can also be achieved quickly and efficiently by this embodiment variant of the invention in the form of a device.
To this end, the device may have at least one computing unit for processing signals or data, at least one memory unit for storing signals or data, at least one interface to a sensor or an actuator, at least one communication interface for reading in sensor signals from the sensor or for reading in or outputting data or control signals to the actuator and/or for reading in or outputting data embedded in a communication protocol. The computing unit may be, for example, a signal processor, a microcontroller, etc., wherein the memory unit may be a flash memory, an EEPROM or a magnetic memory unit. The communication interface may be configured to read in or output data wirelessly and/or wiredly, wherein the communication interface, which may read in or output wired data, may read in the data electrically or optically, for example, from a corresponding data transmission line or may output the data electrically or optically into a corresponding data transmission line.
In the present case, a device is understood to mean an electrical device that processes a sensor signal and outputs a control signal and/or a data signal as a function of the sensor signal. The device may have an interface that may be constructed as hardware and/or software. In the case of a hardware configuration, the interface may be, for example, part of a so-called system ASIC, which contains the various functions of the device. However, the interface may also be a separate integrated circuit or at least partly consist of discrete components. In the case of a software design, the interface may be a software module which is present on the microcontroller together with other software modules, for example.
A system for detecting a movable object is also proposed, wherein the system has the following features:
an embodiment of the above apparatus;
at least one camera, wherein the camera and the device are connectable to each other or have been connected in a data-transmissible manner; and
at least one predefined even and/or odd point symmetric region manufactured by an embodiment of the method for manufacturing as described above, wherein said region can or has been arranged in the field of view of said camera.
The system may in this case provide the function of a light grating.
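A minimal sketch of such a grating function built on the per-region occlusion states (the state dictionary and trigger rule are assumptions for illustration):

```python
def grating_signal(occlusion_states):
    """Emulate a light grating: trigger as soon as any monitored
    point-symmetric region is (at least partially) occluded."""
    return any(occlusion_states.values())

# Object entering the field of view occludes region "B" -> trigger
assert grating_signal({"A": False, "B": True}) is True
# All regions visible -> no trigger
assert grating_signal({"A": False, "B": False}) is False
```

Unlike a classical light grating, no dedicated light source is needed: the camera and the passive point-symmetric regions take over the roles of sensor and beam.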
A computer program product or computer program with program code, which can be stored on a machine-readable carrier or storage medium such as a semiconductor memory, a hard disk memory or an optical memory, and which is used for performing, implementing and/or controlling the steps of the method according to one of the embodiments described above, in particular when the program product or program is executed on a computer or a device, is also advantageous. The method may also be implemented as a hardware accelerator on an SoC or ASIC.
Drawings
Embodiments of the solutions presented herein are illustrated in the accompanying drawings and explained in more detail in the following description.
FIG. 1 shows a schematic diagram of an embodiment of a device for providing, an embodiment of a device for detecting, and a camera;
FIG. 2 shows a schematic diagram of an embodiment of an apparatus for manufacturing;
FIG. 3 shows a flow chart of an embodiment of a method for providing;
FIG. 4 shows a flow chart of an embodiment of a method for detecting;
FIG. 5 shows a flow chart of an embodiment of a method for manufacturing;
FIG. 6 shows a schematic diagram of a display medium having a pattern of predefined point symmetric regions, according to an embodiment;
FIG. 7 shows a schematic diagram of a display medium having a pattern of predefined point symmetric regions, according to an embodiment;
FIG. 8 shows a schematic view of a display medium having a pattern from FIG. 7, with a graphic highlighting of the pattern or predefined point-symmetric region;
FIG. 9 shows a schematic diagram of a predefined point symmetric region according to an embodiment;
FIG. 10 shows a schematic diagram of a pattern of predefined point symmetric regions, according to an embodiment;
FIG. 11 illustrates a schematic diagram of the use of a lookup table according to an embodiment;
FIG. 12 shows a schematic diagram of a voting matrix according to an embodiment;
FIG. 13 shows a schematic diagram of an exemplary pattern arranged in a cube form in accordance with an embodiment with respect to the correct identification of a grid;
FIG. 14 shows a schematic view of the pattern shown in the first partial illustration of FIG. 6 in an oblique view;
FIG. 15 shows a pattern from the first partial illustration of FIG. 14, wherein predefined point symmetric regions are highlighted;
FIG. 16 shows a schematic diagram of the pattern of FIG. 15 after viewing angle correction, in accordance with an embodiment;
FIG. 17 shows a schematic diagram of an embodiment with a hierarchically symmetrical pattern;
FIG. 18 shows a schematic diagram of an embodiment with a hierarchically symmetrical pattern;
FIG. 19 shows a schematic diagram of an embodiment with a hierarchically symmetrical pattern;
FIG. 20 shows a schematic diagram of a pattern according to an embodiment;
FIG. 21 shows a schematic diagram of a detection situation using a predefined point symmetric region according to an embodiment;
FIG. 22 shows a schematic diagram of a detection situation using a pattern of predefined point symmetric regions, according to an embodiment;
FIG. 23 shows a schematic diagram of a detection situation using a pattern of predefined point symmetric regions, according to an embodiment;
FIG. 24 shows a schematic diagram of a detection situation using a pattern of predefined point symmetric regions, according to an embodiment; and
FIG. 25 shows a schematic diagram of a detection situation using a pattern composed of predefined point-symmetric regions according to an embodiment.
Detailed Description
In the following description of advantageous embodiments of the present invention, the same or similar reference numerals are used for elements shown in different drawings and having similar effects, wherein repeated descriptions of these elements are omitted.
Fig. 1 shows a schematic diagram of an embodiment of a device 120 for providing, an embodiment of a device 140 for detecting, and, merely by way of example, a camera 102. In the illustration of Fig. 1, the device for providing 120, or providing device 120, and the device for detecting 140, or detecting device 140, are shown separately from, or arranged externally to, the camera 102. The providing device 120 and the detecting device 140 are connected to the camera 102 in a manner enabling data transmission. According to another embodiment, the providing device 120 and/or the detecting device 140 may also be part of the camera 102 and/or may be combined with each other.
The camera 102 is configured to record camera images of the environment of the camera 102. In this environment, merely by way of example, one predefined even and/or odd point-symmetric region 110 having a center of symmetry 112 is arranged in the field of view of the camera 102. From the camera's perspective, the at least one predefined even and/or odd point-symmetric region 110 can be at least partially occluded by the movable object 100. The camera 102 is further configured to provide or generate image data 105 representing a camera image, wherein the camera image also shows the predefined even and/or odd point-symmetric region 110 and/or the movable object 100.
The providing device 120 is configured to provide monitoring data 135 for detecting the movable object. To this end, the providing device 120 comprises a reading-in means 124, a determining means 126 and, optionally, an executing means 130. The reading-in means 124 is configured to read in the image data 105 via an input interface 122 of the providing device 120 to the camera 102, and to forward the image data 105 representing the camera image to the determining means 126.
The determining means 126 of the providing device 120 is configured to determine the presence of the center of symmetry 112 of the at least one point-symmetric region 110 in the camera image using the image data 105 and the determination rule 128, in order to determine the occlusion state of the at least one predefined even and/or odd point-symmetric region 110 by the movable object 100. The determination rule 128 is discussed in more detail below; it is similar or corresponds to the procedure disclosed in the applicant's post-published DE 10 2020 202 160. The providing device 120 is configured to provide the monitoring data 135 as a function of the occlusion state, more precisely to provide the monitoring data 135 to the detecting device 140 via an output interface 138 of the providing device 120.
According to one embodiment, the determining means 126 is configured to forward the determined at least one center of symmetry 112 and/or the occlusion status to the executing means 130. The executing means 130 is configured to read in or receive reference data 115 from a storage device 150, which may be implemented as part of the providing device 120 or separately from it. The executing means 130 is further configured to compare the position of the at least one center of symmetry 112 in the camera image with at least one reference center of symmetry from the reference data 115 in terms of intensity, intensity course over time and/or local intensity course, in order to determine an intensity-dependent deviation 131 between the center of symmetry 112 and the reference center of symmetry. The providing device 120 is in this case configured to use the deviation 131 to provide the monitoring data 135.
The detection device 140 is configured to detect the movable object 100. To this end, the detection device 140 comprises an evaluation means 144 and an output means 146. The detection device 140 is configured to receive or read in the monitoring data 135 from the providing device 120 via an input interface 142 of the detection device 140. The evaluation means 144 is configured to evaluate the monitoring data 135 provided by the providing device 120, in order to generate a detection signal 145 dependent on the monitoring data 135, and to forward the detection signal 145 to the output means 146. The output means 146 is configured to output the detection signal 145 via an output interface 148 to a processing unit performing a raster function, in order to perform the detection of the movable object 100.
In particular, the determination rule 128 is structured such that a signature is generated for each of a plurality of pixels of at least one section of the camera image, in order to obtain a plurality of signatures. Each signature is generated using a descriptor with a plurality of different filters, each filter having at least one symmetry type, and each signature comprising a sign for each filter of the descriptor. The determination rule 128 may further be structured such that at least one reflector is applied to the signs of one of the signatures, in order to determine at least one mirror signature of that signature for at least one symmetry type of the filters; each reflector comprises a rule, specific to the symmetry type and dependent on the filters of the descriptor, for modifying the signs. The determination rule is further structured such that, for a pixel with a signature, it is checked whether at least one further pixel exists in a search area in the surroundings of that pixel, dependent on the at least one applied reflector, whose signature corresponds to the at least one mirror signature, in order to determine, when the at least one further pixel is present, the pixel coordinates of at least one symmetric signature pair from the pixel and the further pixel. In addition, the determination rule is structured such that the pixel coordinates of the at least one symmetric signature pair are evaluated, in order to identify the at least one center of symmetry.
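The signature-and-reflector scheme above can be sketched in Python. This is an illustrative toy version, not the patent's actual descriptor: the "filters" are simple sign-of-difference comparisons to four neighbors, a set closed under offset negation, so that the odd reflector reduces to swapping opposite-offset bits and inverting them.

```python
import numpy as np

# Four difference filters; the set is closed under offset negation, so a
# point mirror maps each filter onto another filter of the same set.
OFFSETS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def signatures(img):
    """One signature per pixel: one sign bit per filter of the descriptor
    (sign of the gray-value difference to the neighbor at each offset)."""
    sigs = np.zeros(img.shape, dtype=np.int64)
    for bit, (dy, dx) in enumerate(OFFSETS):
        neighbor = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
        sigs |= (neighbor > img).astype(np.int64) << bit
    return sigs

def odd_reflector(sig):
    """Mirror signature for odd point symmetry: swap each filter with its
    negated offset and invert the resulting sign bit."""
    b = [(sig >> k) & 1 for k in range(4)]
    m = [1 - b[1], 1 - b[0], 1 - b[3], 1 - b[2]]
    return sum(v << k for k, v in enumerate(m))

# Build an image that is odd point-symmetric about its central pixel:
# averaging a random field with its inverted point mirror enforces
# img[::-1, ::-1] == 1 - img.
rng = np.random.default_rng(0)
a = rng.random((33, 33))
img = 0.5 * (a + 1.0 - a[::-1, ::-1])

sigs = signatures(img)
# Every pixel's mirror signature matches the signature of its point-
# mirrored partner pixel, which is the property the search step exploits.
match = np.array_equal(odd_reflector(sigs), sigs[::-1, ::-1])
```

On an image constructed to be odd point-symmetric, every pixel thus finds a partner carrying the mirror signature; pairs of such pixels vote for the midpoint, and the accumulation of votes identifies the center of symmetry.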
According to one embodiment, the determining means 126 is configured to generate, for each determined center of symmetry 112, a transformation rule for transforming the pixel coordinates of the center of symmetry 112 and/or of the point-symmetric region 110, using the pixel coordinates of each symmetric signature pair that contributed to the correct identification of the center of symmetry 112. The transformation rule is applied to the pixel coordinates of the center of symmetry 112 and/or of the point-symmetric region 110, in order to correct a distorted viewing angle of the camera image. Furthermore, it is advantageous to determine the transformation rule based on a plurality of, in particular adjacent, point-symmetric regions 110, since this is more robust, more accurate and less affected by noise, in particular if these point-symmetric regions lie in a common plane. Applying the transformation is particularly advantageous when considering the arrangement of a plurality of centers of symmetry 112.
According to one embodiment, the determining means 126 is further configured to determine a symmetry type of the at least one center of symmetry 112, the symmetry type representing even point symmetry and additionally or alternatively odd point symmetry. Additionally or alternatively, the executing means 130 is in this case configured to compare the symmetry type of the at least one center of symmetry 112 in the camera image with a predefined symmetry type of the at least one reference center of symmetry from the reference data 115, in order to check for consistency between the at least one center of symmetry 112 and the at least one reference center of symmetry.
In particular, the image data 105 in this case represent a camera image of at least one pattern consisting of a plurality of predefined point-symmetric regions 110. Here, the determining means 126 is configured to determine a geometric arrangement of the centers of symmetry 112 of the at least one pattern, to determine a geometric sequence of the symmetry types of the centers of symmetry 112, and/or to use said sequence to determine, from a plurality of predefined patterns, the correct pattern represented by the image data 105. The arrangement and/or the sequence may represent an identification code of the pattern. According to one embodiment, the determining means 126 is in this case configured to use the arrangement of the centers of symmetry 112 of the at least one pattern and/or the sequence of the symmetry types of the centers of symmetry 112 to determine implicit additional information of the at least one pattern or a readout rule for reading out explicit additional information in the camera image. The arrangement and/or the sequence represent the additional information in encoded form; the additional information is related to the detection of the movable object.
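As a minimal illustration of the sequence of symmetry types acting as an identification code, the types can be packed into a bit code and looked up in a registry of known patterns. The grid order, the registry and the pattern names here are hypothetical:

```python
# Symmetry types of the centers, read in a fixed grid order (row-major),
# act as an identification code for the pattern.
def pattern_code(symmetry_types):
    """Encode a sequence of 'even'/'odd' symmetry types as an integer
    (even -> 1, odd -> 0, one bit per center of symmetry)."""
    code = 0
    for i, t in enumerate(symmetry_types):
        if t == "even":
            code |= 1 << i
    return code

# Hypothetical registry mapping codes to pattern identities.
KNOWN_PATTERNS = {
    pattern_code(["odd", "even", "even", "odd"]): "pattern 1",
    pattern_code(["even", "even", "odd", "odd"]): "pattern 2",
}

detected = ["odd", "even", "even", "odd"]  # types read from the camera image
identified = KNOWN_PATTERNS.get(pattern_code(detected))
```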
Fig. 2 shows a schematic diagram of an embodiment of an apparatus 200 for manufacturing. The apparatus 200 is configured to manufacture at least one predefined even and/or odd point-symmetric region 110 for use by the providing device of Fig. 1 or a similar device and/or the detection device of Fig. 1 or a similar device. To this end, the apparatus 200 comprises a generating means 202 and a producing means 206. The generating means 202 is configured to generate design data 204, which represent a graphical representation of the at least one predefined even and/or odd point-symmetric region 110. The producing means 206 is configured to produce the at least one predefined even and/or odd point-symmetric region 110 on, at or in a display medium using the design data 204.
According to one embodiment, the generating means 202 is configured to generate the design data 204 as a graphical representation of a circle, oval, square, rectangle, pentagon, hexagon, polygon or torus representing the at least one predefined even and/or odd point-symmetric region 110, wherein the at least one predefined even and/or odd point-symmetric region 110 has a regular or quasi-random content pattern, and/or wherein a first half of the at least one predefined even and/or odd point-symmetric region 110 is arbitrarily predefined and a second half is constructed by point mirroring and/or inversion of the gray values and/or color values. Additionally or alternatively, the producing means 206 is configured to produce the at least one predefined even and/or odd point-symmetric region 110 by an additive manufacturing process, separation, coating, shaping, primary shaping or optical display. Additionally or alternatively, the display medium in this case comprises glass, stone, ceramic, plastic, rubber, metal, concrete, gypsum, paper, cardboard, food or an optical display means.
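The half-and-mirror construction described above can be sketched as follows, assuming a square gray-value patch and a quasi-random first half; for odd symmetry the point-mirrored gray values are additionally inverted:

```python
import numpy as np

def make_point_symmetric_patch(n, kind="odd", seed=0):
    """n x n gray-value patch in [0, 1] (n odd): the first half is set
    quasi-randomly, the second half follows by point mirroring, with the
    gray values inverted for odd symmetry."""
    assert n % 2 == 1
    rng = np.random.default_rng(seed)
    img = rng.random((n, n))
    flat = img.ravel()            # raster order; point mirror: k <-> n*n-1-k
    m = n * n
    first_half = flat[: m // 2]   # arbitrarily predefined half
    mirrored = first_half[::-1]   # point mirroring about the central pixel
    flat[m // 2 + 1:] = 1.0 - mirrored if kind == "odd" else mirrored
    if kind == "odd":
        flat[m // 2] = 0.5        # the center must equal its own inversion
    return img

odd_patch = make_point_symmetric_patch(15, kind="odd")
even_patch = make_point_symmetric_patch(15, kind="even")
```

Reversing both axes of the result performs the point mirroring, so an odd patch satisfies `img[::-1, ::-1] == 1 - img` and an even patch satisfies `img[::-1, ::-1] == img`.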
According to one embodiment, the generating means 202 is configured to generate design data 204 representing a graphical representation of at least one pattern of a plurality of predefined even and/or odd point symmetric regions 110, wherein at least a subset of the point symmetric regions 110 are aligned on a regular or irregular grid, directly adjacent to each other and/or separated from at least one adjacent point symmetric region 110 by a gap portion, are identical to each other or different from each other in their size and/or their content pattern, and/or are arranged in a common plane or in different planes. Additionally or alternatively, the generating means 202 is configured to generate design data 204 representing a graphical representation of at least one pattern having hierarchical symmetry.
Fig. 3 shows a flow chart of an embodiment of a method 300 for providing monitoring data to detect a movable object. The method 300 for providing may in this case be performed using the providing device in Fig. 1 or a similar device. The method 300 for providing comprises a reading step 324, a determining step 326 and optionally also an executing step 330.
In a reading step 324, image data provided by means of a camera is read in from an interface to the camera. The image data represent a camera image of the environment of the camera, wherein at least one predefined even and/or odd point-symmetric region in the environment is arranged in the field of view of the camera. From the camera's perspective, the at least one predefined even and/or odd point-symmetric region may be at least partially occluded by the movable object. In a determination step 326, the presence of at least one center of symmetry of the at least one even and/or odd point-symmetric region in the camera image is then determined using the image data and a determination rule, in order to determine the occlusion status of the at least one predefined even and/or odd point-symmetric region by the movable object. The monitoring data is provided in accordance with the determined occlusion status.
Then in an optional execution step 330, at least one center of symmetry from the camera image is compared with at least one reference center of symmetry from the reference data in terms of intensity, intensity course over time and/or local intensity course to determine an intensity-dependent deviation between the center of symmetry and the reference center of symmetry. In this case, the monitoring data is provided according to the deviation.
According to one embodiment, the image data read in the reading step 324 represents a camera image of at least one pattern of a plurality of predefined point symmetric regions. Here, in a determining step 326, a geometric arrangement of symmetry centers of at least one pattern is determined, a geometric sequence of symmetry types of symmetry centers is determined, and/or the pattern is determined from a plurality of predefined patterns using the sequence. The arrangement and/or the sequence represents an identification code of the pattern. Optionally, the determining step 326 and/or the performing step 330 are performed together for all symmetry centers independently of the symmetry type of the symmetry center or individually for symmetry centers of the same symmetry type according to the symmetry type of the symmetry center.
Fig. 4 shows a flow chart of an embodiment of a method 400 for detecting a movable object. The method 400 for detecting may be performed using the detection device of fig. 1 or a similar device. Furthermore, the method 400 for detecting may be performed in conjunction with the method for providing of fig. 3 or similar methods. The method 400 for detection includes an evaluation step 444 and an output step 446.
In an evaluation step 444, the monitoring data provided according to the method for providing of Fig. 3 or a similar method is evaluated, in order to generate a detection signal dependent on the monitoring data. Subsequently, in an output step 446, the detection signal is output via an interface to a processing unit performing a raster function, in order to perform the detection of the movable object.
Fig. 5 shows a flow chart of an embodiment of a method 500 for manufacturing. The method 500 for manufacturing may be performed to manufacture at least one predefined point symmetric region for use with the method for providing of fig. 3 or the like and/or for use with the method for detecting of fig. 4 or the like. The method 500 for manufacturing may also be performed in conjunction with or using the apparatus for manufacturing of fig. 2 or similar apparatus. The method 500 for manufacturing includes a generating step 502 and a generating step 506.
In a generating step 502, design data representing a graphical representation of at least one predefined point symmetric region is generated. Subsequently, in a generating step 506, at least one predefined point-symmetric region is generated on, at or in the display medium using the design data to produce at least one predefined point-symmetric region.
Fig. 6 shows a schematic diagram of a display medium 600 with a pattern 610 of predefined point-symmetric regions 110A and 110B, according to an embodiment. Each of the predefined point-symmetric regions 110A and 110B corresponds or is similar to the predefined point-symmetric region in Fig. 1. The first partial illustration A shows a pattern 610 consisting of, purely by way of example, 49 predefined point-symmetric regions 110A and 110B, and the second partial illustration B shows a pattern 610 consisting of, purely by way of example, eight predefined point-symmetric regions 110A and 110B. The first predefined point-symmetric regions 110A have odd point symmetry as their symmetry type, and the second predefined point-symmetric regions 110B have even point symmetry as their symmetry type. A noise-like image pattern with the respective pattern 610 is printed on each display medium 600.
The use of symmetry in the machine vision field according to an embodiment can be illustrated based on Fig. 6: the symmetries can be designed to be imperceptible or hardly perceptible to humans, while at the same time being robust, locally accurate and detectable with minimal computational effort. The point symmetries are more or less hidden in the pattern 610, and an observer hardly recognizes them. Because the predefined point-symmetric regions 110A and 110B are graphically highlighted in Fig. 6, a human observer can identify them in the noise-like image pattern on the display medium 600. The first partial illustration A contains, purely by way of example, 49 circular point-symmetric regions 110A and 110B, of which 25 first regions 110A have odd point symmetry and 24 second regions 110B have even point symmetry. In the second partial illustration B, the symmetric regions 110A and 110B are chosen larger than in the first partial illustration A, with, purely by way of example, five having odd point symmetry and three having even point symmetry; they are therefore particularly suitable for larger camera distances or lower image resolutions. The circular symmetric regions 110A and 110B are located on a display medium 600 designed as a plate, where in the case of odd or negative point symmetry the point mirroring images light to dark and vice versa, whereas in the case of even or positive point symmetry no such inversion occurs. If multiple patterns 610 are required, these patterns may be designed to be distinguishable. This may be accomplished by the arrangement of the centers of symmetry of the regions 110A and 110B, as in Fig. 6, where the first partial illustration A and the second partial illustration B are easily distinguishable, or based on the sequence of negative or odd and positive or even point symmetries of the regions 110A and 110B within the respective pattern 610.
Fig. 7 shows a schematic diagram of a display medium 600 with a pattern 610 of predefined point-symmetric regions according to an embodiment. The pattern 610 corresponds or is similar to one of the patterns from Fig. 6, but is shown in Fig. 7 without graphical highlighting. Purely by way of example, ten display media 600 similar to those in Fig. 6 are shown in Fig. 7.
Fig. 8 shows a schematic diagram of display media 600 having the patterns 610 from Fig. 7, wherein the patterns or the predefined point-symmetric regions 110A and 110B are graphically highlighted. Purely by way of example, patterns 610 with predefined point-symmetric regions 110A and 110B are arranged and graphically highlighted on ten display media 600.
Accordingly, Figs. 7 and 8 show, purely by way of example, ten patterns 610 optimized for distinguishability. Each pattern 610 has an individual arrangement of odd and even point-symmetric regions 110A and 110B; the pattern 610 is thus encoded by this arrangement. The encodings are chosen, mutually coordinated and/or optimized by training such that the ten patterns remain clearly identifiable and distinguishable even if they are rotated, mirrored or only partially captured by the camera. In the patterns 610 of Figs. 7 and 8, the point-symmetric regions 110A and 110B in the four corners of each display medium 600 are intentionally designed to be slightly more pronounced. This is irrelevant to the function itself, but provides practical advantages when manually assembling the display media 600 with the patterns 610. Within the scope of the manufacturing method already described, the display media 600 with the patterns 610 may be arranged arbitrarily, for example three-dimensionally, or planarly in series or as a surface. The centers of point symmetry of the patterns 610 can be found correctly and precisely within the scope of the providing method already described and/or by means of the providing device already described. The patterns 610 may, for example, be printed on solid plates of any size, which may optionally be placed in a partially orthogonal arrangement relative to each other. Even if the imaging of the pattern 610 by the camera is blurred, the centers of symmetry can still be detected sufficiently well to achieve the described functionality; the detection of the centers of point symmetry is therefore robust to blurred imaging.
This expands the range of application to situations in which one must work with a shallow depth of field, for example in scenes with weak light, or in which the focus or autofocus setting of the camera is incorrect, or in which perfectly sharp imaging cannot be achieved at all, for example in liquid, turbid or moving media, in the edge area of the lens, or during relative movement between the pattern 610 and the camera (motion blur, orientation blur). Even though point symmetries also occur naturally, and especially in artificially designed environments, any false detections based on them differ in their spatial distribution from detections based on the correct pattern 610, so that the two groups can easily be separated or distinguished from each other.
To demonstrate that the providing method described above is also applicable to non-planar and even elastic surfaces in motion, the patterns 610 of Figs. 7 and 8 may, for example, be printed on paper and combined into a flexible box. The providing method described above works without any problem even on such non-flat or elastic surfaces (e.g., made of paper), which makes it possible to determine the movement of these surfaces. Unlike many materials, paper does not permit shearing; however, point symmetry is also invariant under shearing, so shearing would not cause problems either.
In particular, the position of a center of symmetry in the camera image can be determined precisely. However, extending such accurate measurement to the entire surface of the pattern 610 may also be of interest in various applications, i.e., determining for each point or pixel of the pattern 610 where it is located in the camera image. This then allows, for example, determining the minimal deviation between the actually observed pattern 610 and the ideal pattern from the ground truth (reference true value). For example, it is of interest to print the pattern 610 onto a non-smooth or non-rigid surface, thereby creating, for example, variable folds or indentations in the pattern 610, whose exact shape is to be determined. A pattern with random properties is particularly well suited for finding corresponding points from a first image to a second image, where the two images may be recorded at different times from different perspectives with the same camera, or with two cameras.
Of particular interest is the case in which the first image is a real image from a camera and the second image is an artificially generated (stored) image of a known pattern, also referred to as a reference image, which is placed (e.g., scaled, rotated, affinely or projectively mapped) on the basis of the found centers of symmetry so that it matches the real (first) image as closely as possible. For the reference image, processing steps required for the first image from the camera, such as image preprocessing steps, are skipped or omitted as appropriate. Known methods, such as optical flow or disparity estimation, can then be applied, for example, to find a correspondence in the reference image for each pixel in the camera image, or vice versa. A two-step process is thus obtained: in a first step, the found centers of symmetry and, if present, the contained code are used to register or coarsely align the real image with the known pattern. This serves as an initialization for the second step, in which the minimal deviation, in the sense of a local displacement between the registered real image and the pattern, is determined accurately, for example using an optical flow method, and if required for each point or pixel of the image or pattern 610. The smaller the search area, the less computational effort the second step requires; thanks to the good initialization from the first step, this effort is typically very small. Since both steps require little computational effort, a high pixel throughput, defined as the product of the frame rate [images/second] and the image size [pixels/image], is achieved on commonly used computer platforms. If no correspondence is found locally, this can be interpreted as an object blocking the line of sight to the pattern 610; from this, the shape or contour of the occluding object can be deduced.
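The fine-alignment second step can be illustrated with a brute-force block-matching search over a small search area; this stands in for the optical flow method mentioned in the text, and the patch size and search radius are arbitrary choices:

```python
import numpy as np

def local_displacement(ref, img, y, x, half_patch=2, radius=3):
    """Find the displacement (dy, dx) within +/-radius that best matches
    the patch of `ref` around (y, x) inside `img` (SSD criterion)."""
    p = half_patch
    tpl = ref[y - p:y + p + 1, x - p:x + p + 1]
    best_ssd, best = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            win = img[y + dy - p:y + dy + p + 1, x + dx - p:x + dx + p + 1]
            ssd = float(((win - tpl) ** 2).sum())
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best = ssd, (dy, dx)
    return best

# Synthetic check: a noise-like pattern shifted by a known displacement.
rng = np.random.default_rng(1)
ref = rng.random((32, 32))
shifted = np.roll(ref, (1, 2), axis=(0, 1))
d = local_displacement(ref, shifted, 16, 16)
```

The small search radius reflects the good initialization from the registration step; a production system would refine the result to sub-pixel accuracy, e.g. by interpolating the SSD surface.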
A reference image should be provided for the two-step process described above. This can be achieved by keeping the associated reference image in a memory for all patterns 610 in question. The memory effort involved can be reduced by storing only the parameters required for recalculating or generating a reference image when needed. For example, the pattern 610 may be generated according to simple rules by means of a quasi-random number generator. The term "quasi" here means that the random number generator actually works according to deterministic rules, so that its result is reproducible, which is advantageous here. A rule defines, for example, what diameter the symmetric regions 110A and 110B have, how the mirroring is to be performed, and how the pattern 610 is composed in a weighted manner from a plurality of patterns with different degrees of detail, for example such that the pattern is well detectable at short, medium and long distances. It is then sufficient to store only the initialization data (seed) of the quasi-random number generator and, if applicable, the selection of the rules for constructing the pattern 610. By means of this formation rule, the reference pattern can be regenerated identically whenever required (and then deleted again).
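A minimal sketch of this idea, assuming NumPy's seeded generator as the quasi-random number generator and an arbitrary two-level formation rule (a coarse and a fine detail level, mixed with equal weights):

```python
import numpy as np

def reference_pattern(seed, shape=(16, 16)):
    """Recreate a reference pattern from its stored seed alone. The
    weighted mix of a coarse and a fine detail level is an illustrative
    formation rule so the pattern stays detectable at several distances."""
    rng = np.random.default_rng(seed)
    coarse = np.kron(rng.random((shape[0] // 4, shape[1] // 4)),
                     np.ones((4, 4)))
    fine = rng.random(shape)
    return 0.5 * coarse + 0.5 * fine

# The generator is deterministic, so only the seed needs to be stored;
# the same seed reproduces the identical pattern on demand.
a = reference_pattern(seed=42)
b = reference_pattern(seed=42)
c = reference_pattern(seed=7)
```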
In summary, the two-step process can be represented, for example, as follows. In a first step, the centers of symmetry are found and their signs are determined, the sign distinguishing between odd and even symmetry. By comparing sign sequences, it can be determined which of several patterns is involved. The sign sequence of a pattern 610 may also be referred to as a code. The code can be described compactly and requires at most 64 bits for a pattern 610 with, for example, 8 x 8 centers of symmetry. For comparison purposes, all existing or considered codes should be stored. From this set, the code that contradicts the observation as little as possible is sought; the result is generally unambiguous. Such a search remains possible even if the camera can capture only part of the pattern 610, e.g. due to occlusion, because in this example with 8 x 8 centers of symmetry the code allows a very large number of up to 2^64 possibilities, while the number of patterns 610 actually realized will be much smaller, giving a high degree of redundancy. For each stored code, the information required for generating the reference image, such as parameters and rule selections, should also be stored. The reference image is then generated for the second step on demand, i.e. when needed, and if necessary only temporarily.
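The occlusion-tolerant code lookup can be sketched as a least-contradiction search; the toy codebook of four 8-center codes below is purely illustrative:

```python
import numpy as np

def identify_pattern(observed, visible, codebook):
    """Return the index of the stored code with the fewest contradictions
    to the observation; entries where `visible` is False (e.g. occluded
    centers of symmetry) are ignored."""
    contradictions = [int(np.sum((observed != code) & visible))
                      for code in codebook]
    return int(np.argmin(contradictions))

# Toy codebook: +1 = even, -1 = odd point symmetry per center.
codebook = [np.array(c) for c in (
    [+1, +1, +1, +1, +1, +1, +1, +1],
    [-1, -1, -1, -1, -1, -1, -1, -1],
    [+1, -1, +1, -1, +1, -1, +1, -1],
    [-1, +1, -1, +1, -1, +1, -1, +1],
)]
observed = np.array([+1, -1, +1, -1, 0, 0, 0, 0])  # second half occluded
visible = np.array([True] * 4 + [False] * 4)
idx = identify_pattern(observed, visible, codebook)
```

Because the code space (up to 2^64 for 8 x 8 centers) is vastly larger than the number of patterns in use, even a half-occluded observation usually contradicts exactly one stored code the least.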
Based on the positions of the centers of symmetry found in the first step, given in camera image coordinates, and the known corresponding positions in the reference image, a transformation rule can be calculated that maps these coordinates onto each other as well as possible, for example a projective or affine mapping optimized in the sense of a least-squares method. Through such a transformation and appropriate filtering of the image data, the two images may be transformed (warped) into a common coordinate system, for example into the coordinate system of the camera image, into that of the reference image, or into any third coordinate system. A more accurate comparison of the two images thus aligned to each other is then made, for example using an optical flow method: for each pixel of the first image (preferably taking its surroundings into account), the best corresponding pixel of the second image, with its surroundings, is searched. The relative displacement of the corresponding positions may be expressed as displacement information, in particular as absolute coordinates or displacement vectors. Such displacement vectors may be determined with sub-pixel accuracy, so that the correspondence typically lies not on but between the pixel grid positions. This information allows a highly accurate analysis over the entire surface of the pattern 610 captured in the camera image, for example to analyze, in the case of an elastic pattern, the deformation or distortion of the pattern 610 or of its carrier/display medium 600, or, in the case of a rigid pattern, imaging aberrations in the optical path.
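A least-squares affine fit between the found centers (camera coordinates) and their known reference positions can be sketched as follows; the projective variant would proceed analogously with a homography:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map with dst ~= src @ A.T + t, solved with
    np.linalg.lstsq on the homogeneous design matrix."""
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                        # (n, 3)
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2)
    A = params[:2].T
    t = params[2]
    return A, t

# Synthetic check: recover a known affine map from six center positions.
rng = np.random.default_rng(3)
src = rng.random((6, 2)) * 100.0                  # centers in the camera image
A_true = np.array([[1.2, 0.3], [-0.1, 0.9]])
t_true = np.array([5.0, -2.0])
dst = src @ A_true.T + t_true                     # reference positions
A_est, t_est = fit_affine(src, dst)
```

With more centers than the three needed for an affine map, the fit averages out localization noise, which is why using many, preferably coplanar, centers makes the transformation more robust.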
If the searched correspondence is not found in the expected area, a local occlusion of the pattern 610 may be inferred. The reason for the occlusion may be, for example, an object located on the pattern 610, or a second pattern that partially occludes the first pattern. Valuable information, such as a mask or outline of the object, can also be obtained from the occlusion analysis.
Fig. 9 shows a schematic diagram of predefined point-symmetric regions 110A and 110B according to an embodiment. Each of the predefined point-symmetric regions 110A and 110B corresponds or is similar to the predefined point-symmetric regions from one of the above-described figures. The first partial illustration A shows a second, even point-symmetric region 110B including its center of symmetry 112, and the second partial illustration B shows a first, odd point-symmetric region 110A including its center of symmetry 112. The predefined point-symmetric regions 110A and 110B here represent regions formed from gray values.
The use of point symmetry has the following advantages over other symmetry forms: point symmetry is preserved when the pattern and/or the at least one predefined point symmetry region is rotated about the viewing axis; when the pattern and/or the at least one predefined point-symmetrical area is tilted, i.e. at a tilted viewing angle, point symmetry is also preserved. Rotation and tilting of the pattern and/or the at least one predefined point symmetry region does not cause problems for the detection of odd and even point symmetries, as they are preserved in the process. Thus, the above-mentioned method for providing or the providing method is also applicable to oblique viewing angles for the pattern or the at least one predefined point-symmetric region. In the case of even point symmetry, for example, gray values or color values are preserved when the points are mirrored.
In partial illustration A of Fig. 9, for each gray value g the identical partner gray value g_PG = g is found point-symmetrically to the center of symmetry 112. Partial illustration B of Fig. 9 shows odd point symmetry, in which each gray value is inverted: for example, white becomes black and vice versa, and light gray becomes dark gray and vice versa. In the example, where the gray value g lies in the interval 0 <= g <= 1, the point-mirrored gray value g_PU is formed in the simplest possible way from the original gray value g of the half of region 110A shown at the top of Fig. 9 according to g_PU = 1 - g. Nonlinearities, such as a gamma correction, can also be integrated into the inversion, for example to compensate for other nonlinearities in image display and image recording. Forming a suitable odd or even point-symmetric pattern is correspondingly simple: for example, the half of the respective region 110A or 110B shown at the top of Fig. 9 is set arbitrarily or generated randomly; the half shown at the bottom then results from it by point mirroring, with the gray values inverted for odd point symmetry and not inverted for even point symmetry.
Such considerations and generation methods may also be extended to color patterns and/or predefined point-symmetric regions. In the case of odd point symmetry, the point-mirrored RGB values can be formed by inverting the respective original RGB values, which again is the simplest possibility: r_PU = 1 - r (red), g_PU = 1 - g (here g stands for green), b_PU = 1 - b (blue). Thus, for example, a dark purple is imaged as a light green and a blue as an orange. A color pattern can represent more information than a monochrome pattern, which may be advantageous. A prerequisite for exploiting this advantage is that the color information is also used when converting the original image, i.e., the color image of the camera or other imaging sensor, into descriptors.
The specific implementation of the pattern 610 and/or the at least one predefined point-symmetric region 110 or 110A and/or 110B shall also be discussed below with reference to the above-mentioned figures.
With respect to the arrangement of the pattern 610 and/or the at least one predefined point-symmetric region 110 or 110A and/or 110B, for example as shown in Fig. 6, the point-symmetric regions 110 or 110A and/or 110B may, for example, be circular and in turn be arranged mostly in a regular grid within the pattern 610. The faces between the circular regions 110 or 110A and/or 110B may, for example, remain unused. Alternatives exist: for example, the regions 110 or 110A and/or 110B may be square and joined to each other without gaps so that the entire surface is used, or the symmetric regions 110 or 110A and/or 110B may be regular hexagonal faces, likewise joined to each other without gaps so that the entire surface is used.
In this connection, fig. 10 shows a schematic illustration of a pattern 610 of predefined point-symmetric regions 110A and 110B according to an embodiment. The predefined point-symmetric regions 110A and 110B here correspond or are similar to the predefined point-symmetric regions of fig. 1, 6 and/or 8. The regions 110A and 110B in fig. 10 are all circular and arranged on a hexagonal grid. In this case, the distance between grid points or symmetry centers may correspond to the circle diameter. In this way, the unused area 1010 between the regions 110A and 110B in the pattern 610 is minimized.
Other arrangements and shapes, such as rectangles, polygons, etc., are also possible, and these can also be combined with one another in shape and/or size. For example, pentagons and hexagons can alternate as on a conventional football. The shapes may also be arranged differently, e.g. rotated, if necessary with asymmetric regions between them. The center of symmetry may also lie outside the point-symmetric region itself; this is the case, for example, when a ring shape is formed. Nor do all the point-symmetric regions have to lie in a common plane. Instead, they may lie on different surfaces arranged in space, and these surfaces are also permitted to be non-planar.
The pattern 610 and/or the at least one predefined point-symmetric region 110 or 110A and/or 110B may be formed in a variety of ways, of which only a few examples are described below. Possible are random or quasi-random patterns, such as noise patterns. By introducing low spatial-frequency components, these patterns are formed such that they are still perceived as noise patterns of sufficiently high contrast at medium and large distances from the camera. So-called white noise, i.e. uncorrelated gray values, is not suitable for this. Also possible are aesthetic, if desired regular, patterns, such as floral patterns, tendril patterns (leaves, branches, flowers), ornamental patterns, mosaics, mathematical patterns, traditional patterns, onion patterns, patterns composed of logo symbols (hearts, etc.), imitations of random patterns from nature (for example farmland, woodland, lawn, pebble beach, sand, bulk materials (gravel, salt, rice, seeds), marble, marbles, concrete, brick, slate, asphalt surfaces, starry sky, water surfaces, felt, hammer-finish paint, rusted iron sheet, sheepskin, scattered particles, etc.), or photographs of scenes with arbitrary content. In order to produce from such a pattern a point-symmetric region and/or pattern suitable for the purposes mentioned herein, one half of the respective surface is predefined arbitrarily, and the second half is constructed by point mirroring, the gray values or color values being inverted if necessary. See fig. 9 for a simple example of this.
There are countless possibilities with regard to the material, surface and manufacture of the pattern 610 and/or the at least one predefined point-symmetric region 110 or 110A and/or 110B. The following list is not exhaustive: black-and-white, grayscale or multicolor printing on a wide variety of materials; printing on or behind glass or transparent films; printing on or behind frosted glass or translucent films; embossing in stone, glass, plastic or rubber; embossing in fired materials such as crockery, terracotta or ceramics; embossing in metal, concrete or plaster; embossing on plastic or paper/cardboard; etching in glass, metal or ceramic surfaces; milling in wood, cardboard, metal, stone, etc.; burning into surfaces of wood or paper; photographic exposure of photographic paper or other materials; short-lived, perishable or water-soluble patterns for short-term applications in plant materials, ash, sand, wood, paper, on fruit, food skins, etc.; display as a hologram; display on monitors or displays (which may also change over time, if desired); display on LCD films or other display films (which may also change over time, if desired); and so on.
With regard to the embossing-based manufacturing possibilities, as in the case of milling, embossing, stamping, etc., it should be noted that the region should be perceived by the camera as having odd and/or even point symmetry. The design may already have to take into account, for example, the later illumination (e.g. oblique incidence of light onto the relief) as well as nonlinearities and other disturbances in the optical imaging. It is not important whether the 3D shape or the relief itself has an even and/or odd point symmetry type, but rather that the image recorded by the camera exhibits this symmetry. The direction of light incidence or illumination and the reflection of the light on the surface are also relevant here and should be taken into account in the design. With regard to image recording and illumination, it should be noted that the recording technique should be designed to be suitable for capturing the pattern 610 and/or the at least one predefined point-symmetric region 110 or 110A and/or 110B. In particular, in the case of a fast relative movement between the pattern 610 and/or the region(s) 110 or 110A and/or 110B and the camera, it is advisable to use suitable illumination (e.g. a flash or a stroboscope or a bright LED lamp) so that the exposure time and thus the motion blur in the image can be kept small. For various applications, it makes sense to apply the pattern 610 and/or the region(s) 110 or 110A and/or 110B to a transparent or translucent surface. This allows the pattern 610 and/or the region(s) 110 or 110A and/or 110B to be illuminated from one side and observed from the other side. With this solution, disturbing reflections of the light source on the display medium can be effectively avoided. For the arrangement of the pattern 610 and/or the region(s) 110 or 110A and/or 110B, the light source and the camera, there is in principle the freedom to choose the front or the back of the carrier or display medium in each case.
In this selection, the risk of contamination of the pattern 610 and/or the region(s) 110 or 110A and/or 110B or of the camera, or of wear of the pattern 610 and/or the region(s) 110 or 110A and/or 110B, may also play a role: applying the pattern 610 and/or the region(s) 110 or 110A and/or 110B and the camera on the back side, for example, is of interest because they are better protected there from dust or water, for example, or because the pattern 610 and/or the region(s) 110 or 110A and/or 110B are protected there from mechanical wear.
A method that is also used in the embodiments is disclosed in the post-published DE 10 2020 202 160 in order to find symmetric regions or patterns in an image reliably and with little computational effort. In this case, the original image, i.e. the color image or gray-value image of the camera or other imaging sensor, is converted into an image of descriptors, each of which is formed based on the local environment in the original image. Here, a descriptor is another representation of the local image content, prepared in a form that is simpler to process. Simpler to process means here in particular: it contains information about the environment of the point and not just about the point itself, it is largely invariant with respect to brightness or illumination and their changes, and it has a low sensitivity to sensor noise. The descriptor image may have the same resolution as the original image, so that there is approximately one descriptor for each pixel of the original image. Alternatively or additionally, other resolutions are also possible.
A signature is formed from the respective descriptor, which is represented in the computing unit as a binary word, or from a plurality of adjacent descriptors; the signature describes the local environment of a pixel of the original image as characteristically as possible. The signature may also be identical to the descriptor or to a part of it. The signature is used as an address for accessing a lookup table. Thus, if the signature consists of N bits, a table of size 2^N (i.e. 2 to the power of N) can be addressed. Advantageously, the word length N of the signature should not be chosen too large, since the memory requirement of the table grows exponentially with N: for example, 8 ≤ N ≤ 32. The signature or descriptor is constructed such that signature symmetries can be determined using simple operations, such as a bitwise XOR (exclusive or) of a portion of the bits. Example: S_P = s ^ R_P, where s is a signature of length N bits and R_P is the point-symmetric (P) reflector (R) matched to it. The symbol ^ represents the bitwise exclusive-or operation. The signature S_P thus represents the point-symmetric counterpart of the signature s. This relationship also applies in the opposite direction.
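The reflector relationship S_P = s ^ R_P can be illustrated with a small sketch. The 16-bit word length and the reflector constant below are arbitrary assumptions for illustration, not values from the disclosure:

```python
# Hypothetical example: N = 16-bit signatures with an arbitrarily chosen
# reflector constant R_P.  Applying the reflector by bitwise XOR maps a
# signature to its point-symmetric counterpart; applying it twice yields
# the original signature again, so the relation holds in both directions.
R_P = 0b1010010110100101

def mirror_signature(s):
    return s ^ R_P  # S_P = s ^ R_P

s = 2412
S_P = mirror_signature(s)
```

Because XOR with a constant is an involution, the mirrored signature of S_P is again s, which is exactly the "opposite direction" property stated above.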
If the construction of the descriptor or signature is fixed, the reflector is thus also automatically fixed (and constant). By applying it to any signature, that signature can be converted into its symmetric counterpart. There is an algorithm that can find, for a given signature at the current pixel, one or more symmetric signature pixels within an optionally limited search window. The center of symmetry then lies in the middle of the connecting line between the positions of the two pixels. There, or as close to there as possible, a voting weight is output and collected in a voting matrix (voting map). In the voting matrix, the output voting weights are accumulated at the positions of the sought symmetry centers. These symmetry centers can be found, for example, by traversing the voting matrix in search of accumulation points. This applies to point symmetry, horizontal axis symmetry, vertical axis symmetry and, if desired, further symmetries, such as mirror symmetry about further axes, and rotational symmetries. A more accurate localization with subpixel accuracy can be achieved if the local environment is also included in the consideration when evaluating the voting matrix in order to determine the accumulation points and to precisely localize the respective center of symmetry.
Fig. 15 of DE 10 2020 202 160 shows an algorithm that can find a point-symmetric correspondence to the currently considered signature. However, only even point symmetry is considered in that document.
According to an embodiment, the algorithm is extended to odd point symmetry. It is particularly advantageous here that odd and even symmetry can be determined simultaneously in only one common pass. This saves time, because the signature image only needs to be traversed once instead of twice, and it reduces latency. When only one traversal (rather than two) is required, processing in streaming mode can provide the results of the symmetry search with much lower latency. Here, processing starts as soon as the first pixel data arrive from the camera, and the processing steps are executed in a tightly interleaved sequence. This means that a signature is calculated as soon as the necessary image data from the local environment of the current pixel are present. For the signature just formed, the symmetry search is carried out immediately. As soon as portions of the voting matrix are complete (which is the case when they are not and will no longer be part of the search area), they can be evaluated immediately and found symmetries (strong symmetry centers) can be output immediately. This procedure results in a very low latency, which usually corresponds to only a few image lines, depending on the height of the search area. A low latency is very important if the reaction should be fast, e.g. in a control loop in which an actuator influences the relative pose between the symmetric object and the camera. Memory can also be saved: the voting matrix (voting map) can be used jointly for the two symmetry forms, even and odd point symmetry, wherein the two symmetry forms or symmetry types participate in the voting with different signs, e.g. the voting weight is subtracted in the case of odd point symmetry and added in the case of even point symmetry. This is explained in more detail below. In addition, saving memory also saves energy.
The low-latency implementation possibilities described above also mean that only a small amount of intermediate data needs to be stored compared to the whole image. This ability to get by with very little memory is particularly important for cost-critical embedded systems and also results in a reduced energy requirement.
Fig. 11 shows a schematic diagram of the use of a lookup table 1150 according to an embodiment. The lookup table 1150 may be used by the determination device of the apparatus for providing of fig. 1 or the like. In other words, fig. 11 shows a snapshot of the algorithmic process when searching for point-symmetric correspondences, in connection with the apparatus for providing of fig. 1 or the like and/or the method for providing of fig. 3 or the like. In particular, the illustration in fig. 11 is similar to fig. 15 of the post-published DE 10 2020 202 160, wherein fig. 11 here additionally contains the extension to include even and odd point symmetries.
The lookup table 1150 may also be referred to herein as an entry table. A pixel grid 1100 is shown, in which a signature s with the exemplary value 2412 is generated for the currently considered or processed pixel. In other words, fig. 11 shows a snapshot during the formation of links of pixels or pixel coordinates having the same signature. For the sake of clarity, only two of at most N possible chains are shown: the chain for the signature S_PG = 364 and the chain for the signature S_PU = 3731. In the pixel grid 1100, a reference to the position of the preceding pixel having the same signature value is stored for each pixel. In this way, links of positions having the same signature are produced in each case, so that the signature value itself does not need to be stored. For each signature value, the corresponding entry position into the pixel grid 1100 is stored in the lookup table 1150 or entry table having N table fields, where N here corresponds to the number of possible signature values. The stored value may also be "invalid". The contents of the lookup table 1150 or entry table and of the referenced image (link image) change dynamically.
The processing of the pixels in the pixel grid 1100 takes place row by row, for example, beginning for example at the top left in fig. 11, as indicated by the arrow, and has currently advanced to the pixel with the signature s = 2412. Links between pixel positions each having the same signature are stored only for the first image area 1101. For the second image area 1102 in the lower image portion, the links and signatures are not yet known at the point in time shown, and for the third image area 1103 in the upper image portion no links are needed any more, for example due to the limitation of the search area, so that the link memory for the pixels in the third image area 1103 can be released again.
For the just-formed signature s, an even point-mirrored signature S_PG = 364 is formed by applying the reflector R_PG. The index PG stands for point symmetry, even. The index PU, which stands for point symmetry, odd, is also used below. This value is used as an address into the lookup table 1150 in order to find there the entry into the link of pixel positions having the same signature value S_PG = 364. At the point in time shown, the lookup table 1150 contains two elements: the entry pixel position of the respective signature and a reference to this position, shown by the curved arrow. Other possible contents of the lookup table 1150 are not shown for the sake of clarity. The link for the signature value S_PG = 364 comprises, merely by way of example, the three pixel positions shown here. Two of them lie in the search area 1104, which may also have a different form than shown here, for example rectangular or circular. When traversing unidirectionally along the link, starting from below, two point-symmetric correspondence candidates lying within the search area 1104 are thus found here. The third correspondence, the first element of the link, which would be an even point-symmetric correspondence, is not of interest here because it lies outside the search area 1104 and is thus too far away from the current pixel position. If the number of symmetry center candidates 1112 is not too large, a voting weight can be output for each symmetry center candidate 1112 at the position of the respective symmetry center. The symmetry center candidates 1112 each lie in the middle of the connecting line between the position of the signature s and the position of the respective even point-mirrored signature S_PG. If there is more than one symmetry center candidate 1112, the voting weights can be correspondingly reduced; for example, the reciprocal of the number of symmetry center candidates can be used as the respective voting weight. Ambiguous symmetry center candidates are thus weighted less than unambiguous ones.
An odd point-mirrored signature will now also be considered and used. In the snapshot shown in fig. 11, an odd point-mirrored signature S_PU = 3731 is formed from the just-formed signature s by applying a further reflector R_PU. Analogously to the procedure described above for the even point-mirrored signature, the same steps are carried out for the odd point-mirrored signature. The entry into the corresponding link is found via the same lookup table 1150. Here, the lookup table 1150 points to the link shown for the odd point-mirrored signature 3731. The first two pixel positions along the link again lead to the formation of symmetry center candidates 1112, because they lie in the search area 1104 and because the number of symmetry center candidates 1112 is not too large. The last pixel position along the link lies in the third image area 1103. This area is no longer needed at all, because it can no longer enter the search area 1104, which slides downward row by row.
If the next reference within the link points into the third image area 1103, the traversal along the link can be terminated. Of course, the traversal is also terminated when the end of the link is reached. In both cases, it makes sense to limit the number of symmetry center candidates 1112, i.e. if there are too many competing symmetry center candidates 1112, all of them are discarded. Furthermore, it is expedient to terminate the traversal along the link early if, after a predefined maximum number of steps along the link, neither its end nor the third image area 1103 has been reached. In this case, too, all symmetry center candidates 1112 found up to that point should be discarded.
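The entry table, the link chains and the bounded traversal described above can be sketched as follows. This is a simplified Python model under assumed data structures (dictionaries instead of fixed-size tables and a callback for the search-area test), not the disclosed implementation:

```python
def insert_pixel(entry, link, sig, pos):
    """Chain the current pixel position into the link of equal signatures:
    each position references the preceding position with the same signature
    value, and the entry table always points to the newest chain element."""
    link[pos] = entry.get(sig)  # None marks the end of the chain ("invalid")
    entry[sig] = pos

def mirror_candidates(entry, link, mirrored_sig, cur, in_search_area, max_steps=8):
    """Walk the chain of the point-mirrored signature and collect symmetry
    center candidates, i.e. midpoints between the current position and each
    chain position inside the search area.  The walk terminates at the chain
    end or after a maximum number of steps."""
    candidates = []
    pos, steps = entry.get(mirrored_sig), 0
    while pos is not None and steps < max_steps:
        if in_search_area(pos):
            candidates.append(((cur[0] + pos[0]) / 2, (cur[1] + pos[1]) / 2))
        pos = link.get(pos)
        steps += 1
    return candidates
```

The same two structures serve both symmetry types: the current signature s is mirrored once with R_PG and once with R_PU, and `mirror_candidates` is called for each mirrored value.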
The memory for the links in the third image area 1103 can already have been released again, so that link memory only needs to be reserved for the size of the first image area 1101. The link memory requirement is therefore generally low and here depends essentially only on one dimension of the search area 1104 (here the search area height) and one dimension of the signature image (here the signature image width).
The symmetry center candidates 1112 do not always fall exactly on a pixel position; there are three further possibilities, i.e. four possibilities in total:
1. The symmetry center candidate 1112 falls on a pixel position.
2. The symmetry center candidate 1112 falls in the middle between two horizontally directly adjacent pixel positions.
3. The symmetry center candidate 1112 falls in the middle between two vertically directly adjacent pixel positions.
4. The symmetry center candidate 1112 falls in the middle between four directly adjacent pixel positions.
In the ambiguous cases 2 to 4, it is advantageous to distribute the voting weight to be output evenly over the participating pixel positions. The output voting weights are entered into a voting matrix and added up or accumulated there.
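The even distribution over the participating pixel positions in cases 2 to 4 can be sketched as follows; a minimal illustration, where the possibly half-integer center coordinates come from the midpoint construction between two pixel positions:

```python
import math

def distribute_vote(vote_map, cy, cx, weight):
    """Distribute a voting weight evenly over the one, two or four pixel
    positions surrounding a (possibly half-integer) symmetry center
    position (cy, cx)."""
    ys = sorted({math.floor(cy), math.ceil(cy)})
    xs = sorted({math.floor(cx), math.ceil(cx)})
    share = weight / (len(ys) * len(xs))  # 1, 1/2 or 1/4 of the weight
    for y in ys:
        for x in xs:
            vote_map[y][x] += share
```

For an integer coordinate, floor and ceil coincide, so case 1 (the full weight on one pixel) falls out of the same rule without a special branch.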
Here, not only positive but also negative voting weights are used at the same time. In particular, even symmetry is given a different sign (here positive) than odd symmetry (here negative). This leads to a clear result: in image areas without symmetry, which in practice usually make up the majority, the output positive and negative voting weights are approximately balanced and thus approximately cancel each other in the voting matrix. On average, therefore, values around zero are found in the voting matrix. In contrast, strong extrema are found in the voting matrix in odd or even symmetric regions: in this embodiment, negative minima in the case of odd point symmetry and positive maxima in the case of even point symmetry.
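The signed accumulation in a single shared voting matrix can be sketched as follows, with illustrative helper names: even votes are added, odd votes are subtracted, and strong positive or negative extrema are then classified by their sign:

```python
def add_vote(vote_map, center, weight, even):
    """Accumulate a signed voting weight in the shared voting matrix:
    positive for even point symmetry, negative for odd point symmetry."""
    y, x = center
    vote_map[y][x] += weight if even else -weight

def classify_extrema(vote_map, threshold):
    """Return (position, type) for entries whose magnitude exceeds the
    threshold: positive maxima indicate even, negative minima odd
    point symmetry."""
    found = []
    for y, row in enumerate(vote_map):
        for x, v in enumerate(row):
            if v >= threshold:
                found.append(((y, x), "even"))
            elif v <= -threshold:
                found.append(((y, x), "odd"))
    return found
```

In an unsymmetric region, the mixed even and odd contributions cancel toward zero and stay below the threshold, which is the effect described above.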
According to the embodiment shown here, the same resources are used for odd and even point symmetry, i.e. the lookup table 1150 or entry table, the link image and the voting matrix, which saves in particular memory, and both symmetry forms or symmetry types are handled in one common traversal, which saves time and intermediate memory.
Fig. 12 shows a schematic chart 1200 of a voting matrix according to an embodiment. The chart 1200 is a 3D plot of the voting matrix of a camera image processed by means of the apparatus for providing of fig. 1 or the like, in which camera image the pattern from the second partial illustration of fig. 6 was recorded by the camera. In the voting matrix or chart 1200, three maxima 1210B and five minima 1210A are clearly identifiable by way of example; they represent the three even and the five odd point-symmetric regions of the pattern from the second partial illustration of fig. 6. Outside these extrema, the values in the voting matrix are close to zero. The extrema can therefore be determined very simply, and the positions of the symmetry centers in the camera image can be determined unambiguously and precisely.
Fig. 12 shows that these extrema are very pronounced and can therefore be detected simply and unambiguously by the apparatus for providing of fig. 1 or the like and/or the method for providing of fig. 3 or the like. The information about the symmetry type (i.e. odd or even) is contained in the sign. If the local environment of the respective extremum is also taken into account when evaluating the voting matrix, the position of the center of symmetry can be determined with subpixel accuracy. Corresponding methods for this purpose are known to the person skilled in the art. If the pattern is suitably constructed, the odd and even point symmetries do not compete with each other: an image area, if it is symmetric at all, then has either an odd or an even point-symmetric form. Even if odd and even point-symmetric regions lie close to each other in the camera image, it is thus ensured that their symmetry centers remain spatially separated or distinguishable from each other. The joint processing of the negative and positive symmetries then yields advantages in terms of resources and speed.
According to an embodiment, separate processing of odd and even point symmetry may also be provided. In that case it makes sense to split the entries before they are entered into the voting matrix: two unsigned voting matrices are then provided instead of one common signed voting matrix, wherein the voting weights of the negative symmetry are entered into the first voting matrix and those of the positive symmetry into the second voting matrix. A potentially interesting advantage arises in this case: it is also possible to construct patterns that have both odd and even point symmetry, whose symmetry centers partially coincide, and to take them into account in the detection algorithm. While this mixed symmetry form is very unusual, precisely this unusualness guarantees that it is highly unlikely to be confused with randomly occurring patterns in the image. The two voting matrices are then searched for maxima that are present at the same position in both matrices. Another possible advantage of processing odd and even point symmetry separately is that it can be parallelized more easily and thus, if necessary, executed faster. This saves latency, because using two voting matrices avoids access conflicts when entering the voting weights.
Fig. 13 shows a schematic illustration of patterns 610, arranged here by way of example in the form of a cube, in connection with the identification of the correct grid 1311 according to an embodiment. The patterns 610 shown in fig. 13 are, for example, patterns from fig. 7 or fig. 8, three of which are arranged here in the form of a cube. In the patterns 610, the detected or identified symmetry centers 112A and 112B of the respective predefined point-symmetric regions of the patterns 610 are shown, wherein the sign and value of the associated extremum in the voting matrix can optionally also be known. In this case, the first symmetry centers 112A are assigned to predefined point-symmetric regions with odd point symmetry, and the second symmetry centers 112B are assigned to predefined point-symmetric regions with even point symmetry. For one of the patterns 610, the correct grid 1311 is drawn in, on which the predefined point-symmetric regions and thus the symmetry centers 112A and 112B are aligned. The correct grid is still to be sought for the other two patterns 610, wherein in fig. 13 an incorrect solution of the grid search is indicated by the first marker 1313 and a correct solution of the grid search is indicated by the second marker 1314.
Finding the correct associated grid is a task involving ambiguity. After the detection of the odd/even-coded symmetry centers 112A and 112B, the next step is typically to group them and to determine which pattern 610 a group is assigned to, since it is not always known in advance which patterns 610, and how many of them, are contained in the image. Part of this task may be to find a grid 1311 on which the symmetry centers 112A and 112B lie. Instead of a square grid 1311, other topologies are also conceivable for the arrangement of the symmetry centers 112A and 112B, such as a circular concentric arrangement; see, for example, the second partial illustration in fig. 6. The square grid 1311 is considered below as representative.
The task of determining the position of the correct grid for all patterns 610 based solely on the positions of the symmetry centers 112A and 112B in fig. 13 is in some cases an ambiguous problem. If one considers the pattern 610 in fig. 13 for which the correct grid 1311 has already been drawn in, it is not difficult (for the observer) to indicate the correct grid 1311. However, it is apparent that the result can be ambiguous for the other two patterns 610, which are captured by the camera from a considerably more oblique viewing angle. There are several possible solutions for how a grid can be placed through the symmetry centers 112A and 112B. Here, the solution that initially appears most obvious on local inspection, i.e. the one with an approximately vertical axis, is not the correct one, as can be seen from the first marker 1313. Instead, the second marker 1314 lies correctly on the grid. This shows that a naive procedure, e.g. searching for the nearest neighbors of the respective symmetry centers, can lead to erroneous solutions in the case of oblique viewing angles. Solutions with very oblique viewing angles are excluded in practice, because the symmetry centers 112A and 112B can then no longer be found.
Fig. 14 shows a schematic view of the pattern 610 shown in the first partial illustration of fig. 6 under an oblique viewing angle. In the first partial illustration A of fig. 14, a display medium 600 with a pattern 610 of predefined point-symmetric regions 110A and 110B is shown. The second partial illustration B of fig. 14 shows the symmetry centers 112A and 112B of the pattern 610 identified or detected by means of the apparatus for providing of fig. 1 or the like and/or the method for providing of fig. 3 or the like. The symmetry centers 112A and 112B have been detected and at least their positions are available.
Fig. 15 shows the pattern 610 from the first partial illustration of fig. 14, in which a predefined point-symmetric region 110B is highlighted. The predefined even point-symmetric region 110B is highlighted here merely by way of example in order to illustrate the distortion of the pattern 610 or of the regions 110A and 110B due to the oblique viewing angle. The circular predefined point-symmetric regions 110A and 110B, shown here by way of example, are distorted into ellipses by the oblique perspective.
The reconstruction of the correct grid or topology of the pattern 610 is discussed below with particular reference to figs. 14 and 15 and with general reference to the figures described above.
Under an oblique viewing angle, each circular region 110A and 110B, from which the votes for the respective symmetry center 112A and 112B originate, becomes an ellipse. By tracing back the votes that contributed to the respective symmetry center 112A, 112B (e.g. the even point-symmetric center 112B highlighted in fig. 15), the shape and orientation of the respective ellipse can be inferred. The directions and the ratio of the principal axes of the ellipse reveal how the ellipse can be stretched or rectified in order to convert it back into a circle. Consider the exemplarily highlighted predefined even point-symmetric region 110B of the pattern 610, which contributes to the highlighted symmetry center 112B. Depending on the design or construction, this region 110B is circular or nearly circular, e.g. hexagonal. Under an oblique viewing angle, this circle becomes an ellipse. In the voting to identify the symmetry center 112B, the pairs of mutually symmetric points lying within the ellipse contribute to forming the extremum in the voting matrix.
According to one embodiment, it is traced back from where in the camera image the point pairs originate that lead to the formation of sufficiently strong extrema. Further processing steps are carried out for this purpose. It is first assumed that the voting has already taken place and that sufficiently strong symmetry centers have been found. The starting point is thus the situation shown in the second partial illustration B of fig. 14. The voting process is then run through again in modified form. However, the already existing voting matrix is not formed anew. Instead, for each pair of mutually symmetric points that contributes to the voting matrix, it is checked whether this contribution goes to one of the found symmetry centers 112A, 112B and thus already contributed in the first pass. If this is the case, the two positions of the point pair are stored or immediately processed further. Advantageously, an index of the symmetry center 112A, 112B to which the point pair contributed is also stored or used. In this way, all contributions to the successful symmetry centers can be determined afterwards and stored (intermediately) or used further.
The start of the further processing steps need not wait until the end of the first processing step, i.e. the formation of the voting matrix and the determination of the symmetry centers; it can instead begin earlier and use already completed intermediate results of the first processing step, i.e. the found symmetry centers 112A, 112B. In the information formed in this way, all image positions that contributed to each found symmetry center 112A, 112B can then be read out. These positions lie essentially within an ellipse, apart from a few outliers, as illustrated for the symmetry center 112B by way of example in fig. 15.
Methods for determining the parameters of the ellipse are known to the person skilled in the art. For example, a principal-axis transformation can be formed over the set of all points contributing to a symmetry center 112A, 112B in order to determine the orientation of the principal axes and the two diameters of the ellipse. This can even be achieved without intermediate storage of the contributing image positions: instead, these image positions can be processed further immediately once they are known. Alternatively, an elliptical envelope around the point set can be determined, which encloses as large a portion of the point set as possible as tightly as possible (possible outliers being excluded).
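A principal-axis transformation over the contributing point set can be sketched as follows. The closed-form eigendecomposition of the 2×2 covariance matrix is standard; the function name and coordinate convention (y, x) are illustrative assumptions:

```python
import math

def ellipse_parameters(points):
    """Principal-axis transform of a 2D point set (y, x): returns the
    orientation of the major axis and the standard deviations along the
    two principal axes, whose ratio gives the ellipse axis ratio."""
    n = len(points)
    my = sum(p[0] for p in points) / n
    mx = sum(p[1] for p in points) / n
    syy = sum((p[0] - my) ** 2 for p in points) / n
    sxx = sum((p[1] - mx) ** 2 for p in points) / n
    sxy = sum((p[0] - my) * (p[1] - mx) for p in points) / n
    # closed-form eigenvalues of the covariance matrix [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    d = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    major = math.sqrt(tr / 2.0 + d)
    minor = math.sqrt(max(tr / 2.0 - d, 0.0))
    angle = 0.5 * math.atan2(2.0 * sxy, sxx - syy)  # major-axis direction
    return angle, major, minor
```

Because only sums and sums of products enter the covariance, the contributing image positions can indeed be accumulated on the fly without intermediate storage, as noted above.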
Alternatively, instead of storing a set of points in the sense of a list, an index image, equivalent to an index matrix, may be created. The index image serves the same purpose, i.e. forming the parameters of all ellipses, but stores the information in a different form. Ideally, the index image has the same size as the signature image and stores, at each position, the index assigned to a found center of symmetry 112A, 112B. A special index value, e.g. 0, is reserved to indicate that no entry exists yet. If a symmetric point pair or signature pair contributing to the i-th index is found while traversing the further processing steps, the index i is entered at the two locations associated with the respective signatures. At the end of the traversal, an index image is thus obtained in which each index assigned to a center of symmetry 112A, 112B appears multiple times, and these indices form elliptical regions: apart from a few outliers, each elliptical region contains only entries with a uniform index, while unused positions contain the index 0. The index image can then be evaluated easily to determine the parameters of the individual ellipses. Furthermore, it is not necessary to store the index image in its entirety. Once the data in a section of the index image no longer changes, that section can already be evaluated and its memory released again. This also results in lower latency, so that intermediate results can be provided earlier.
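A minimal sketch of the index-image idea, assuming 1-based centre indices and 0 as the "no entry" value as in the text; the function names are illustrative:

```python
SENTINEL = 0  # special index value: no entry yet

def enter_pair(index_image, centre_index, pos_a, pos_b):
    """Enter one contributing signature pair into the index image: both
    positions of the pair receive the (1-based) index of the symmetry
    centre they contributed to."""
    for r, c in (pos_a, pos_b):
        index_image[r][c] = centre_index

def points_per_centre(index_image):
    """Evaluate the index image afterwards: collect, per centre index,
    all positions carrying that index (these form the elliptical regions
    described in the text)."""
    collected = {}
    for r, row in enumerate(index_image):
        for c, idx in enumerate(row):
            if idx != SENTINEL:
                collected.setdefault(idx, []).append((r, c))
    return collected
```

In a streaming variant, `points_per_centre` would be run section by section as soon as a section stops changing, so its memory can be released early, as the text notes.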
The two-dimensional arrangement of detected symmetry centers (see fig. 14) can then be rectified with the known ellipse parameters such that the symmetry centers subsequently lie on the grid of the pattern 610, which here, merely by way of example, is at least approximately square.
Fig. 16 shows a schematic diagram of the pattern 610 of fig. 15 after perspective correction, according to an embodiment. In other words, for illustration purposes, fig. 16 shows the pattern 610 after stretching the pattern 610 of fig. 15 by the ratio of the two principal-axis lengths in the direction perpendicular to the major axis of the found ellipse, highlighted as the elliptically distorted region 110B. The correct grid 1311 can thus be found in a simple manner. Compared with fig. 15, the ellipse is rectified in such a way that the original circular shape of the region 110B is restored. It is then a simple matter to determine the grid 1311 on which the symmetry centers 112A and 112B lie, and to determine the adjacencies between the symmetry centers 112A and 112B without error. Fig. 16 serves illustration purposes only. In practice, it is not necessary to warp the image. Since the information about the locations of the symmetry centers 112A and 112B already exists in compressed form, it makes sense to process only these data further and to transform their coordinates, wherein the transformation rule follows from the determined ellipse parameters and turns the ellipses into circles.
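The coordinate transformation just described, which turns the found ellipse back into a circle without warping the image, might look like this in a sketch; `angle` and `axis_ratio` are assumed to come from the ellipse fit:

```python
import math

def ellipse_to_circle(points, angle, axis_ratio):
    """Map centre coordinates so that an ellipse with major-axis direction
    `angle` and major/minor axis ratio `axis_ratio` becomes a circle.
    Only the detected symmetry-centre coordinates are transformed; the
    camera image itself is never warped (illustrative sketch)."""
    ca, sa = math.cos(angle), math.sin(angle)
    out = []
    for x, y in points:
        # rotate into the ellipse frame (u along the major axis)
        u = ca * x + sa * y
        v = -sa * x + ca * y
        # stretch the minor-axis direction to restore the circle
        v *= axis_ratio
        # rotate back into image coordinates
        out.append((ca * u - sa * v, sa * u + ca * v))
    return out
```

Applied to all detected centre positions, this yields coordinates in which the grid determination of fig. 16 becomes straightforward.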
In the case of a camera image recorded with a long focal length, a single global transformation per partial section is sufficient to determine grid 1311. In the case of a camera image recorded with a wide-angle lens (e.g. a fisheye lens), local transformations may be used at least in subregions. The transformation rule described above may thus be applied globally and/or locally. In the global variant, all symmetry centers are transformed using the same common transformation rule. This is sensible and sufficient in many cases. The common transformation rule may be formed from a joint consideration of all ellipses. If the symmetry centers 112A and 112B are spatially located on multiple surfaces, the ellipses may be divided into groups according to their parameters. In this case, ellipses belonging to one surface have very similar parameters, in particular if the surface is flat. A global transformation rule may then be determined and applied for each group. This procedure is suitable for long focal lengths. Local transformations are sensible when multiple circular regions are imaged by the camera as ellipses of different shapes or different orientations. This is especially the case for wide-angle cameras or lenses with high distortion.
After the transformation has been applied, the symmetry-center positions belonging to the same surface lie at least approximately on a common grid 1311. The next task is to assign the symmetry centers 112A and 112B to grid positions. This can be done iteratively, for example in small steps. For example, for a symmetry center 112A, 112B, up to four nearest neighbors at approximately the same distance are searched for; reference is also made to the markings in fig. 13. From these neighbors, the traversal continues to more distant neighbors until all captured symmetry centers 112A and 112B belonging to the pattern 610 are assigned to the common grid 1311 or can be excluded from it. If symmetry centers are encountered during this search whose distances do not match the grid 1311 just observed, they are not recorded, as they may be outliers or symmetry centers belonging to other surfaces. The iterative search may be repeated for further surfaces, so that eventually every center of symmetry 112A, 112B, apart from outliers, is assigned to a surface. For these surfaces, the pattern 610 may then be identified, preferably based on the binary code associated with the symmetry centers 112A and 112B, which is contained in the signs of the respective extrema.
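The assignment of corrected centre positions to grid nodes can be illustrated with a deliberately simplified sketch: a fixed known `spacing` and a tolerance threshold stand in for the iterative neighbour search of the text, and the names are assumptions:

```python
def assign_to_grid(centres, spacing, tol=0.25):
    """Assign corrected symmetry-centre positions to integer grid cells.
    Centres deviating from the nearest grid node by more than
    `tol * spacing` in either coordinate are treated as outliers or as
    belonging to another surface (simplified stand-in for the iterative
    neighbour search described in the text)."""
    grid, outliers = {}, []
    for x, y in centres:
        gx, gy = round(x / spacing), round(y / spacing)
        if (abs(x - gx * spacing) <= tol * spacing and
                abs(y - gy * spacing) <= tol * spacing):
            grid[(gx, gy)] = (x, y)       # grid node -> measured position
        else:
            outliers.append((x, y))
    return grid, outliers
```

Repeating such an assignment per surface group yields the per-surface grids from which the pattern can then be identified via the sign-based binary code.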
Fig. 17 shows a schematic diagram of an embodiment with a hierarchically symmetrical pattern 1710. The pattern 1710 corresponds or is similar to the pattern in the figures described above. More precisely, by way of example only, the pattern 1710 has a two-level hierarchical structure consisting of four predefined point-symmetric regions 110A and 110B. According to the embodiment shown here, the pattern 1710 has, merely by way of example, two predefined odd point-symmetric regions 110A and two predefined even point-symmetric regions 110B. As a whole, the pattern 1710 has an odd point-symmetric structure. The regions 110B, each even point-symmetric in themselves, and the regions 110A, each odd point-symmetric in themselves, form the first hierarchical level. The overall arrangement, which is odd point-symmetric, forms the second hierarchical level. The center of symmetry 112 of the second hierarchical level is represented by a quarter circle.
Fig. 18 shows a schematic diagram of an embodiment with a hierarchically symmetrical pattern 1810. The pattern 1810 in fig. 18 is similar to the pattern from fig. 17. More specifically, fig. 18 shows another example of a two-level hierarchical structure composed of predefined point-symmetric regions 110B. In the first hierarchical level, the predefined point-symmetric regions 110B are each point-symmetric in themselves. In the second hierarchical level, there is an odd point symmetry at the level of the pattern 1810 as a whole, with the center of symmetry 112 at the center of the hexagon, shown divided into six parts for illustration. The odd symmetry is realized here as an inversion of the predefined point-symmetric regions 110B, for example a dark symbol on a bright background is mirrored to a bright symbol on a dark background.
Fig. 19 shows a schematic diagram of an embodiment with a hierarchically symmetrical pattern 610. In this case, pattern 610 is constructed from the patterns 1710 and 1810 of figs. 17 and 18 or from inverted and/or point-mirrored versions of them. By way of example only, pattern 610 has a three-level layered structure composed of two patterns 1710 of fig. 17 and two patterns 1810 of fig. 18. Patterns 1710 and 1810 are odd and are therefore inverted when point-mirrored at the center of symmetry 112 of pattern 610, located at the center of the hexagon shown divided into six parts for illustration. For example, the pattern 1710 shown in the lower right corner of fig. 19 is an inverted version of the pattern 1710 in the upper left corner. The layering principle can be continued arbitrarily: a fourth level, a fifth level, and so on can be constructed.
Patterns with hierarchical symmetry are further discussed below with reference to figs. 17, 18, and 19. The symmetric patterns 610, 1710, 1810 may be constructed in multiple stages such that, for example, smaller regions that are symmetric in themselves exist in a first hierarchical level, and their joint observation results in a symmetry at the next higher hierarchical level. Figs. 17 and 18 each show by way of example how a two-level layered pattern 1710 or 1810 is constructed. Based on this, a three-level layered pattern 610 is constructed in fig. 19. The example of fig. 19 thus comprises three hierarchical levels. The third hierarchical level extends over the entire surface of the pattern 610 (the area outlined by the dashed line) and includes the center of symmetry 112. In the second hierarchical level, there are four patterns 1710 and 1810 (each framed by a solid line), each having a center of symmetry located in its middle (not explicitly shown here). According to the embodiment shown here, there are thus 16 predefined point-symmetric regions in the first hierarchical level, each with its own center of symmetry. The symmetry of the third hierarchical level can be recognized from a great distance. On approaching, the four symmetries of the second hierarchical level also become recognizable. At short distances, or if the capture resolution of the pattern 610 is sufficient, the symmetries of the first hierarchical level become visible as well. In this way, visual control (visual servoing) can be implemented over a large range of distances, for example visual control of a robot in the direction of the pattern 610 or in any other direction. If finer, lower hierarchical levels can already be captured, it is generally not necessary to capture the coarser, higher levels. Nor is it necessary to be able to capture all symmetries of the respective hierarchical levels simultaneously; for example, at very short distances it is no longer possible to capture the entire pattern 610 in the camera image at all. Obviously, even and odd symmetries can be selected and combined partly freely. Additional information can thereby be included, in particular one bit per choice between odd and even symmetry, and such additional information can be transmitted to the capturing system in this way. "Partly freely" means here that the remainder of the symmetry types at the respective hierarchical level follows inevitably from the next higher hierarchical level. In other words, in fig. 18 for example, the patterns "X" and "O" in the top row can be chosen freely. The second row then follows inevitably, here by inversion, since negative point symmetry was chosen at the next hierarchical level.
Fig. 20 shows a schematic diagram of a pattern 610 according to an embodiment. In the first part, diagram A, fig. 20 shows a pattern 610 which, by way of example, is one of the patterns from fig. 8. The first part, diagram A, of fig. 20 is an example of implicit additional information, here merely by way of example 8 · 8 = 64 bits, which is derived from the symmetry types of the predefined point-symmetric regions 110A and 110B of the pattern 610, or from the signs of the point symmetries associated with them. In the second part, diagram B, fig. 20 shows a pattern 610 which, merely by way of example, is built up of four predefined point-symmetric regions 110A and 110B, here for example one predefined odd point-symmetric region 110A and three predefined even point-symmetric regions 110B on a square grid. Furthermore, a code matrix 2010 for explicit additional information is arranged in the pattern 610 in this case. By way of example only, the implicit additional information from the first part, diagram A, is explicitly contained in the code matrix 2010. The predefined region 110A with odd point symmetry here marks the starting row of the 8 × 8 matrix, thereby unambiguously defining the readout order.
The transmission of implicit or explicit additional information is discussed in more detail below with reference to fig. 20.
It may be useful or necessary to transmit additional information to the recipient of the pattern 610, e.g. a computer, an autonomous robot, etc. The additional information may be more or less extensive. Some illustrative examples of additional information: a parking space, a charging station, a location at 52°07'01.9"N 9°53'57.4"E facing southwest, a left turn, a speed limit of 20 km/h, a charging station for a mowing robot, and the like. There are various options for the transfer by means of an imaging sensor or camera. In particular, a distinction can be made between implicitly and explicitly contained additional information; see the two examples in fig. 20, where 64 bits of additional information are provided once implicitly and once explicitly. Implicit additional information means that it is contained in some way in the inherently symmetric patterns 610 themselves, whereas explicit additional information is typically designed and captured separately from these patterns 610.
One possibility for transmitting implicit additional information is illustrated in the first part, diagram A, of fig. 20: implicit additional information as a binary code. Since a choice between odd and even point symmetry is made for each of the symmetric regions 110A and 110B when the pattern 610 is constructed, one piece of binary additional information (corresponding to 1 bit) can be conveyed per region. If patterns with both odd and even point symmetry are additionally allowed, the binary additional information becomes ternary, i.e. three cases instead of two.
Another possibility for transferring additional information arises from using non-uniform distances between the symmetry centers of the regions 110A and 110B, i.e. implicit additional information based on the arrangement. Unlike the arrangement shown in fig. 20, where the symmetry centers lie on a square grid, the symmetry centers would then be arranged irregularly, with the additional information or a part of it encoded in the arrangement. Example: if each symmetry center is allowed to shift left/right and up/down by a fixed distance, 9 possible positions result, so that each symmetry center can encode log2(9) ≈ 3.17 bits of additional information. An oblique viewing angle between the imaging sensor and the pattern 610 is not a problem in any of the possibilities mentioned. For example, a subset of the symmetry centers (e.g. the outermost four in the corners) may be used to define a coordinate system or regular base grid. The offsets or the binary/ternary code used for encoding are then expressed relative to this base grid.
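The log2(9) ≈ 3.17 bits per symmetry center can be illustrated with a small encoding/decoding sketch; the offset layout and all names are assumptions for illustration:

```python
import math

# the 9 possible displacements of a symmetry centre around its grid node
OFFSETS = [(dx, dy) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

BITS_PER_CENTRE = math.log2(9)   # ~3.17 bits per centre

def encode_digit(digit, step):
    """Map one base-9 digit (0..8) to a centre displacement of 0 or +/-step
    relative to the nominal grid node."""
    dx, dy = OFFSETS[digit]
    return (dx * step, dy * step)

def decode_displacement(dx, dy, step):
    """Recover the base-9 digit from a measured displacement, expressed
    relative to the base grid defined e.g. by the four corner centres."""
    return OFFSETS.index((round(dx / step), round(dy / step)))
```

Because the displacements are read relative to the base grid, an oblique viewing angle only changes the grid basis, not the decoded digits.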
The symmetric regions 110A and 110B used for implicit additional information must not be so small that sufficiently prominent extrema can no longer form in the voting matrix. If a larger amount of additional information, in particular static, location-based additional information, is to be transmitted to the receiver, e.g. a mobile robot, it is advantageous to encode the additional information explicitly.
In the second part, diagram B, of fig. 20 it is shown how, in particular, static, location-based additional information can be explicitly conveyed to the recipient (e.g. a mobile robot): it may, for example, be agreed that at certain coordinates in the coordinate system defined by the symmetry centers there is further information, for example in binary (black/white), gray-level or color coding. The process then consists of two steps: in a first step, a field in which the additional information is encoded, e.g. the code matrix 2010, is found based on the odd and even symmetries. In a second step, this field, and thus the information contained in it, is read out. An oblique viewing angle between the imaging sensor and the pattern 610 causes no problems here, since for reading out the explicit additional information it is neither necessary that the basis vectors of the found coordinate system are perpendicular to each other nor that they have the same length. Alternatively, the image may be rectified so that a Cartesian coordinate system results. Optionally, a display may also be installed in the field with the pattern 610, which can then convey time-varying information in addition to static information.
Additional information of higher resolution may also be contained in the pattern 610 itself, with implicit error detection. There is thus a further possibility of transferring (in particular static, location-based) additional information via the pattern 610 itself: the additional information is contained in the black-and-white, color or grayscale sequence of the pattern 610 itself. Under the above classification, this additional information would be both implicit and explicit. Since the pattern 610, or at least parts of it, has symmetry, the additional information is automatically contained redundantly, typically in duplicate. This applies to both odd and even point symmetry. This fact can be used for error correction or error detection. For example, if the pattern 610 is contaminated, e.g. by bird droppings, the resulting errors in the additional information can be detected with high reliability, since the same errors are unlikely to occur at the associated symmetric positions.
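The symmetry-based redundancy check can be sketched on a flat list of binary cells. The point-mirrored partner of cell i is assumed here to be cell n-1-i, which is an illustrative simplification of the 2-D point mirroring:

```python
def find_symmetry_errors(cells, odd=False):
    """Error detection exploiting point symmetry: in a point-symmetric
    pattern every cell must match its point-mirrored partner (with the
    value inverted for odd symmetry).  Returns the index pairs that
    disagree, e.g. positions contaminated after the pattern was applied."""
    n = len(cells)
    errors = []
    for i in range(n // 2):
        j = n - 1 - i                        # point-mirrored partner
        expected = 1 - cells[i] if odd else cells[i]
        if cells[j] != expected:
            errors.append((i, j))
    return errors
```

A single local contamination violates the pairing at its mirrored position and is therefore flagged, which is exactly the bird-droppings scenario described above.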
Fig. 21 shows a schematic diagram of a detection situation using a predefined point symmetric region 110 according to an embodiment. In other words, fig. 21 shows a system 2100 for detecting a movable object 100. The predefined point-symmetric region 110 corresponds to or is similar to one of the predefined point-symmetric regions from one of the above-described figures. Also shown is the center of symmetry 112 of the predefined point symmetric region 110. Furthermore, an object 100 in the form of a vehicle and a unit consisting of a camera 102 and devices 120, 140 are shown, which are only exemplary. The object 100 may move in a movement direction through a surveillance area between the camera 102 and the predefined point symmetric area 110.
The system 2100 includes the devices 120, 140 and the camera 102, and here illustratively only one predefined even and/or odd point symmetric region 110. The devices 120, 140 correspond or are similar to the devices 120, 140 from fig. 1.
In other words, fig. 21 shows a simple arrangement with a camera 102 and a point-symmetric region 110 located on opposite sides of a monitored region, which is conical for example. If the object 100 blocks a major part of the camera's line of sight to the point-symmetric region 110, this is detected when the method for providing of fig. 3 and the method for detecting of fig. 4 are performed and/or by means of the providing device and the detecting device of fig. 1. This arrangement allows, for example, counting objects 100 that pass between the camera 102 and the region 110.
Fig. 22 shows a schematic diagram of a detection situation using a pattern 610 constituted by predefined point-symmetric areas 110 according to an embodiment. The illustration in fig. 22 corresponds to or is similar to the illustration in fig. 21, except that a pattern 610 of a plurality of predefined point symmetric regions 110 is provided for the system 2100 instead of one predefined point symmetric region. Pattern 610 corresponds to or is similar to the pattern from one of the figures described above. In the pattern 610, the predefined point-symmetric regions 110 are illustratively arranged along a line.
In other words, in fig. 22 a plurality of point-symmetric regions 110 belong to a pattern 610 observed by a single camera 102. Here, the regions with their symmetry centers 112 are laid along a road section, substantially parallel to the direction in which objects 100 typically move. The object 100 shown has already occluded one point-symmetric region 110 and will successively occlude further regions 110 as it continues its journey. This arrangement of the regions 110 allows, for example, the speed and length of the object 100 to be determined.
Fig. 23 shows a schematic diagram of a detection situation using a pattern 610 constituted by predefined point-symmetrical areas according to an embodiment. Pattern 610 corresponds to or is similar to the pattern from one of the figures described above. In the first and second part of fig. 23, a pattern 610 is shown, respectively, consisting of a plurality of predefined odd point-symmetric regions with a first center of symmetry 112A highlighted for illustration and even point-symmetric regions with a second center of symmetry 112B highlighted for illustration, wherein from the camera's point of view the pattern 610 is partly occluded by the object 100, the object 100 being implemented as a vehicle only by way of example. In addition, mirror images of some of the centers of symmetry 112A, 112B of the pattern 610 may be identified on the lane on which the object 100 is moving. In other words, fig. 23 illustrates presence detection of an object 100, wherein the pattern 610 has a two-dimensional arrangement of point-symmetric regions.
Fig. 24 shows a schematic diagram of a detection situation using a pattern 610 constituted by predefined point-symmetric regions according to an embodiment. Pattern 610 corresponds to or is similar to the pattern from one of the figures described above. More precisely, fig. 24 shows a bird's-eye view of an exemplary arrangement of a system 2100 for a three-way intersection, with a camera 102, a pattern 610, roads 2400 for objects in the form of vehicles, pedestrians or other traffic participants inside or outside a building, and, merely by way of example, two mirrors 2403. This arrangement with suitably aligned mirrors allows all commonly used roads 2400 to be monitored simultaneously. On each road 2400, an object typically interrupts twice the sight beam that is deflected between the camera 102 and the pattern 610 via the two mirrors 2403. Since the sight beam narrows toward the camera 102, a mirror 2403 may be smaller the closer it is to the camera 102 along the line of sight. In particular, fig. 24 shows a T-intersection of corridors inside or outside a building, where in particular people can walk on the roads 2400. The boundary areas shown in dashed lines in this illustration represent walls, windows, doors, etc.
Fig. 25 shows a schematic diagram of a detection situation using a pattern 610 constituted by predefined point-symmetric regions according to an embodiment. Pattern 610 corresponds to or is similar to the pattern from one of the figures described above. More specifically, fig. 25 shows an exemplary arrangement of a system 2100 with a camera 102, a pattern 610, and a mirror 2403 in a bird's-eye view. Here, the camera 102 and the pattern 610 are located on the same side of the monitored area. The camera 102 is hidden behind the pattern 610 and looks through an opening in the pattern 610. On the opposite side is a mirror 2403, positioned so that the camera 102 can capture a mirror image of the entire pattern 610. For this purpose, it is sufficient for the mirror 2403 to be only half as large as the pattern 610 in each of the two dimensions.
With particular reference to figs. 21 to 25, embodiments in the context of an imperceptible, passive light-barrier replacement with hidden symmetries are summarized and briefly explained below in other words.
The method described with reference to fig. 3 and/or fig. 4 may be used in the sense of a light barrier. A light-barrier replacement implemented according to embodiments comprises, inter alia, at least one region 110 or pattern 610 containing one or more point symmetries, and an imaging sensor such as the camera 102. If no mirror 2403 is used, the pattern 610 or region 110 and the camera 102 are arranged on opposite sides. For an example of such an arrangement, see fig. 21. The camera 102 either has a clear line of sight to the pattern 610 or region 110, or the line of sight is blocked by an object 100 in between. According to an embodiment, at least these two cases can be distinguished. The same tasks as with a classical light barrier can thus be accomplished, i.e. determining whether an object 100 is present, or counting objects such as persons, vehicles, items on a conveyor belt, etc. Compared with conventional light barriers, embodiments provide some of the advantages mentioned below by way of example, which may be decisive in certain application scenarios.
The system is passive. It does not require a special light source. It can utilize existing ambient light such as daylight or indoor lighting. The method described with reference to fig. 3 and/or fig. 4 works more reliably than similar image-processing methods and can also be performed with little computational effort. Fluctuations in illumination intensity, illumination color, brightness distribution, color distribution, visibility conditions, shadows, image sharpness or arrangement do not interfere with the method, since the point-symmetry properties remain largely unaffected in combination with the descriptors used. The method can dispense with a reference image of the pattern 610 against which the camera image would be compared. Alternatively, however, a reference image may be used, which may, for example, be generated temporarily when needed.
Another advantage is the imperceptibility of the region 110 or the pattern 610. The pattern 610 or region may be designed such that one is virtually unaware of its use. In particular, point symmetry, especially odd point symmetry, is difficult for humans to spot, depending on the pattern, even when one is looking for it. In contrast, in conventional reflection light barriers, the retro-reflector is always visible. The advantage of imperceptibility may mean that embodiments are better protected against vandalism, are harder for criminals to circumvent, and do not compromise the design of an aesthetically demanding environment (e.g. hotels, museums, residential buildings, offices, waiting rooms, landscape architecture, etc.) with intrusive technology. However, the pattern 610 with point symmetry may also be deliberately designed to be aesthetically pleasing, since symmetry is generally perceived as aesthetic. The use as a light barrier then still remains unrecognized by most people.
Thanks to the areal camera sensor, a single such system allows an entire road section or an entire surface to be monitored. Instead of accommodating only a single point-symmetric region 110 in the pattern 610, a plurality of regions 110 are placed side by side for this purpose, in particular for monitoring road sections (see fig. 22), or side by side and one above the other, in particular for monitoring surfaces (see fig. 23). If a road section is monitored, the speed and length of the respective object 100 (e.g. a vehicle) can also be determined. The speed follows from the distance between adjacent symmetry centers 112 or 112A, 112B and the time interval between their occlusion or exposure. The length of the object 100 follows, for example, from the number of occluded symmetry centers 112 or 112A, 112B, or alternatively from the speed and the time between occlusion and exposure of a symmetry center 112 or 112A, 112B. The latter approach can even be used to determine the length of an object 100 that temporarily occludes the pattern 610 completely. If a surface is monitored, the contour of the object 100, or its silhouette, can also be determined at least roughly; see fig. 23.
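The speed and length estimates described above reduce to simple arithmetic; this sketch uses assumed variable names and SI units:

```python
def object_speed(centre_spacing, dt_occlusion):
    """Speed from the spacing of adjacent symmetry centres and the time
    between their successive occlusions (or exposures)."""
    return centre_spacing / dt_occlusion

def object_length_from_count(n_occluded, centre_spacing):
    """Length estimate from the number of simultaneously occluded centres."""
    return n_occluded * centre_spacing

def object_length_from_time(speed, t_occluded):
    """Length estimate from speed and the occlusion duration of one centre;
    this variant also works while the object fully occludes the pattern."""
    return speed * t_occluded
```

For example, centres 2 m apart occluded 0.1 s after one another give 20 m/s; three simultaneously occluded centres suggest roughly 6 m of length, and a 0.25 s occlusion of one centre at that speed suggests roughly 5 m.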
In addition, such a system provides all the other advantages known to result from the use of a camera 102 or imaging sensor. In particular, the object 100 may be analyzed in more depth, e.g. with respect to color, shape, object class, vehicle type, vehicle model, or biometric features of a person. No sensor other than the camera 102 is needed. A vehicle, as an example of the object 100, passes the pattern 610 in either driving direction and temporarily occludes part of the point-symmetric regions 110. In addition to the pattern 610 itself, its mirror image on a smooth reflective surface may optionally be used here, since the mirror image of a point-symmetric region 110 is itself point-symmetric; see also fig. 23. Capturing the symmetry centers 112 or 112A, 112B requires only few computational operations and therefore works very energy-efficiently, making it well suited for continuous operation. Once an object 100 has been detected, further algorithms may be activated as needed for a more precise analysis, which may then temporarily consume more energy, e.g. deep-learning-based approaches.
If the existing ambient light is weak, this can be compensated by suitably adapting the parameters of the camera 102, as described below. Opening the aperture, i.e. reducing the f-number, has few drawbacks, particularly when the camera optics are focused on the pattern 610; see also fig. 23, where the presence of an object 100 located outside the depth of field can still be detected. Even if the pattern 610 itself lies outside the depth of field, hardly any degradation occurs. Opening the aperture is therefore the first choice in weak ambient light. As long as the observed process is slow enough, extending the exposure time during image recording causes no drawbacks either. In particular, the exposure time should be shorter than the duration for which the object 100 occludes more than half of a point-symmetric region 110. Extending the exposure time is thus the second choice, provided the observed process does not run too fast. It should be noted, however, that increasing the gain of the camera 102 does not yield a significant improvement. Adapting the gain may be necessary to make sensible use of the camera's value range, but compared with the two previous options it does not noticeably improve the signal-to-noise ratio.
By using one or more mirrors 2403, the monitored road section can be extended and/or led around corners without requiring additional cameras; see fig. 24. When the pattern 610 is mirrored, its point-symmetry properties are preserved, as are the odd and even point-symmetry properties. Each mirror 2403 should be at least large enough to image the pattern 610 into the camera 102 as completely as possible. All mirrors 2403 may thus be smaller than the pattern 610, and the closer a mirror 2403 is to the camera 102 along the measurement path, the smaller it can be; this is illustrated in fig. 24. Mirrors that are present anyway, such as a mirror in a hallway, may also be used. If at least one mirror 2403 is used, the camera 102 and the pattern 610 may also be placed on the same side of the area to be monitored; for example, the camera 102 may be hidden behind the pattern 610 and view the mirror 2403 through an opening in the pattern, in which it again sees the entire pattern 610. Fig. 25 shows a corresponding embodiment. A direct line of sight from the camera 102 to the pattern 610 may also be used simultaneously with lines of sight via one or more mirrors 2403; fig. 23 shows an example in which a mirror image on the ground meaningfully complements the direct line of sight.
If an embodiment includes an "and/or" link between a first feature and a second feature, this should be understood as: this example has both the first and second features according to one embodiment, and either only the first feature or only the second feature according to another embodiment.

Claims (15)

1. A method (300) for providing monitoring data (135) for detecting a movable object (100), wherein the method (300) has the steps of:
reading in (324) image data (105) provided by means of a camera (102) from an interface (122) to the camera (102), wherein the image data (105) represents a camera image of an environment of the camera (102), wherein at least one predefined even and/or odd point-symmetrical region (110; 110A, 110B) in the environment is arranged in a field of view of the camera (102), wherein the at least one predefined even and/or odd point-symmetrical region (110; 110A, 110B) can be at least partially obscured from view of the camera (102) by the movable object (100); and
-determining (326) the presence of at least one center of symmetry (112; 112a,112 b) of the at least one even and/or odd point-symmetric region (110; 110a,110 b) in the camera image using the image data (105) and a determination rule (128) to determine an occlusion state of the movable object (100) for the at least one predefined even and/or odd point-symmetric region (110; 110a,110 b), wherein the monitoring data (135) is provided according to the occlusion state.
2. The method (300) of claim 1, wherein the determination rule (128) used in the determining step (326) is structured such that
a signature (s) is generated for a plurality of pixels of at least one section of the camera image in order to obtain a plurality of signatures (s), wherein each of the signatures (s) is generated using a descriptor having a plurality of different filters, wherein each filter has at least one symmetry type, wherein each of the signatures (s) has a sign for each filter of the descriptor,
at least one mirror signature (S_PG, S_PU) is determined for the signature (s) for at least one symmetry type of the filters,
it is checked whether a pixel with said signature (s) has, in a search area (1104) in the surroundings of the pixel, at least one further pixel whose signature corresponds to the at least one mirror signature (S_PG, S_PU), in order to determine, when the at least one further pixel is present, pixel coordinates of at least one symmetric signature pair from the pixel and the further pixel,
and the pixel coordinates of the at least one symmetric signature pair are evaluated to identify the at least one center of symmetry (112; 112A, 112B),
and/or wherein at least one reflector (R_PG, R_PU) is applied to the signs of one of said signatures (s) to determine said at least one mirror signature (S_PG, S_PU), wherein each reflector (R_PG, R_PU) has a rule, specific to the symmetry type and dependent on the filters of the descriptor, for modifying the signs, wherein the search area (1104) depends on the at least one applied reflector (R_PG, R_PU).
3. The method (300) according to claim 2, wherein in the determining step (326), for each determined symmetry center (112; 112a,112 b) pixel coordinates of each symmetry signature pair that has contributed to correctly identifying the symmetry center (112; 112a,112 b) are used to determine transformation rules for transforming pixel coordinates of the symmetry center (112; 112a,112 b) and/or the at least one even and/or odd point symmetry region (110; 110a,110 b), wherein the transformation rules are applied to pixel coordinates of the symmetry center (112; 112a,112 b) and/or the at least one even and/or odd point symmetry region (110; 110a,110 b) to correct a distorted view of the camera image.
4. The method (300) according to any of the preceding claims, having the following step (330): comparing at least one center of symmetry (112; 112a,112 b) from the camera image with at least one reference center of symmetry from reference data (115) in terms of intensity, intensity course over time and/or local intensity course to determine an intensity-dependent deviation (131) between the center of symmetry (112; 112a,112 b) and the reference center of symmetry, wherein the monitoring data (135) is provided in accordance with the deviation (131).
5. The method (300) according to any of the preceding claims, wherein a symmetry type of the at least one symmetry center (112; 112a,112 b) is determined in the determining step (326), wherein the symmetry type represents even point symmetry and/or odd point symmetry, and/or in the comparing step (330) the symmetry type of the at least one symmetry center (112; 112a,112 b) in the camera image is compared with a predefined symmetry type of at least one reference symmetry center from reference data (115) to check for consistency between the at least one symmetry center (112; 112a,112 b) and the at least one reference symmetry center.
6. The method (300) according to claim 5, wherein the image data (105) read in the reading step (324) represents a camera image of at least one pattern (610; 1710, 1810) constituted by a plurality of predefined even and/or odd point symmetry regions (110; 110a, 110B), wherein in the determining step (326) a geometrical arrangement of symmetry centers (112; 112a, 112B) of the at least one pattern (610; 1710, 1810) is determined, a geometrical sequence of symmetry types of the symmetry centers (112; 112a, 112B) is determined, and/or the pattern (610; 1710, 1810) is determined from a plurality of predefined patterns using the sequence, wherein the arrangement and/or the sequence represents an identification code of the pattern (610; 1710, 1810).
7. The method (300) according to claim 6, wherein in the determining step (326) an arrangement of symmetry centers (112; 112a, 112B) of the at least one pattern (610; 1710, 1810) and/or a sequence of symmetry types of the symmetry centers (112; 112a, 112B) is used to determine implicit additional information of the at least one pattern (610; 1710, 1810) or a readout rule for reading out explicit additional information in the camera image, wherein the arrangement and/or the sequence represent the additional information in encoded form, wherein the additional information is related to detecting the movable object (100).
8. The method (300) according to any one of claims 6 to 7, wherein the step (326) of determining and/or the step (330) of comparing is performed jointly for all symmetry centers (112; 112a,112 b) independently of the symmetry type of the symmetry centers (112; 112a,112 b), or the step (326) of determining and/or the step (330) of comparing is performed separately for symmetry centers (112; 112a,112 b) of the same symmetry type depending on the symmetry type of the symmetry centers (112; 112a,112 b).
9. A method (400) for detecting a movable object (100), wherein the method (400) has the steps of:
Evaluating (444) monitoring data (135) provided by the method (300) according to any of the preceding claims to generate a detection signal (145) dependent on the monitoring data (135); and
outputting the detection signal (145) to an interface (148) to a processing unit in order to perform a raster function for detecting the movable object (100).
10. Method (500) for manufacturing at least one predefined even and/or odd point symmetric region (110; 110a,110 b) for use in a method (300; 400) according to any of the preceding claims, wherein the method (500) has the steps of:
generating (502) design data (204) representing a graphical representation of the at least one predefined even and/or odd point symmetric region (110; 110A, 110B); and
generating (506) the at least one predefined even and/or odd point symmetric region (110; 110a,110 b) on, at or in a display medium (600) using the design data (204) to manufacture the at least one predefined even and/or odd point symmetric region (110; 110a,110 b).
11. The method (500) according to claim 10, wherein in the generating step (502) design data (204) are generated which represent the graphical representation of the at least one predefined even and/or odd point-symmetric region (110; 110a,110 b) as a circle, ellipse, square, rectangle, pentagon, hexagon, polygon or torus, wherein the at least one predefined even and/or odd point-symmetric region (110; 110a,110 b) has a regular or quasi-random content pattern, and/or wherein an arbitrary half of the at least one predefined even and/or odd point-symmetric region (110; 110a,110 b) is predefined and the second half is constructed by point mirroring and/or inversion of gray values and/or color values, and/or wherein in the generating step (506) the at least one predefined even and/or odd point-symmetric region (110; 110a,110 b) is generated by an additive manufacturing process, separation, coating, shaping, primary shaping or optical display, and/or wherein the display medium (600) has a regular or quasi-random content pattern, and/or wherein the display medium (600) has glass, stone, rubber, plastic, paper, metal, cardboard or concrete.
12. The method (500) according to any one of claims 10 to 11, wherein in the generating step (502) design data (204) are generated which represent a graphical representation of at least one pattern (610; 1710, 1810) constituted by a plurality of predefined even and/or odd point-symmetric regions (110; 110a,110 b), wherein at least a subset of the point-symmetric regions (110; 110a,110 b) are aligned on a regular or irregular grid (1311), directly adjoin one another and/or are separated from at least one adjacent even and/or odd point-symmetric region (110; 110a,110 b) by a gap portion, are identical to or different from one another in their size and/or their content pattern, and/or are arranged in a common plane or in different planes, and/or wherein in the generating step (502) design data (204) are generated which represent a graphical representation of at least one pattern (610; 1710, 1810) with hierarchical symmetry.
13. An apparatus (120; 140; 200) arranged to perform and/or manipulate the steps of the method (300; 400; 500) according to any of the preceding claims in a corresponding unit (124, 126, 130;144, 146;202, 206).
14. A system (2100) for detecting a movable object (100), wherein the system (2100) has the following features:
The apparatus (120; 140) according to claim 13;
at least one camera (102), wherein the camera (102) and the device (120; 140) can be or have been connected to each other in a data-transmissible manner; and
at least one predefined even and/or odd point symmetric region (110; 110A, 110B) manufactured by a method (500) according to any of claims 10 to 12, wherein the region (110; 110A, 110B) can or has been arranged in the field of view of the camera (102).
15. A computer program arranged to perform and/or manipulate the steps of a method (300; 400; 500) according to any of claims 1 to 12.
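The signature-and-reflector scheme of claim 2 matches each pixel's descriptor signature against mirrored signatures within a local search area. As a simplified illustrative stand-in (not the claimed method), even and odd point symmetry of a grayscale image patch can be scored by correlating the patch with its copy rotated by 180 degrees: values that repeat under the rotation indicate even point symmetry, values that invert indicate odd point symmetry. A minimal sketch in Python with hypothetical names:

```python
import numpy as np

def point_symmetry_scores(patch: np.ndarray) -> tuple[float, float]:
    """Correlate a grayscale patch with its 180-degree rotation.
    A strongly positive correlation indicates even point symmetry
    (values repeat across the center); a strongly negative one indicates
    odd point symmetry (values invert across the center).
    Simplified stand-in for the signature/reflector scheme of claim 2."""
    rotated = np.rot90(patch, 2)          # point reflection about the center
    a = patch.astype(float) - patch.mean()
    b = rotated.astype(float) - rotated.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:
        return 0.0, 0.0                    # flat patch: no symmetry evidence
    corr = float((a * b).sum() / denom)
    return max(corr, 0.0), max(-corr, 0.0)  # (even score, odd score)
```

Any patch of the form r + rot180(r) is exactly even point symmetric and scores (1, 0); any patch of the form r - rot180(r) inverts under rotation and scores (0, 1). The claimed method achieves a comparable discrimination per pixel via signed filter signatures rather than a global patch correlation.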
CN202180090216.XA 2020-11-12 2021-10-25 Method for providing monitoring data for detecting a movable object, method and device for manufacturing at least one predefined point-symmetrical area Pending CN116783628A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE102020214251.3 2020-11-12
DE102020214251.3A DE102020214251A1 (en) 2020-11-12 2020-11-12 Method for providing monitoring data for detecting a moving object, method for detecting a moving object, method for producing at least one predefined point-symmetrical area and device
PCT/EP2021/079475 WO2022100988A1 (en) 2020-11-12 2021-10-25 Method for providing monitoring data for detecting a movable object, method for detecting a movable object, method for producing at least one predefined point-symmetric region, and vehicle

Publications (1)

Publication Number Publication Date
CN116783628A true CN116783628A (en) 2023-09-19

Family

ID=78414026

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180090216.XA Pending CN116783628A (en) 2020-11-12 2021-10-25 Method for providing monitoring data for detecting a movable object, method and device for manufacturing at least one predefined point-symmetrical area

Country Status (3)

Country Link
CN (1) CN116783628A (en)
DE (1) DE102020214251A1 (en)
WO (1) WO2022100988A1 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6841780B2 (en) * 2001-01-19 2005-01-11 Honeywell International Inc. Method and apparatus for detecting objects
EP1929333A1 (en) * 2005-08-18 2008-06-11 Datasensor S.p.A. Vision sensor for security systems and its operating method
DE102020202160A1 (en) 2020-02-19 2021-08-19 Robert Bosch Gesellschaft mit beschränkter Haftung Method for determining a symmetry property in image data, method for controlling a function and device

Also Published As

Publication number Publication date
DE102020214251A1 (en) 2022-05-12
WO2022100988A1 (en) 2022-05-19

Similar Documents

Publication Publication Date Title
US11486698B2 (en) Systems and methods for estimating depth from projected texture using camera arrays
Won et al. Sweepnet: Wide-baseline omnidirectional depth estimation
US8172407B2 (en) Camera-projector duality: multi-projector 3D reconstruction
US7231063B2 (en) Fiducial detection system
US8366003B2 (en) Methods and apparatus for bokeh codes
CN102809354B (en) Three-dimensional dual-mode scanning device and three-dimensional dual-mode scanning system
CN106796661A (en) Project system, the method and computer program product of light pattern
CN105023010A (en) Face living body detection method and system
US11398085B2 (en) Systems, methods, and media for directly recovering planar surfaces in a scene using structured light
CN111981982B (en) Multi-directional cooperative target optical measurement method based on weighted SFM algorithm
CN102017601A (en) Image processing apparatus, image division program and image synthesising method
CN101482398B (en) Fast three-dimensional appearance measuring method and device
CN110033483A (en) Based on DCNN depth drawing generating method and system
Sadeghi et al. 2DTriPnP: A robust two-dimensional method for fine visual localization using Google streetview database
CN116745812A (en) Method for providing calibration data for calibrating a camera, method and device for manufacturing at least one predefined point-symmetrical area
CN117109561A (en) Remote two-dimensional code map creation and positioning method and system integrating laser positioning
CN116783628A (en) Method for providing monitoring data for detecting a movable object, method and device for manufacturing at least one predefined point-symmetrical area
CN116710975A (en) Method for providing navigation data for controlling a robot, method and device for manufacturing at least one predefined point-symmetrical area
CN114299172B (en) Planar coding target for visual system and real-time pose measurement method thereof
CN111179347B (en) Positioning method, positioning equipment and storage medium based on regional characteristics
CN110428458A (en) Depth information measurement method based on the intensive shape coding of single frames
Schillebeeckx et al. Pose hashing with microlens arrays
US20220254151A1 (en) Upscaling triangulation scanner images to reduce noise
ARAUJO et al. Localization and Navigation with Omnidirectional Images
Schillebeeckx Geometric inference with microlens arrays

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination