GB2573090A - Calibration of object position-measuring apparatus - Google Patents

Calibration of object position-measuring apparatus

Info

Publication number
GB2573090A
Authority
GB
United Kingdom
Prior art keywords
obtaining
measurements
coordinate system
coordinates
position measurement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1802445.5A
Other versions
GB201802445D0 (en)
Inventor
Christopher David Down
Andrew Teasdale
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Openworks Eng Ltd
Original Assignee
Openworks Eng Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Openworks Eng Ltd filed Critical Openworks Eng Ltd
Priority to GB1802445.5A
Publication of GB201802445D0
Publication of GB2573090A
Legal status: Withdrawn

Classifications

    • G01S 19/45: Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01C 15/00: Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00
    • G01C 21/20: Instruments for performing navigational calculations
    • G01C 21/32: Structuring or formatting of map data
    • G01C 25/00: Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01S 17/42: Simultaneous measurement of distance and other co-ordinates
    • G01S 7/4808: Evaluating distance, position or velocity data
    • G01S 7/4972: Alignment of sensor
    • G01S 7/2955: Means for determining the position of the radar coordinate system for evaluating the position data of the target in another coordinate system
    • G01S 7/4026: Antenna boresight

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Manufacturing & Machinery (AREA)
  • Position Fixing By Use Of Radio Waves (AREA)

Abstract

A method for converting coordinates of measurements obtained by an apparatus for determining the position of a remote object is disclosed. The method comprises obtaining a first position measurement of a first object located at a first position. The first position measurement is made by the apparatus and is expressed in a first coordinate system that is dependent on the position and/or orientation of the apparatus. The method further comprises obtaining a second position measurement of the first object at the first position. The second position measurement is made by a position-measuring device co-located with the first object and is expressed in a second coordinate system that is independent of the apparatus. The steps of obtaining first and second position measurements are repeated one or more times based on objects located at different positions. A determination is made of a transformation for transforming coordinates between the first and second coordinate systems based on the first and second position measurements.

Description

CALIBRATION OF OBJECT POSITION-MEASURING APPARATUS
FIELD
Certain examples of the present disclosure provide a technique for calibrating an apparatus or system for determining or measuring the position of a remote object. For example, certain examples of the present disclosure provide a technique for calibrating a local coordinate system used by the apparatus or system with respect to another coordinate system (e.g. a global coordinate system, for example earth-centred earth-fixed (ECEF) coordinates or Global Positioning System (GPS) coordinates). Certain examples of the present disclosure provide a technique for determining a transformation for transforming coordinates between the local coordinate system and the other coordinate system. Certain examples of the present disclosure provide a technique for correcting errors (e.g. manufacturing errors and/or calibration errors) in the alignment of axes used in the system or apparatus.
BACKGROUND
In many applications, it is necessary to determine the position of a remote object. For example, International Patent Application number PCT/GB2016/051139 (published as WO 2016/170367 A1) describes a system for deploying a first object (e.g. a net) for capturing, immobilising or disabling a second object (e.g. an unmanned aerial vehicle (UAV), for example a drone). UK Patent Application number GB 1606293.7 (published as GB 2548166 A) describes a system for determining whether an object (e.g. a UAV, for example a drone) has entered a certain space (e.g. a restricted airspace). In each of these systems, the position of the UAV needs to be accurately determined. Many other applications exist in which the position of a remote object needs to be determined.
The position of a remote object may be determined using a number of different techniques. For example, an apparatus for determining the position of a remote object may comprise a distance sensor (e.g. rangefinder, for example a laser rangefinder) for measuring the line-of-sight distance to the remote object, and one or more direction sensors (e.g. one or more accelerometers and/or gyroscopes) for measuring the direction (e.g. the elevation and/or azimuthal angle) of the remote object. The measured distance and direction values may be used to derive the coordinates of the remote object, for example expressed in terms of spherical coordinates or cartesian coordinates. Many other techniques exist for determining the position of a remote object, for example using radar, theodolites or cameras.
The coordinates obtained using these techniques are typically defined with respect to a local coordinate system (i.e. a coordinate system that is dependent on the position and/or orientation of the apparatus used to determine the position of the remote object). For example, in the above example, spherical coordinates are obtained with respect to a spherical coordinate system that is centred on, and defined with respect to, the apparatus. In particular, the radial component is dependent on the position of the object relative to the apparatus, and the polar and azimuthal components are dependent on the orientation of the apparatus, as well as the zeros of the azimuthal and polar components defined within the apparatus.
In some situations, it may be necessary to convert the local coordinates obtained using the apparatus into another coordinate system, for example a globally agreed coordinate system (for example Earth-Centred, Earth-Fixed (ECEF) coordinates or Global Positioning System (GPS) coordinates). For example, this may be necessary if the system for determining the position of the remote object is intended to interface or otherwise interact with another system that uses the other coordinate system. ECEF is a geographical coordinate system in which the coordinates (X, Y, Z) are given, in units of metres, in terms of a cartesian coordinate system in which the origin (0, 0, 0) is defined as the centre of mass of the earth. The positive z axis is defined to pass through the geographic north pole, the x axis is defined to intersect the points on the surface of the earth with 0° latitude and 0° longitude, and the y axis is defined to be perpendicular to both the x-axis and the z-axis. GPS coordinates are typically given in terms of a latitude value, a longitude value and an altitude value. Since the ECEF and GPS coordinate systems are fixed, it is relatively straightforward to convert coordinates from one system to the other by applying a predetermined transformation.
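Purely as an illustration of such a predetermined transformation (this conversion is well known and is not part of the disclosed method), a minimal Python sketch of the standard WGS84 geodetic-to-ECEF conversion is:

```python
import math

# Standard WGS84 ellipsoid constants
WGS84_A = 6378137.0             # semi-major axis, metres
WGS84_E2 = 6.69437999014e-3     # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert latitude/longitude/altitude to ECEF (X, Y, Z) in metres."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # Prime vertical radius of curvature at this latitude
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + alt_m) * math.sin(lat)
    return x, y, z
```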
However, transforming a set of local coordinates to a corresponding set of global coordinates in a certain global coordinate system, for example ECEF or GPS, may not be straightforward in some cases.
For example, the position (in global coordinates) of the apparatus for determining the position (in local coordinates) of the remote object may be unknown. In this case, the amount of translation required to transform one of the coordinate systems such that the origins of the respective coordinate systems coincide may be unknown.
In addition, the apparatus may attempt to align its local coordinate system with the global coordinate system by defining certain reference directions. For example, the apparatus may measure the direction of gravity and define a reference polar angle of 0° to correspond to a horizontal direction with respect to gravity. The apparatus may also measure the direction of magnetic north and define a reference azimuthal angle of 0° to correspond to the direction of north. However, due to errors in the measurements of the directions of gravity and magnetic north, the determined reference directions may be misaligned.
Furthermore, misalignment between the respective coordinate systems may arise as a result of imperfections in the manufacture of the apparatus. For example, the apparatus may be arranged so that the polar angle is set to have a value of 0° when a movable component comprising a direction sensor is maintained in a horizontal orientation. However, due to manufacturing imperfections, such an orientation may not be exactly horizontal in practice.
Therefore, what is desired is a technique for calibrating an apparatus or system for determining or measuring the position of a remote object, and in particular for calibrating a local coordinate system used by the apparatus or system with respect to another coordinate system. What is also desired is a technique for correcting errors, for example manufacturing errors and/or calibration errors, in the alignment of axes used in the system or apparatus.
SUMMARY
It is an aim of certain examples of the present disclosure to address, solve, mitigate or obviate, at least partly, at least one of the problems and/or disadvantages associated with the related art, for example at least one of the problems and/or disadvantages mentioned herein. Certain examples of the present disclosure aim to provide at least one advantage over the related art, for example at least one of the advantages mentioned herein.
The invention relating to this patent specification is defined by the independent claims. A non-exhaustive set of advantageous features that may be used in various examples of the present disclosure are defined in the dependent claims.
In accordance with an exemplary aspect of the present disclosure, there is provided a method for converting coordinates of measurements obtained by an apparatus for determining the position of a remote object, the method comprising: obtaining a first position measurement of a first object located at a first position, wherein the first position measurement is made by the apparatus and is expressed in a first coordinate system that is dependent on the position and/or orientation of the apparatus; obtaining a second position measurement of the first object at the first position, wherein the second position measurement is made by a position-measuring device co-located with the first object and is expressed in a second coordinate system that is independent of the apparatus; repeating the steps of obtaining first and second position measurements one or more times based on objects located at different positions; and determining a transformation for transforming coordinates between the first and second coordinate systems based on the first and second position measurements.
In accordance with another exemplary aspect of the present disclosure, there is provided a first apparatus for converting coordinates of measurements obtained by a second apparatus for determining the position of a remote object, the first apparatus comprising a processor configured for: obtaining a first position measurement of a first object located at a first position, wherein the first position measurement is made by the second apparatus and is expressed in a first coordinate system that is dependent on the position and/or orientation of the second apparatus; obtaining a second position measurement of the first object at the first position, wherein the second position measurement is made by a position-measuring device co-located with the first object and is expressed in a second coordinate system that is independent of the second apparatus; repeating the steps of obtaining first and second position measurements one or more times based on objects located at different positions; and determining a transformation for transforming coordinates between the first and second coordinate systems based on the first and second position measurements.
In certain examples, obtaining a first position measurement may comprise: measuring, by the apparatus, the line of sight distance from the apparatus to the first object; measuring, by the apparatus, the direction from the apparatus to the first object; and obtaining spherical coordinates of the position of the first object based on the line of sight distance and the direction.
In certain examples, obtaining a first position measurement may further comprise converting the spherical coordinates to cartesian coordinates.
In certain examples, the position-measuring device may comprise a Satellite Navigation System (SNS) receiver, and obtaining a second position measurement may comprise obtaining SNS coordinates of the first object.
In certain examples, obtaining first and second position measurements may be repeated at least two times to obtain three or more first and second position measurements.
In certain examples, repeating the steps of obtaining first and second position measurements may comprise at least one of: moving the first object to a second position, and obtaining first and second position measurements of the first object at the second position; and obtaining first and second position measurements of a second object located at the second position.
In certain examples, determining the transformation may comprise determining separate translation and rotation transformations that map the set of first position measurements to the set of second position measurements.
In certain examples, the translation transformation may be determined based on one or more of: the position of the apparatus in the second coordinate system, determined by a position-measuring device co-located with the apparatus; and a difference between the centroid of the set of first position measurements and the centroid of the set of second position measurements.
In certain examples, the rotation transformation may be determined based on Singular Value Decomposition (SVD).
In certain examples, the method may further comprise, or the processor may be further configured for: determining a difference, Δδ, between a first distance value and a second distance value, wherein the first distance value is the shortest distance between an object and a reference plane determined based on a first position measurement made by the apparatus with respect to the first coordinate system, and wherein the second distance value is the shortest distance between the object and the reference plane determined with respect to the second coordinate system; determining the length, d, of the projection of a vector, between the apparatus and the object, onto the reference plane; estimating a correction value, ε, based on the value Δδ/d, wherein the correction value corresponds to an error in the elevation angle of the object as measured by the apparatus.
In certain examples, the reference plane may be a horizontal plane that intersects the origin of the apparatus.
In certain examples, the second distance value may be determined based on at least one of: a transformation applied to the first position measurement; and a second position measurement corresponding to the first position measurement.
In certain examples, the correction value, ε, may be estimated based on an average of two or more values of Δδ/d calculated based on measurements of objects at two or more positions.
In certain examples, the method may be repeated one or more times, and the step of obtaining the first position measurement in a particular iteration may comprise applying the correction factor, ε, determined in the preceding iteration to the measured value.
In certain examples, the method may be repeated until the difference in correction values determined in successive iterations is less than a certain threshold.
Certain examples of the present disclosure provide a computer program comprising instructions arranged, when executed, to implement a method, device, apparatus and/or system in accordance with any aspect, embodiment, example or claim disclosed herein. Certain examples of the present disclosure provide a machine-readable storage storing such a program.
Other aspects, advantages, and salient features of the present disclosure will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, disclose various examples of the present disclosure.
BRIEF DESCRIPTION OF THE FIGURES
Figure 1 is a schematic diagram of an apparatus for determining the position of a remote object;
Figure 2 is an isometric view of a rotatable part of the apparatus illustrated in Figure 1;
Figure 3 is a side view of the part shown in Figure 2;
Figures 4 and 5 are diagrams illustrating the occurrences of errors in measurement of positions of remote objects due to a calibration error in the apparatus illustrated in Figures 1 to 3; and
Figure 6 is a flow diagram of an exemplary method for calibrating the apparatus illustrated in Figures 1 to 3.
DETAILED DESCRIPTION OF EXAMPLES
The following description, with reference to the accompanying drawings, is provided to assist in a comprehensive understanding of examples of the present disclosure. The description includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the examples described herein can be made without departing from the scope of the present disclosure.
The terms and words used in this specification are not limited to the bibliographical meanings, but, are merely used to enable a clear and consistent understanding of the present disclosure.
The same or similar components may be designated by the same or similar reference numerals, although they may be illustrated in different drawings.
Detailed descriptions of elements, features, components, structures, constructions, functions, operations, processes, characteristics, properties, integers and steps known in the art may be omitted for clarity and conciseness, and to avoid obscuring the subject matter of the present disclosure.
Throughout this specification, the words “comprises”, “includes”, “contains” and “has”, and variations of these words, for example “comprise” and “comprising”, mean “including but not limited to”, and are not intended to (and do not) exclude other elements, features, components, structures, constructions, functions, operations, processes, characteristics, properties, integers, steps and/or groups thereof.
Throughout this specification, the singular forms “a”, “an” and “the” include plural referents unless the context dictates otherwise. For example, reference to “an object” includes reference to one or more of such objects.
By the term “substantially” it is meant that the recited characteristic, parameter or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement errors, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic, parameter or value was intended to provide.
Throughout this specification, language in the general form of “X for Y” (where Y is some action, process, function, activity, operation or step and X is some means for carrying out that action, process, function, activity, operation or step) encompasses means X adapted, configured or arranged specifically, but not exclusively, to do Y.
Elements, features, components, structures, constructions, functions, operations, processes, characteristics, properties, integers, steps and/or groups thereof described herein in conjunction with a particular aspect, embodiment, example or claim are to be understood to be applicable to any other aspect, embodiment, example or claim disclosed herein unless incompatible therewith.
It will be appreciated that examples of the present disclosure can be realized in the form of hardware or a combination of hardware and software. Any such software may be stored in any suitable form of volatile or non-volatile storage device or medium, for example a ROM, RAM, memory chip, integrated circuit, or an optically or magnetically readable medium (e.g. CD, DVD, magnetic disk or magnetic tape). It will also be appreciated that storage devices and media are examples of machine-readable storage that are suitable for storing a program or programs comprising instructions that, when executed, implement examples of the present disclosure.
Figure 1 is a schematic diagram of an apparatus 100 for determining the position of a remote object. For example, the apparatus 100 may be incorporated into a system for deploying a first object (e.g. a net) for capturing, immobilising or disabling a second object (e.g. a UAV) such as described in International Patent Application number PCT/GB2016/051139.
The apparatus 100 comprises a distance measuring unit 101 for measuring the line-of-sight distance to the remote object, and a direction measuring unit 103 for measuring the direction of the remote object.
In the example of Figure 1, the direction measuring unit 103 comprises a camera 105 attached to a mount 107 that is rotatably connected to a base 109 such that the mount 107, and hence the camera 105, is rotatable about two axes (e.g. perpendicular axes) independently, thereby allowing the camera 105 to be pointed in any desired orientation. An isometric view of the rotatable part of the apparatus illustrated in Figure 1 is illustrated in Figure 2. The apparatus 100 also comprises a first servo motor 111 for rotating the mount 107 about the first axis and a second servo motor 113 for rotating the mount 107 about the second axis. The apparatus 100 also comprises a processor 115 that outputs control signals to the first and second servo motors 111, 113 for adjusting the degree of rotation of the camera 105 about the first and second axes. The processor 115 may also receive first and second feedback signals respectively from the first and second servo motors 111, 113 indicating the current rotation position of the respective servo motors 111, 113 and hence the respective angles of rotation of the mount about the first and second axes.
In order to determine the position of a remote object, the processor 115 outputs control signals to the servo motors 111, 113 causing the mount 107 to be rotated until the object is within the field of view (for example within a central portion of the field of view) of the camera 105. The determination of whether the object is in the field of view may be made, for example, by a human operator or automatically using object recognition techniques. When the object is within the field of view, the direction of the object may be computed by the processor 115 based on a combination of the feedback signals received from the first and second servo motors 111, 113 and the position of the object within the field of view. For example, the azimuthal angle may be determined based on the rotation position of the first servo motor 111 (indicated by the first feedback signal) together with the horizontal position of the object within the field of view of the camera 105. Similarly, the elevation angle may be determined based on the rotation position of the second servo motor 113 (indicated by the second feedback signal) together with the vertical position of the object within the field of view of the camera 105.
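As a rough illustration of this computation (not a detail of the apparatus 100), the direction might be derived from the servo feedback angles plus the object's pixel offset from the image centre. The linear pixels-to-degrees mapping assumed below is only a reasonable approximation for a narrow field of view, and all parameter names are illustrative:

```python
def object_direction(servo_az_deg, servo_el_deg, px_x, px_y,
                     image_w, image_h, deg_per_px_x, deg_per_px_y):
    """Estimate azimuth/elevation (degrees) of an object from servo
    feedback angles and the object's position within the camera image."""
    dx = px_x - image_w / 2.0      # horizontal offset from image centre
    dy = image_h / 2.0 - px_y      # vertical offset (image y grows downwards)
    azimuth = servo_az_deg + dx * deg_per_px_x
    elevation = servo_el_deg + dy * deg_per_px_y
    return azimuth, elevation
```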
The skilled person will appreciate that the form of the direction measuring unit 103 described above is merely an example and that other examples are possible. For example, the direction measuring unit 103 may comprise a viewfinder or sight (for example including a reticle) and one or more direction sensors (e.g. accelerometers, gyroscopes and/or magnetometers) for measuring the direction of the viewfinder or sight. When the object is located within the field of view of the viewfinder or sight (e.g. aligned with the reticle), the direction of the object may be determined based on the values output from the direction sensors.
In the example of Figure 1, the distance measuring unit 101 comprises a laser rangefinder. The rangefinder may be co-mounted on the mount with the camera 105 such that when the object is located within the field of view of the camera 105 (e.g. within a central portion of the field of view), the laser beam emitted by the rangefinder is emitted in the direction of the object. The rangefinder outputs a measured distance to the processor 115.
The skilled person will appreciate that the form of the distance measuring unit 101 described above is merely an example and that other examples (for example based on radar, lidar, acoustics and/or optics) are possible.
The processor 115 is configured for determining a set of coordinates of the object position based on the output of the distance measuring unit 101 and the direction measuring unit 103. For example, the coordinates may be expressed as spherical coordinates in which the radial component is determined based on the output of the distance measuring unit 101, and the polar and azimuthal components are determined based on the output of the direction measuring unit 103. In certain examples, the processor 115 may be configured for converting the spherical coordinates into coordinates of another coordinate system, for example cartesian coordinates. In each case, the coordinates determined by the processor 115 are coordinates of a local coordinate system of the apparatus 100.
Figure 6 is a flow chart of an exemplary method for calibrating the apparatus 100 illustrated in Figure 1. In particular, the method of Figure 6 may be used to determine a transformation for transforming coordinates of the local coordinate system of the apparatus 100 into another coordinate system, for example a globally agreed coordinate system (e.g. ECEF or GPS). In the following example, the other coordinate system is the GPS coordinate system, although the skilled person will appreciate that other coordinate systems may be used. For example, the coordinate system of a Satellite Navigation System (SNS) other than GPS, or any other suitable agreed coordinate system may be used.
In this method, the position of a “test object” is measured by the apparatus 100 to obtain local coordinates of the test object. In addition, the position of the test object in the global coordinate system (e.g. GPS coordinates in the example of Figure 6) is also obtained. For example, the test object may be provided with a self-position measuring unit (e.g. a GPS receiver in the example of Figure 6) for measuring its own position in the global coordinate system. This process may be repeated any suitable number of times, with the same or different test objects placed in different positions, to obtain pairs of corresponding coordinates in the local and global coordinate systems. Then, any suitable mathematical technique (for example based on Singular Value Decomposition or least squares) may be applied to determine a transformation for transforming coordinates given in the local system to the corresponding coordinates given in the global system, and vice versa.
In a first step 601, a test object may be placed at a certain fixed location. The test object may be any object that the apparatus 100 is able to measure the position of. For example, if the apparatus 100 is configured to automatically recognise a certain type of object (e.g. a UAV) using object recognition techniques then the test object may be an object of that type. The test object may be artificially placed specifically for the purpose of calibration. However, in the case that a suitable object is already in situ then that object may be used as the test object. In this case, step 601 may be omitted. In certain examples, the test object may be placed relatively far away from the apparatus 100. In some cases, placing the test object further away may provide more accurate calibration.
In a next step 603, the apparatus 100 determines the position of the test object in the local coordinate system. For example, when using the form of the apparatus described above, the camera 105 is oriented so that the test object is located within the field of view of the camera, and the position of the test object is determined in the manner described above. The apparatus may obtain the test object position in the form of (local) spherical coordinates, and may then convert the spherical coordinates into corresponding (local) cartesian coordinates.
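A minimal sketch of this conversion, assuming azimuth measured in the horizontal x-y plane and elevation measured upwards from it (one common convention; the apparatus may use another):

```python
import math

def spherical_to_cartesian(r, azimuth_rad, elevation_rad):
    """Convert a (range, azimuth, elevation) measurement to local
    cartesian coordinates with the z axis vertical."""
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return x, y, z
```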
In a next step 605, the position of the test object is determined in the global coordinate system. For example, a self-position measuring unit (e.g. a GPS receiver in this example) may be provided in the test object to determine its own position in the global coordinate system. Alternatively, another position-determining system may be used to determine the position of the test object in the global coordinate system.
In a next step 607 it is determined whether a sufficient number of samples (e.g. N samples) have been obtained. For example, in certain examples at least two samples may be required, while in other examples at least three samples may be required. A larger number of samples may provide greater reliability in the calibration, although at the expense of increasing the time required to perform the calibration. In practice, the measured positions are likely to have errors, and in this case it may be preferable to collect a relatively large number of samples (e.g. to obtain a ‘cloud’ of samples) so that any measurement errors can be averaged out.
If it is determined in step 607 that insufficient samples have been obtained then the test object is moved to a different fixed position and steps 603-607 are repeated for the newly positioned test object. In some examples the test object may be moved manually. Alternatively, the test object may be provided with mobility means or a propulsion system for moving the test object, or the test object may be placed on a moveable mount. In this case, the test object may be moved for example by remote control, under control of a user or automatically under control of the apparatus 100 or other entity when it is determined that another sample is required. As an alternative to moving the position of the test object, different test objects may be placed in advance at different positions, or different objects already in situ may be used.
If it is determined in step 607 that sufficient samples have been obtained then in a next step 609 the (fixed) position of the apparatus 100 in the global coordinate system is determined. In some examples, the apparatus 100 may be provided with a self-position measuring unit (e.g. a GPS receiver in this example) to determine its own position in the global coordinate system. Alternatively, the position of the apparatus 100 in the global coordinate system may be known in advance, in which case step 609 may be omitted. Furthermore, in certain embodiments, the calibration may still be possible if the position of the apparatus 100 in the global coordinate system is unknown, in which case step 609 may be omitted. In the above example, step 609 is performed when it has been determined that sufficient samples have been obtained. However, in other examples, step 609 may be performed at any suitable point in the method, for example before step 601.
As a result of steps 603-607, two datasets are obtained, where the first dataset, A, comprises the coordinates of different geometric positions expressed in the local coordinate system and the second dataset, B, comprises the coordinates of the same geometric positions expressed in the global coordinate system.
In a next step 611, any suitable technique is used to determine the best transformation capable of transforming dataset A into dataset B. The resulting transformation represents the geometric transformation for converting local coordinates into corresponding global coordinates. In certain examples, the transformation may comprise a combination of a translation operation, t (for translating the origin of one coordinate system to coincide with the origin of the other) and a rotation operation, R (for rotating one coordinate system to align with the other coordinate system). In this case, it is desired to solve for R and t in the following equation: B=R*A+t
If the position of the apparatus 100 in the global coordinate system is known then the translation operation t can be obtained directly from this value. However, if the position of the apparatus 100 in the global coordinate system is unknown then the translation may be determined based on the centroids (i.e. the average points) of the respective datasets:

$$\mu_A = \frac{1}{N}\sum_{i=1}^{N} P_a^{(i)}, \qquad \mu_B = \frac{1}{N}\sum_{i=1}^{N} P_b^{(i)}$$

where Pa(i) and Pb(i) are points in datasets A and B respectively.

Both datasets are re-centred by subtracting the respective centroids from the dataset values and a matrix H is derived from the resulting shifted datasets. Specifically, H is calculated using the following equation:

$$H = \sum_{i=1}^{N} \left(P_a^{(i)} - \mu_A\right)\left(P_b^{(i)} - \mu_B\right)^T$$

where T denotes the transpose operator.

Next, matrices U, S and V are found using Singular Value Decomposition (SVD) that satisfy the following equation:

$$H = U\,S\,V^T$$

Then, the rotation matrix R is given by the following equation:

$$R = V\,U^T$$

The translation t is given by the following equation:

$$t = \mu_B - R\,\mu_A$$

The above technique requires at least three samples in each dataset. In the case that there are more than three samples, a least squares solution may be obtained such that the following error, err, is minimised:

$$err = \sum_{i=1}^{N} \left\| R\,P_a^{(i)} + t - P_b^{(i)} \right\|^2$$

where || · || denotes the Euclidean distance between two vectors (a scalar value). The value of err is the square distance error between the points in the two datasets.
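For illustration only, the SVD-based procedure described above (often known in the literature as the Kabsch algorithm) may be sketched in Python using numpy. The determinant check that guards against a reflection solution is a standard safeguard added here for robustness; it is not a step recited above.

```python
import numpy as np

def fit_rigid_transform(A, B):
    """Find R and t minimising sum_i ||R*A_i + t - B_i||^2.

    A, B: (N, 3) arrays of matching points (N >= 3), e.g. local and
    global coordinates of the same test-object positions.
    """
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    mu_a = A.mean(axis=0)            # centroid of dataset A
    mu_b = B.mean(axis=0)            # centroid of dataset B
    H = (A - mu_a).T @ (B - mu_b)    # 3x3 cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)      # H = U @ np.diag(S) @ Vt
    R = Vt.T @ U.T                   # R = V * U^T
    if np.linalg.det(R) < 0:         # reflection rather than rotation
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_b - R @ mu_a              # t = centroid_B - R * centroid_A
    return R, t
```

The residual err defined above can then be evaluated as, for example, np.sum(np.linalg.norm(A @ R.T + t - B, axis=1) ** 2).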
The skilled person will appreciate that the technique described above to determine the transformation between A and B is merely exemplary and that any other suitable technique may be used.
The apparatus 100, or any other entity, may then use the determined transformation to convert from local coordinates to global coordinates. The apparatus 100, or any other entity, may use the inverse transformation to convert from global coordinates to local coordinates.
Any suitable entity in the overall system may perform certain steps in the exemplary method described above, for example the step of determining the best transformation (step 611). For example, this step may be performed by the apparatus 100 or a module provided in the apparatus 100. In this case, the apparatus may receive, via a receiver, the global measurements from the self-position measuring unit co-located with the test object, and the processor 115 may perform the calculations for determining the transformation. Alternatively, step 611 may be carried out by another entity in the system that is separate from the apparatus 100. For example, the other entity may receive, via a receiver, global measurements from the self-position measuring unit co-located with the test object, and local measurements from the apparatus 100. The other entity may be provided with a processor for performing the calculations for determining the transformation.
Figure 2 is an isometric view of a rotatable part of the apparatus 100 illustrated in Figure 1, and Figure 3 is a side view of the part shown in Figure 2. In particular, Figures 2 and 3 illustrate the mount 107 on which the camera 105 is attached and that is rotatably connected to the base 109 (not shown in Figures 2 and 3). In Figures 2 and 3, 201 denotes a first axis of rotation (slew axis), 202 denotes a second axis of rotation (pitch axis), and 203 denotes the line of sight of the rangefinder/camera. The first and second servo motors 111, 113 control rotations of the camera about the slew and pitch axes 201, 202. In Figure 3, Θ denotes the angle between the slew axis 201 and the line of sight axis 203.
When calibrating the apparatus 100, the angle of the pitch axis 202 may be set to have a reference value of 0° ideally when the line of sight axis 203 is exactly perpendicular to the slew axis 201 (that is, the angle Θ illustrated in Figure 3 is exactly 90°) and the line of sight axis 203 is exactly horizontal. However, due to certain errors, for example calibration errors and/or manufacturing errors, even after initial calibration the angle Θ may not be exactly 90° when the angle of the pitch axis is registered or recorded as having a reference value of 0°. The difference of the angle Θ from 90° when the pitch angle is recorded as 0° may be denoted ε. In this case, the line of sight axis 203 may deviate slightly from the horizontal orientation, specifically by an angle ε. This situation can result in errors in the measurement of remote objects by the apparatus 100.
For example, Figure 4 illustrates an example in which the apparatus 100 is attempting to measure the position of a remote object 401. In this example, it is assumed that the precise positions of the object 401 and the apparatus 100 in the global coordinate system are known. In Figure 4, 404 denotes a horizontal plane passing through the origin of the apparatus 100. It is possible to compute, from the global positions of the apparatus 100 and the object 401, (i) the true vertical height, hG, of the object 401 above the plane 404, (ii) the true horizontal distance, dG, between the apparatus 100 and the object 401, (iii) the true line of sight distance, rG, between the apparatus 100 and the object 401, and (iv) the true elevation angle, φG, between the apparatus 100 and the object 401.
Due to the error, ε, in calibration and/or manufacture described above, the apparatus 100 measures the elevation angle between the apparatus 100 and the object as φL, where φL = φG + ε. On the other hand, the apparatus measures the line of sight distance to the object 401 as rL, where rL = rG (i.e. the measured line of sight distance is equal to the true line of sight distance). Accordingly, as shown in Figure 4, the apparatus 100 determines that the object is at a position 407, which is different from its true position 401. This difference manifests in a difference in height, Δh, between the true height and the perceived height, and a difference in horizontal distance, Δd, between the true distance and the perceived distance, as well as a difference in the elevation angle ε.
For small ε, dL ≈ dG, and also the small angle approximation of the tangent function may be used:

$$\tan\varepsilon \approx \varepsilon$$

Therefore:

$$\varepsilon \approx \tan\varepsilon \approx \frac{\Delta h}{d}$$
Accordingly, the calibration error ε may be determined approximately by dividing (i) the difference in height, Δh, between the heights, hL and hG, determined in the local and global coordinates, respectively, by (ii) the horizontal distance, dL or dG, between the apparatus 100 and the object 401 determined in either the local or global coordinates. In three dimensions, the horizontal distance d may be given by:

$$d = \sqrt{x^2 + y^2}$$

where x and y are the distances between the apparatus 100 and the object 401 along the x and y axes, respectively.
Alternatively, the calibration error ε may be determined by calculating the values of φG and φL based on the local and global coordinates of the apparatus 100 and object 401, and calculating the difference ε = φL − φG.
When a number of different samples have been obtained, then the estimated calibration errors ε determined from the different samples may be averaged to obtain a more reliable estimate of the calibration error ε. Figure 4 illustrates a number of samples, including the true position of the object and the position of the same object as perceived by the apparatus due to the calibration error.
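A minimal sketch of this averaging, assuming the per-sample local and global heights and the horizontal distances have already been computed (all names illustrative):

```python
import numpy as np

def estimate_elevation_error(h_local, h_global, d_horizontal):
    """Average delta_h / d over all samples to estimate the elevation
    calibration error epsilon (radians), per the small-angle approximation."""
    h_local = np.asarray(h_local, dtype=float)
    h_global = np.asarray(h_global, dtype=float)
    d = np.asarray(d_horizontal, dtype=float)
    return float(np.mean((h_local - h_global) / d))
```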
Figure 5 illustrates the result of subtracting the vertical values hG (obtained from the global coordinate system) from the corresponding vertical values hL (obtained from the local coordinate system). If the calibration error ε were equal to zero then the resulting points would all lie within the horizontal plane 404. However, as illustrated in Figure 5, if the calibration error ε is non-zero then the resulting points would lie approximately on a conical surface (i.e. a cone with central axis lying in the z direction, the cross-section of which is illustrated in Figure 5 as line 501).
In view of the approximate nature of the estimate of the calibration error ε, in certain examples an iterative method may be applied to refine the value of the calibration error.
For example, in a first step, two sets/groups of matching points are collected - global coordinates or “global points” (e.g. from GPS), and local coordinates or “local points” obtained from the apparatus - for example in the manner described above.
In a second step, assume that the line of sight axis 203 and the slew axis 201 are perpendicular, and calculate the transformation (e.g. represented by a rotation matrix) that best matches one group of points onto the other group of points, for example as described above. If the origins of the two groups are not coincident then calculate a lateral displacement vector by comparing the centroids of the two groups of points, for example as described above.
In a third step, transform the local coordinates into corresponding global coordinates using the calculated transformation (e.g. represented by a rotation matrix).
In a fourth step, from each local coordinate point, subtract the vertical value of its corresponding global coordinate point from its own vertical value. The resulting values correspond to the values of Ah described above.
As described above, if the “zero” point is correct (e.g. the angle of the pitch axis 202 has a reference value of 0° when the line of sight axis 203 is exactly perpendicular to the slew axis 201) then all the points will map onto the horizontal plane 404. However, if it is not, then the points will map onto a conical surface, as described above.
In a fifth step, the elevation based on one global/local point pair is calculated according to the following equation:

$$\text{elevation} = \arctan\!\left(\frac{z}{\sqrt{x^{2}+y^{2}}}\right)$$

where z is vertical and the x-y plane is horizontal. In the above equation, z represents a value resulting from the subtraction in the fourth step based on one global/local point pair, and x and y denote the x and y coordinates, in the local coordinate system, of the local point.
In a sixth step, the calibration error ε in the zero pitch position is estimated from (e.g. given by) the average elevation (i.e. the average of the elevation values calculated using the above equation based on different global/local point pairs).
In a seventh step, the calibration error correction ε is applied, and the method is repeated from the first step until the change in the estimated zero pitch position is smaller than an acceptable limit. For example, the error correction may be applied to each local point by adjusting the measured pitch axis angle (or polar angle) by an amount equal to the error correction value, ε. Alternatively, the pitch axis reference value may be recalibrated by an amount equal to the error correction value, ε.
In certain examples, to avoid collecting new points at each iteration, the raw measured direction values (polar and azimuthal angles) obtained by the apparatus in the first step may be stored, and the global coordinate points may be calculated anew in the third step at each iteration. In this way, the corrections can be applied to the raw data without needing to collect fresh points each time.
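The iteration may be sketched as follows, re-using the fit_rigid_transform sketch given earlier. The raw-measurement layout, the tolerance and the iteration cap are illustrative assumptions, not details of the disclosure:

```python
import numpy as np

def calibrate_pitch_zero(raw, global_pts, tol_rad=1e-5, max_iter=20):
    """Iteratively refine the pitch-zero error epsilon.

    raw        : (N, 3) array of raw measurements (range, azimuth, elevation),
                 angles in radians, as stored in the first step
    global_pts : (N, 3) matching points in a global cartesian frame
    """
    epsilon = 0.0
    for _ in range(max_iter):
        # First step: rebuild local points from the stored raw angles,
        # applying the current correction to the measured elevation
        r, az, el = raw[:, 0], raw[:, 1], raw[:, 2] - epsilon
        local = np.stack([r * np.cos(el) * np.cos(az),
                          r * np.cos(el) * np.sin(az),
                          r * np.sin(el)], axis=1)
        # Second and third steps: best rigid transform local -> global
        R, t = fit_rigid_transform(local, global_pts)
        mapped = local @ R.T + t
        # Fourth to sixth steps: average per-pair elevation of the
        # residual height mismatch
        z = mapped[:, 2] - global_pts[:, 2]
        d = np.hypot(local[:, 0], local[:, 1])
        delta = float(np.mean(np.arctan2(z, d)))
        # Seventh step: apply the correction; stop once the change is small
        epsilon += delta
        if abs(delta) < tol_rad:
            break
    return epsilon
```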
In the examples described above, a correction factor ε between the line of sight axis 203 and the slew axis 201 may be determined. However, the skilled person will appreciate that similar techniques may be applied to determine correction factors for other misalignment errors, for example a misalignment error between the pitch axis 202 and the slew axis 201.
In the above examples, a difference in height, Δh, may be determined between a first height value and a second height value. The first height value may be the height of an object above a reference height (e.g. the height of a horizontal plane 404 passing through the origin of the apparatus 100) determined based on a first position measurement made by the apparatus with respect to the first coordinate system. The second height value may be the height of the object above the reference height determined with respect to the second coordinate system. The horizontal distance, d, between the apparatus and the object may be determined. The correction value, ε, may then be estimated based on the value Δh/d, wherein the correction value corresponds to an error in the elevation angle of the object as measured by the apparatus.
In a more general case, a difference, Δδ, between a first distance value and a second distance value may be determined. The first distance value may be the shortest distance between an object and a reference plane (having any suitable orientation and position) determined based on a first position measurement made by the apparatus with respect to the first coordinate system. The second distance value may be the shortest distance between the object and the reference plane determined with respect to the second coordinate system. The length, d, of the projection of a vector, between the apparatus and the object, onto the reference plane may be determined. A correction value, ε, may be estimated based on the value Δδ/d. This correction value may be used to correct a certain type of misalignment error, for example depending on the orientation and position of the reference plane.
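A sketch of this more general form, assuming (as in the horizontal-plane example) that the reference plane passes through the apparatus origin, which is therefore used as the point on the plane; all names are illustrative:

```python
import numpy as np

def plane_correction(p_local, p_global, apparatus_origin, plane_normal):
    """Estimate an angular correction (radians) from one matched point pair
    and a reference plane through the apparatus origin with unit normal n."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    v_local = np.asarray(p_local, dtype=float) - apparatus_origin
    v_global = np.asarray(p_global, dtype=float) - apparatus_origin
    # Difference of the signed shortest distances from the plane
    delta = float(np.dot(v_local, n) - np.dot(v_global, n))
    # Length of the projection of the apparatus-to-object vector onto the plane
    d = float(np.linalg.norm(v_local - np.dot(v_local, n) * n))
    return delta / d   # small-angle estimate, analogous to the Δδ/d above
```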
In addition, although the technique described above may be applied to calibrate an apparatus with a certain coordinate system (e.g. a globally agreed coordinate system), the skilled person will appreciate that the techniques described herein may be applied to accurately level something, for example a snooker table.

Claims (16)

1. A method for converting coordinates of measurements obtained by an apparatus for determining the position of a remote object, the method comprising: obtaining a first position measurement of a first object located at a first position, wherein the first position measurement is made by the apparatus and is expressed in a first coordinate system that is dependent on the position and/or orientation of the apparatus; obtaining a second position measurement of the first object at the first position, wherein the second position measurement is made by a position-measuring device co-located with the first object and is expressed in a second coordinate system that is independent of the apparatus; repeating the steps of obtaining first and second position measurements one or more times based on objects located at different positions; and determining a transformation for transforming coordinates between the first and second coordinate systems based on the first and second position measurements.
2. A method according to claim 1, wherein obtaining a first position measurement comprises: measuring, by the apparatus, the line of sight distance from the apparatus to the first object; measuring, by the apparatus, the direction from the apparatus to the first object; and obtaining spherical coordinates of the position of the first object based on the line of sight distance and the direction.
3. A method according to claim 2, wherein obtaining a first position measurement further comprises converting the spherical coordinates to cartesian coordinates.
4. A method according to any preceding claim, wherein the position-measuring device comprises a Satellite Navigation System (SNS) receiver, and wherein obtaining a second position measurement comprises obtaining SNS coordinates of the first object.
5. A method according to any preceding claim, wherein the steps of obtaining first and second position measurements are repeated at least two times to obtain three or more first and second position measurements.
6. A method according to any preceding claim, wherein repeating the steps of obtaining first and second position measurements comprises at least one of: moving the first object to a second position, and obtaining first and second position measurements of the first object at the second position; and obtaining first and second position measurements of a second object located at the second position.
7. A method according to any preceding claim, wherein determining the transformation comprises determining separate translation and rotation transformations that map the set of first position measurements to the set of second position measurements.
8. A method according to claim 7, wherein the translation transformation is determined based on one or more of: the position of the apparatus in the second coordinate system, determined by a position-measuring device co-located with the apparatus; and a difference between the centroid of the set of first position measurements and the centroid of the set of second position measurements.
9. A method according to claim 7 or 8, wherein the rotation transformation is determined based on Singular Value Decomposition (SVD).
10. A method according to any preceding claim, further comprising: determining a difference, Δδ, between a first distance value and a second distance value, wherein the first distance value is the shortest distance between an object and a reference plane determined based on a first position measurement made by the apparatus with respect to the first coordinate system, and wherein the second distance value is the shortest distance between the object and the reference plane determined with respect to the second coordinate system; determining the length, d, of the projection of a vector, between the apparatus and the object, onto the reference plane; estimating a correction value, ε, based on the value Δδ/d.
11. A method according to claim 10, wherein the reference plane is a horizontal plane that intersects the origin of the apparatus, and wherein the correction value corresponds to an error in the elevation angle of the object as measured by the apparatus.
12. A method according to claim 10 or 11, wherein the second distance value is determined based on at least one of: a transformation applied to the first position measurement; and a second position measurement corresponding to the first position measurement.
13. A method according to claim 10, 11 or 12, wherein the correction value, ε, is estimated based on an average of two or more values of Δδ/d calculated based on measurements of objects at two or more positions.
14. A method according to any of claims 10 to 13, wherein the method is repeated one or more times, and wherein the step of obtaining the first position measurement in a particular iteration comprises applying the correction factor, ε, determined in the preceding iteration to the measured value.
15. A method according to claim 14, wherein the method is repeated until the difference in correction values determined in successive iterations is less than a certain threshold.
16. A first apparatus for converting coordinates of measurements obtained by a second apparatus for determining the position of a remote object, the first apparatus comprising a processor configured for: obtaining a first position measurement of a first object located at a first position, wherein the first position measurement is made by the second apparatus and is expressed in a first coordinate system that is dependent on the position and/or orientation of the second apparatus; obtaining a second position measurement of the first object at the first position, wherein the second position measurement is made by a position-measuring device co-located with the first object and is expressed in a second coordinate system that is independent of the second apparatus; repeating the steps of obtaining first and second position measurements one or more times based on objects located at different positions; and determining a transformation for transforming coordinates between the first and second coordinate systems based on the first and second position measurements.
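The claims above lend themselves to short illustrative sketches. The Python fragments that follow are editorial illustrations only: plausible realisations under stated assumptions, not part of the patent text and not necessarily the applicant's implementation. First, claims 2 and 3 amount to treating a measured line-of-sight range plus direction as a spherical coordinate triple and converting it to Cartesian coordinates in the apparatus frame. The axis and angle conventions below (elevation measured from the horizontal, angles in radians) are assumptions, since the claims do not fix them.

    import numpy as np

    def measurement_to_cartesian(r, azimuth, elevation):
        # Claims 2-3 sketch: range r (metres) and a direction given as
        # azimuth/elevation (radians) already form spherical coordinates;
        # converting to Cartesian coordinates in the apparatus frame:
        x = r * np.cos(elevation) * np.cos(azimuth)
        y = r * np.cos(elevation) * np.sin(azimuth)
        z = r * np.sin(elevation)
        return np.array([x, y, z])
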
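Claims 7 to 9 pair a centroid-based translation with an SVD-based rotation, a combination commonly realised as the Kabsch (orthogonal Procrustes) algorithm. The sketch below is one such realisation, assuming matched, equal-length point sets of at least three points (claim 5):

    import numpy as np

    def fit_rigid_transform(first_pts, second_pts):
        # first_pts, second_pts: (N, 3) arrays of matched first and second
        # position measurements. Returns R, t such that
        # second_pts ~= first_pts @ R.T + t.
        c1 = first_pts.mean(axis=0)    # centroid of first measurements
        c2 = second_pts.mean(axis=0)   # centroid of second measurements
        # Rotation from the SVD of the cross-covariance (claim 9).
        H = (first_pts - c1).T @ (second_pts - c2)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        # Translation from the centroid difference (claim 8).
        t = c2 - R @ c1
        return R, t
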
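For claims 10 to 13, Δδ/d is the tangent of the angle subtended at the apparatus by the height discrepancy Δδ over horizontal distance d, so for a small error the elevation-angle correction is s ≈ Δδ/d radians. A minimal sketch, taking Δδ as the second distance value minus the first (the claims leave the sign convention open):

    import numpy as np

    def estimate_elevation_correction(first_heights, second_heights, horiz_dists):
        # first_heights:  object heights above the reference plane per the
        #                 apparatus (first coordinate system)
        # second_heights: the same heights per the second coordinate system
        # horiz_dists:    lengths d of the apparatus-to-object vectors
        #                 projected onto the reference plane
        # Averaging delta/d over several objects (claim 13) estimates the
        # elevation-angle error s in radians, via tan(s) ~= s for small s.
        delta = np.asarray(second_heights) - np.asarray(first_heights)
        return float(np.mean(delta / np.asarray(horiz_dists)))
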
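Claims 14 and 15 describe an iterative refinement: apply the current correction, re-measure, re-estimate, and stop once the correction stops changing. A schematic loop, with measure_fn and estimate_fn as hypothetical stand-ins for the measurement and estimation steps sketched above, and with accumulation of successive corrections assumed:

    def calibrate_iteratively(measure_fn, estimate_fn, threshold=1e-4, max_iter=10):
        # measure_fn(s):  re-acquires first position measurements with the
        #                 correction s applied to the measured values (claim 14)
        # estimate_fn(m): returns the residual correction, e.g. the average
        #                 of delta/d, seen in measurements m
        s = 0.0
        for _ in range(max_iter):
            residual = estimate_fn(measure_fn(s))
            # Successive corrections differ by exactly |residual| here, so
            # this implements the stopping rule of claim 15.
            if abs(residual) < threshold:
                break
            s += residual
        return s
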
GB1802445.5A 2018-02-14 2018-02-14 Calibration of object position-measuring apparatus Withdrawn GB2573090A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1802445.5A GB2573090A (en) 2018-02-14 2018-02-14 Calibration of object position-measuring apparatus

Publications (2)

Publication Number Publication Date
GB201802445D0 GB201802445D0 (en) 2018-03-28
GB2573090A 2019-10-30

Family

ID=61731391

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1802445.5A Withdrawn GB2573090A (en) 2018-02-14 2018-02-14 Calibration of object position-measuring apparatus

Country Status (1)

Country Link
GB (1) GB2573090A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11592579B2 (en) * 2017-12-21 2023-02-28 Hilti Aktiengesellschaft Method for searching for a target object

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114167900B (en) * 2021-11-19 2023-06-30 北京环境特性研究所 Photoelectric tracking system calibration method and device based on unmanned aerial vehicle and differential GPS

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020162668A1 (en) * 2001-03-16 2002-11-07 Carlson David S. Blade control apparatuses and methods for an earth-moving machine
US20070276590A1 (en) * 2006-05-24 2007-11-29 Raytheon Company Beacon-Augmented Pose Estimation
US20100315286A1 (en) * 2009-06-12 2010-12-16 Trimble Navigation Limited System and Method for Site Calibration of a Surveying Device
US20130024117A1 (en) * 2011-07-18 2013-01-24 Pavetti Scott R User Navigation Guidance and Network System
US20140285631A1 (en) * 2013-03-20 2014-09-25 Trimble Navigation Limited Indoor navigation via multi-beam laser projection

Also Published As

Publication number Publication date
GB201802445D0 (en) 2018-03-28

Similar Documents

Publication Title
CN108921947B (en) Method, device, equipment, storage medium and acquisition entity for generating electronic map
EP3454008B1 (en) Survey data processing device, survey data processing method, and survey data processing program
CN109883444B (en) Attitude angle coupling error compensation method and device and electronic equipment
US20190072392A1 (en) System and method for self-geoposition unmanned aerial vehicle
CN105184776A (en) Target tracking method
KR20200064542A (en) Apparatus for measuring ground control point using unmanned aerial vehicle and method thereof
CN113340277B (en) High-precision positioning method based on unmanned aerial vehicle oblique photography
Tjahjadi et al. Single frame resection of compact digital cameras for UAV imagery
CN107656286A (en) Object localization method and system under big beveled distal end observing environment
CN115760999B (en) Monocular camera calibration and target geographic position extraction method based on GIS assistance
GB2573090A (en) Calibration of object position-measuring apparatus
CN109146936B (en) Image matching method, device, positioning method and system
Han et al. A direct determination of the orientation parameters in the collinearity equations
CN116908818B (en) Laser radar calibration method and device based on RTK unmanned aerial vehicle and storage medium
JP3874363B1 (en) Position rating device, position rating method, and position rating program
CN112098926B (en) Intelligent angle measurement training sample generation method by using unmanned plane platform
Skaloud et al. Mapping with MAV: experimental study on the contribution of absolute and relative aerial position control
CN113654528B (en) Method and system for estimating target coordinates through unmanned aerial vehicle position and cradle head angle
Guntel et al. Accuracy analysis of control point distribution for different terrain types on photogrammetric block
Verykokou et al. Metric exploitation of a single low oblique aerial image
US11514597B1 (en) Single-camera stereoaerophotogrammetry using UAV sensors
CN114025320A (en) Indoor positioning method based on 5G signal
Calhoun et al. Flight test evaluation of image rendering navigation for close-formation flight
Bender et al. Ins-camera calibration without ground control points
CN117953007B (en) Linear motion compensation control method based on image matching

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)