CN111783659B - Remote lofting method and device based on machine vision, terminal equipment and storage medium

Info

Publication number: CN111783659B
Application number: CN202010622000.0A
Authority: CN (China)
Prior art keywords: imaging, marker, center point, video picture, calculating
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN111783659A
Inventors: 郑文, 张翔, 林恒
Current Assignee: Fujian Huichuan Internet Of Things Technology Science And Technology Co ltd
Original Assignee: Fujian Huichuan Internet Of Things Technology Science And Technology Co ltd
Events: application filed by Fujian Huichuan Internet Of Things Technology Science And Technology Co ltd; priority to CN202010622000.0A; publication of CN111783659A; PCT application PCT/CN2021/081145 (WO2022001193A1); application granted; publication of CN111783659B


Classifications

    • G06V 20/40: Scenes; scene-specific elements in video content (Physics; Computing; Image or video recognition or understanding)
    • G01C 11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying (Physics; Measuring; Surveying and navigation)
    • G01C 11/36: Videogrammetry, i.e. electronic processing of video signals from a single source or from different sources to give parallax or range information
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods (Physics; Computing; Image data processing; Image analysis)
    • H04N 23/61: Control of cameras or camera modules based on recognised objects (Electricity; Pictorial communication, e.g. television)
    • H04N 23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Abstract

The application provides a remote lofting method, device, terminal device and storage medium based on machine vision. The method can automatically track an easily detected marker through a machine vision algorithm and, when a precise positioning instruction is received, complete precise positioning against the target object, thereby finishing the lofting. The application has the advantages of high lofting efficiency, high lofting precision and low cost.

Description

Remote lofting method and device based on machine vision, terminal equipment and storage medium
Technical Field
The application relates to the field of measurement, and in particular to a remote lofting method and device based on machine vision, a terminal device and a storage medium.
Background
In the engineering construction process, the spatial coordinates of each key point must be measured accurately to guide subsequent construction work; this measurement is an important link in engineering construction. The current mainstream lofting method uses a total station and requires two surveyors working in close coordination: a rod person moves the prism rod to the key point under the guidance of an operator, and the operator precisely aligns the laser landing point of the total station with the center of the prism rod to measure the spatial coordinates of the key point. Because this mode of measurement can only be carried out with the close cooperation of two surveyors, and its precision is limited by the professional skill of the operator, it suffers from defects such as low measurement efficiency and low precision.
Disclosure of Invention
The embodiment of the application aims to provide a remote lofting method and device based on machine vision, terminal equipment and a storage medium, which are used for improving lofting efficiency and precision.
The application discloses a remote lofting method based on machine vision, which is applied to terminal equipment and comprises the following steps:
receiving a first activation instruction, and converting from a target detection state to a target tracking state, wherein in the target tracking state, the terminal equipment executes:
acquiring a video picture from a distance measuring device;
detecting whether the video picture contains a marker or not according to a machine vision algorithm;
when detecting that the video picture contains a marker, calculating an imaging width and an imaging height of the marker in the video picture and a first center point imaging coordinate of the marker in the video picture;
determining a first current angle of the pan-tilt according to the first center point imaging coordinates;
determining the imaging size of the marker according to the imaging width and the imaging height;
calculating the spatial coordinates of the marker according to the first current angle of the pan-tilt, the current camera magnification of the ranging device and the imaging size of the marker;
receiving a second activation instruction, and converting from the target tracking state to a precise alignment state, wherein in the precise alignment state the terminal device executes:
acquiring a video picture from the ranging device again;
calculating the center point imaging coordinates of the target object in the video picture according to a machine vision algorithm;
calculating the rotation angle of the pan-tilt according to the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by the rotation angle and the laser landing point of the ranging device falls on the center point of the target object;
acquiring a second current angle of the pan-tilt after rotation and a laser distance between the ranging device and the target object;
and calculating the spatial coordinates of the center point of the target object according to the second current angle and the laser distance.
According to the remote lofting method based on machine vision, the terminal device receives a first activation instruction, acquires a video picture from the ranging device, and detects whether the video picture contains the marker according to a machine vision algorithm. When the marker is detected, it calculates the imaging width and imaging height of the marker in the video picture and the first center point imaging coordinates of the marker in the video picture. It then determines the first current angle of the pan-tilt from the first center point imaging coordinates, determines the imaging size of the marker from the imaging width and imaging height, and calculates the spatial coordinates of the marker from the first current angle of the pan-tilt, the current camera magnification of the ranging device and the imaging size of the marker, thereby completing tracking of the marker.
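The patent text does not give the formula behind this spatial-coordinate step; a natural reading is a pinhole-camera range estimate, in which the marker's known real size and its imaging size yield the distance, and the pan-tilt angles then place the marker in space. The following Python sketch rests on that assumption, and every name in it (base_focal_px, marker_real_h_m, and so on) is illustrative rather than taken from the patent.

```python
import math

# Sketch under an assumed pinhole model: estimate the marker's distance from
# its imaging size, then place it in space using the pan-tilt's first current
# angle (pan, tilt measured from the horizontal plane).
def marker_coordinates(pan_deg, tilt_deg, imaging_h_px,
                       marker_real_h_m, base_focal_px, magnification):
    focal_px = base_focal_px * magnification           # focal length at current zoom
    dist = marker_real_h_m * focal_px / imaging_h_px   # range from apparent size
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    horiz = dist * math.cos(tilt)                      # projection onto the ground plane
    return (horiz * math.cos(pan), horiz * math.sin(pan), dist * math.sin(tilt))
```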
On the other hand, after tracking of the marker is completed, the terminal device may enter the precise alignment state. In this state, the terminal device again acquires a video picture from the ranging device, calculates the center point imaging coordinates of the target object in the video picture according to the machine vision algorithm, and calculates the rotation angle of the pan-tilt from the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by that angle and the laser landing point of the ranging device falls on the center point of the target object. The terminal device then obtains the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object, and calculates the spatial coordinates of the center point of the target object from the second current angle and the laser distance, so that precise positioning is accomplished by means of the target object.
Compared with manual lofting in the prior art, the embodiment of the application uses a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning against the target object, so the surveyor no longer needs to run back and forth, the dependence on the surveyor's professional skill is low, and the influence of operator error on measurement precision is reduced; the embodiment therefore has the advantages of high measurement efficiency and high measurement precision. Meanwhile, because the target object itself is inconspicuous to a machine vision algorithm and therefore hard to track, the embodiment of the application tracks stably by means of the easily recognized marker. On the other hand, compared with automatic lofting equipment in the prior art, the embodiment of the application has the advantage of low cost.
In a first aspect of the present application, as an optional implementation manner, the marker is one of a measurement operator, a reflective vest, and a balloon.
In this alternative embodiment, a measurement operator, a reflective vest or a balloon can serve as the marker because of its large volume or conspicuous color.
In a first aspect of the present application, as an optional implementation manner, the determining the first current angle of the pan-tilt according to the first center point imaging coordinate includes the sub-steps of:
comparing the first center point imaging coordinates with the center point coordinates of the video picture to obtain the pixel difference between the two;
calculating the horizontal angle and the vertical angle through which the pan-tilt needs to rotate according to the pixel difference;
and driving the pan-tilt to rotate by the horizontal angle and the vertical angle, so that the center point of the video picture is aligned with the center point of the marker, and taking the angle of the pan-tilt after rotation as the first current angle of the pan-tilt.
In this optional embodiment, the pixel difference between the first center point imaging coordinates and the center point coordinates of the video picture is obtained by comparing the two; the horizontal and vertical angles through which the pan-tilt needs to rotate are then calculated from the pixel difference; finally the pan-tilt is driven to rotate by those angles, so that the center point of the video picture is aligned with the center point of the marker, which yields the first current angle of the pan-tilt.
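As a minimal illustration of these sub-steps, the Python sketch below converts the pixel difference between the marker's imaged center and the picture center into the horizontal and vertical angles through which the pan-tilt should rotate. It assumes the camera's horizontal and vertical fields of view at the current magnification are known and uses a small-angle linear mapping; the parameter names are illustrative, not the patent's.

```python
# Sketch: pixel difference -> pan-tilt rotation angles (small-angle assumption).
def rotation_from_pixel_diff(marker_cx, marker_cy, frame_w, frame_h,
                             fov_h_deg, fov_v_deg):
    """Return (pan_deg, tilt_deg) that would center the marker.

    marker_cx, marker_cy: first center point imaging coordinates, in pixels.
    fov_h_deg, fov_v_deg: camera field of view at the current magnification.
    """
    dx = marker_cx - frame_w / 2.0        # horizontal pixel difference
    dy = marker_cy - frame_h / 2.0        # vertical pixel difference
    pan_deg = dx * fov_h_deg / frame_w    # degrees-per-pixel times offset
    tilt_deg = -dy * fov_v_deg / frame_h  # image y grows downward
    return pan_deg, tilt_deg
```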
In a first aspect of the present application, as an optional implementation manner, the determining the imaging size of the marker according to the imaging width and the imaging height includes:
comparing the imaging width and the imaging height with a preset width interval and a preset height interval respectively to obtain a comparison result;
adjusting the camera magnification of the ranging device according to the comparison result, so that the video picture is scaled until the imaging width and the imaging height meet preset conditions;
and determining the imaging size of the marker according to the imaging width and the imaging height which meet preset conditions.
In this optional embodiment, the imaging width and the imaging height are compared with the preset width interval and the preset height interval respectively to obtain a comparison result; the camera magnification of the ranging device is then adjusted according to the comparison result, scaling the video picture until the imaging width and the imaging height meet the preset conditions; finally the imaging size of the marker can be determined from the imaging width and the imaging height that meet the preset conditions.
In a first aspect of the present application, as an optional implementation manner, the adjusting of the camera magnification of the ranging device according to the comparison result, so that the imaging width and the imaging height meet preset conditions, includes:
when the imaging width and the imaging height are respectively smaller than the preset width interval and the preset height interval, calculating a camera magnification and controlling the camera of the ranging device to zoom in, so as to enlarge the video picture;
and when the imaging width and the imaging height are respectively larger than the preset width interval and the preset height interval, calculating a camera magnification and controlling the camera to zoom out, so as to shrink the video picture.
In this alternative embodiment, zooming the video picture in or out drives the imaging width and the imaging height to satisfy the preset conditions.
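The zoom logic above amounts to a small feedback rule: enlarge the picture while the marker images below the preset intervals, shrink it while the marker images above them. A sketch of one such rule follows; the proportional factors are an assumption, since the patent only states that a magnification is calculated.

```python
# Sketch: one feedback step that nudges the camera magnification until the
# marker's imaging width/height fall inside the preset intervals.
def adjust_magnification(zoom, img_w, img_h, w_min, w_max, h_min, h_max):
    if img_w < w_min and img_h < h_min:
        # Marker images too small: zoom in. The conservative (smaller) factor
        # avoids overshooting; a further iteration can finish the job.
        return zoom * min(w_min / img_w, h_min / img_h)
    if img_w > w_max and img_h > h_max:
        # Marker images too large: zoom out, again conservatively.
        return zoom * max(w_max / img_w, h_max / img_h)
    return zoom  # imaging size already meets the preset conditions
```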
In a first aspect of the present application, as an optional implementation manner, after the spatial coordinates of the marker are calculated according to the first current angle of the pan-tilt, the current camera magnification and the imaging size of the marker, the method further includes:
receiving a second activation instruction, and converting from the target tracking state to a precise alignment state, wherein in the precise alignment state, the terminal device performs:
acquiring a video picture from the distance measuring equipment again;
calculating the imaging coordinates of a central point of the target object in the video picture according to a machine vision algorithm;
calculating the rotation angle of the pan-tilt according to the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by the rotation angle and the laser landing point of the ranging device falls on the center point of the target object;
acquiring a second current angle of the pan-tilt after rotation and a laser distance between the ranging device and the target object;
and calculating the space coordinate of the center point of the target object according to the second current angle and the laser distance.
In this optional embodiment, by entering the precise alignment state, the spatial coordinates of the center point of the target object are calculated from the second current angle and the laser distance, which further improves the measurement accuracy of those coordinates.
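Once the laser landing point sits on the target object's center, the spatial coordinates follow from a conventional spherical-to-Cartesian conversion of the pan-tilt angles and the laser distance. The sketch below is one standard formulation, not the patent's own formula; it takes the instrument station as the origin and measures tilt from the horizontal plane.

```python
import math

# Sketch: spatial coordinates of the target center from the pan-tilt's second
# current angle (pan, tilt) and the measured laser distance.
def spatial_coordinates(pan_deg, tilt_deg, laser_dist):
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    horiz = laser_dist * math.cos(tilt)  # projection onto the horizontal plane
    x = horiz * math.cos(pan)
    y = horiz * math.sin(pan)
    z = laser_dist * math.sin(tilt)      # height relative to the station
    return x, y, z
```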
In a first aspect of the present application, as an optional implementation manner, after the spatial coordinates of the marker are calculated according to the first current angle of the pan-tilt, the current camera magnification of the ranging device and the imaging size of the marker, the method further includes:
calculating a difference between the spatial coordinates of the marker and the spatial coordinates of the target point;
and generating traveling direction information according to the difference between the spatial coordinates of the marker and the spatial coordinates of the target point, so as to prompt the traveling direction information to the user.
In this optional embodiment, the traveling direction information is generated from the difference between the spatial coordinates of the marker and those of the target point, so that the user can be guided toward the target point.
In a first aspect of the present application, as an optional implementation manner, before the receiving of the first activation instruction and the conversion from the target detection state to the target tracking state, the method further includes:
receiving a third activation instruction, and switching from a standby state to the target detection state, wherein in the target detection state, the terminal device executes:
acquiring a video picture from the ranging device, and detecting whether the marker is present in the video picture; if so, entering the target tracking state.
In this optional embodiment, the terminal device enters the target tracking state only after the marker has first been detected in the video picture; otherwise it can remain in the standby state, which reduces the power consumption of the terminal device.
The second aspect of the present application discloses a remote lofting device based on machine vision, the device is applied to a terminal device, and the device comprises:
the receiving module is used for receiving the first activation instruction;
the state switching module is used for switching from the target detection state to the target tracking state;
the acquisition module is used for acquiring video pictures from the ranging device;
the identification module is used for detecting whether the video picture contains a marker or not according to a machine vision algorithm;
the first calculating module is used for calculating the imaging width and the imaging height of the marker in the video picture and the imaging coordinate of the first center point of the marker in the video picture when the identification module detects that the marker is contained in the video picture;
the first determining module is used for determining a first current angle of the pan-tilt according to the first center point imaging coordinates;
a second determining module, configured to determine an imaging size of the marker according to the imaging width and the imaging height;
the second calculation module is used for calculating the spatial coordinates of the marker according to the first current angle of the pan-tilt, the current camera magnification and the imaging size of the marker;
the receiving module is also used for receiving a second activation instruction;
the state switching module is further used for switching from the target tracking state to a precise alignment state, wherein in the precise alignment state:
the acquisition module is further used for acquiring a video picture from the distance measuring equipment again;
the first calculation module is further used for calculating the center point imaging coordinates of the target object in the video picture according to a machine vision algorithm;
the first determining module is further configured to calculate a rotation angle of the pan-tilt according to the center point imaging coordinate and a preset device parameter, so that the pan-tilt rotates according to the rotation angle, and a laser landing point of the ranging device falls on a center point of the target object;
the acquisition module is further used for acquiring a second current angle of the pan-tilt after rotation and a laser distance between the ranging device and the target object;
the second calculation module is further configured to calculate, according to the second current angle and the laser distance, a spatial coordinate of a center point of the target object.
The machine vision-based remote lofting device of the application executes the machine vision-based remote lofting method: upon receiving a first activation instruction, it acquires a video picture from the ranging device and detects whether the video picture contains the marker according to a machine vision algorithm; when the marker is detected, it calculates the imaging width and imaging height of the marker in the video picture and the first center point imaging coordinates of the marker in the video picture; finally it determines the first current angle of the pan-tilt from the first center point imaging coordinates, determines the imaging size of the marker from the imaging width and imaging height, and calculates the spatial coordinates of the marker from the first current angle of the pan-tilt, the current camera magnification of the ranging device and the imaging size of the marker, thereby completing tracking of the marker.
On the other hand, after tracking of the marker is completed, the terminal device may enter the precise alignment state. In this state, the terminal device again acquires a video picture from the ranging device, calculates the center point imaging coordinates of the target object in the video picture according to the machine vision algorithm, and calculates the rotation angle of the pan-tilt from the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by that angle and the laser landing point of the ranging device falls on the center point of the target object. The terminal device then obtains the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object, and calculates the spatial coordinates of the center point of the target object from the second current angle and the laser distance, so that precise positioning is accomplished by means of the target object.
Compared with manual lofting in the prior art, the embodiment of the application uses a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning against the target object, so the surveyor no longer needs to run back and forth, the dependence on the surveyor's professional skill is low, and the influence of operator error on measurement precision is reduced; the embodiment therefore has the advantages of high measurement efficiency and high measurement precision. Meanwhile, because the target object itself is inconspicuous to a machine vision algorithm and therefore hard to track, the embodiment of the application tracks stably by means of the easily recognized marker. On the other hand, compared with automatic lofting equipment in the prior art, the embodiment of the application has the advantage of low cost.
A third aspect of the present application discloses a terminal device, comprising:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform the machine vision based remote lofting method disclosed in the first aspect of the present application.
By receiving a first activation instruction, the terminal device of the application can acquire a video picture from the ranging device and detect whether the video picture contains the marker according to a machine vision algorithm; when the marker is detected, it calculates the imaging width and imaging height of the marker in the video picture and the first center point imaging coordinates of the marker in the video picture; finally it determines the first current angle of the pan-tilt from the first center point imaging coordinates, determines the imaging size of the marker from the imaging width and imaging height, and calculates the spatial coordinates of the marker from the first current angle of the pan-tilt, the current camera magnification and the imaging size of the marker, thereby completing tracking of the marker.
On the other hand, after tracking of the marker is completed, the terminal device may enter the precise alignment state. In this state, the terminal device again acquires a video picture from the ranging device, calculates the center point imaging coordinates of the target object in the video picture according to the machine vision algorithm, and calculates the rotation angle of the pan-tilt from the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by that angle and the laser landing point of the ranging device falls on the center point of the target object. The terminal device then obtains the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object, and calculates the spatial coordinates of the center point of the target object from the second current angle and the laser distance, so that precise positioning is accomplished by means of the target object.
Compared with manual lofting in the prior art, the embodiment of the application uses a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning against the target object, so the surveyor no longer needs to run back and forth, the dependence on the surveyor's professional skill is low, and the influence of operator error on measurement precision is reduced; the embodiment therefore has the advantages of high measurement efficiency and high measurement precision. Meanwhile, because the target object itself is inconspicuous to a machine vision algorithm and therefore hard to track, the embodiment of the application tracks stably by means of the easily recognized marker. On the other hand, compared with automatic lofting equipment in the prior art, the embodiment of the application has the advantage of low cost.
A fourth aspect of the application discloses a storage medium storing computer instructions that, when invoked, are used to perform the machine vision-based remote lofting method disclosed in the first aspect of the application.
The storage medium of the application stores instructions for executing the machine vision-based remote lofting method: a video picture is acquired from the ranging device upon receipt of a first activation instruction; whether the video picture contains the marker is then detected according to a machine vision algorithm; when the marker is detected, the imaging width and imaging height of the marker in the video picture and the first center point imaging coordinates of the marker in the video picture are calculated; finally the first current angle of the pan-tilt is determined from the first center point imaging coordinates, the imaging size of the marker is determined from the imaging width and imaging height, and the spatial coordinates of the marker are calculated from the first current angle of the pan-tilt, the current camera magnification of the ranging device and the imaging size of the marker, completing tracking of the marker.
On the other hand, after tracking of the marker is completed, the terminal device may enter the precise alignment state. In this state, the terminal device again acquires a video picture from the ranging device, calculates the center point imaging coordinates of the target object in the video picture according to the machine vision algorithm, and calculates the rotation angle of the pan-tilt from the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by that angle and the laser landing point of the ranging device falls on the center point of the target object. The terminal device then obtains the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object, and calculates the spatial coordinates of the center point of the target object from the second current angle and the laser distance, so that precise positioning is accomplished by means of the target object.
Compared with manual lofting in the prior art, the embodiment of the application uses a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning against the target object, so the surveyor no longer needs to run back and forth, the dependence on the surveyor's professional skill is low, and the influence of operator error on measurement precision is reduced; the embodiment therefore has the advantages of high measurement efficiency and high measurement precision. Meanwhile, because the target object itself is inconspicuous to a machine vision algorithm and therefore hard to track, the embodiment of the application tracks stably by means of the easily recognized marker. On the other hand, compared with automatic lofting equipment in the prior art, the embodiment of the application has the advantage of low cost.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be considered limiting of the scope; a person skilled in the art can obtain other related drawings from these drawings without creative effort.
FIG. 1 is a schematic flow chart of a remote lofting method based on machine vision according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a remote lofting device based on machine vision according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
Example 1
Referring to fig. 1, fig. 1 is a schematic flow chart of a remote lofting method based on machine vision according to an embodiment of the present application, where the method is applied to a terminal device. As shown in fig. 1, the method of the embodiment of the present application includes the steps of:
101. receiving a first activation instruction, and converting from a target detection state to a target tracking state, wherein in the target tracking state the terminal device executes the following steps:
102. acquiring a video picture from a ranging device;
103. detecting whether the video picture contains a marker according to a machine vision algorithm;
104. when the video picture is detected to contain the marker, calculating an imaging width and an imaging height of the marker in the video picture and first center point imaging coordinates of the marker in the video picture;
105. determining a first current angle of the pan-tilt according to the first center point imaging coordinates;
106. determining the imaging size of the marker according to the imaging width and the imaging height;
107. calculating the spatial coordinates of the marker according to the first current angle of the pan-tilt, the current camera magnification of the ranging device and the imaging size of the marker;
108. receiving a second activation instruction, and converting from the target tracking state to a precise alignment state, wherein in the precise alignment state the terminal device executes:
109. acquiring a video picture from the ranging device again;
110. calculating the center point imaging coordinates of the target object in the video picture according to a machine vision algorithm;
111. calculating the rotation angle of the pan-tilt according to the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by the rotation angle and the laser landing point of the ranging device falls on the center point of the target object;
112. acquiring a second current angle of the pan-tilt after rotation and a laser distance between the ranging device and the target object;
113. and calculating the spatial coordinates of the center point of the target object according to the second current angle and the laser distance.
In an embodiment of the present application, optionally, the marker is one of a measurement operator, a reflective vest, and a balloon.
In the embodiment of the application, optionally, the marker may also be the measured object itself, or a specific gesture or body posture of the measurement operator.
In the embodiment of the application, a specific instruction sent by the mobile intelligent terminal or other equipment can serve as an activation instruction.
In the embodiment of the application, powering on the terminal device, or the computing unit connecting to the ranging device through a wireless network, can also serve as an activation instruction.
In the embodiment of the application, the terminal device recognizing a preset specific object, gesture, body posture or illumination change (such as a strobe) in the video picture can also serve as an activation instruction.
In the embodiment of the present application, optionally, the standby time of the terminal device reaching a preset time threshold may also serve as an activation instruction.
In the embodiment of the application, it should be noted that a computing unit is installed in the terminal device, and this computing unit executes the remote lofting method based on machine vision disclosed in the embodiment of the application.
In the embodiment of the application, the ranging device is mounted on the pan-tilt and rotates as the pan-tilt rotates.
In the embodiment of the application, the mobile intelligent terminal can be a mobile phone, a tablet, a notebook computer or another mobile communication terminal; the embodiment of the application is not limited in this respect.
In the embodiment of the application, the terminal device can communicate remotely with the mobile intelligent terminal through a wireless network. Further, the surveyor can send an activation instruction to the terminal device through the mobile intelligent terminal, and the terminal device can display measurement results on the mobile intelligent terminal or feed interaction results back to the surveyor by voice prompt.
In the embodiment of the application, the marker can be the measured object itself or another object such as a vest decorated with a specific pattern; further, the marker can also be a specific gesture or body posture of the surveyor.
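For a conspicuous marker such as a reflective vest, the machine vision step can be as simple as color thresholding plus a bounding box; a production system might instead use a trained detector, but the outputs the method needs (imaging width, imaging height and the first center point imaging coordinates) are the same. A hedged OpenCV sketch, with the HSV range purely illustrative:

```python
import cv2
import numpy as np

# Sketch: detect a high-visibility marker in one video picture and return the
# imaging width, imaging height and center point imaging coordinates.
def detect_marker(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Illustrative HSV range for a fluorescent yellow-green vest.
    mask = cv2.inRange(hsv, np.array([30, 80, 120]), np.array([50, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
    if not contours:
        return None  # no marker in this video picture
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    center = (x + w / 2.0, y + h / 2.0)  # first center point imaging coordinates
    return w, h, center
```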
In the embodiment of the application, the naming distinction among the first activation instruction, the second activation instruction and the third activation instruction is merely for convenience in describing the instructions that the surveyor inputs to the terminal device at different stages.
In the embodiment of the application, the terminal device can be divided into a standby state, a target detection state, a target tracking state and a precise alignment state according to what it is executing, and these states are switched under specified conditions. It should be noted that this division of states is intended to help the surveyor intuitively understand the use state of the terminal device, not to limit absolutely which state a given step of the terminal device belongs to.
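These four states and their triggering instructions form a small state machine. The sketch below only organizes the transitions the embodiments name (standby to target detection to target tracking to precise alignment); the instruction labels are placeholders for whatever concrete trigger, such as a command from the mobile intelligent terminal or a recognized gesture, is in use.

```python
from enum import Enum, auto

class State(Enum):
    STANDBY = auto()
    TARGET_DETECTION = auto()
    TARGET_TRACKING = auto()
    PRECISE_ALIGNMENT = auto()

# Transitions named in the embodiments: each numbered activation instruction
# advances the terminal device to the next state.
TRANSITIONS = {
    (State.STANDBY, "third_activation"): State.TARGET_DETECTION,
    (State.TARGET_DETECTION, "first_activation"): State.TARGET_TRACKING,
    (State.TARGET_TRACKING, "second_activation"): State.PRECISE_ALIGNMENT,
}

def next_state(state, instruction):
    return TRANSITIONS.get((state, instruction), state)  # ignore unknown inputs
```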
It can be seen that, in the machine vision remote lofting method provided by the embodiment of the application, the terminal device receives the first activation instruction, acquires the video picture from the ranging device, and detects whether the video picture contains the marker according to the machine vision algorithm. When the marker is detected, the imaging width and imaging height of the marker in the video picture and the first center point imaging coordinates of the marker in the video picture are calculated; the first current angle of the pan-tilt is then determined from the first center point imaging coordinates, the imaging size of the marker is determined from the imaging width and imaging height, and the spatial coordinates of the marker are calculated from the first current angle of the pan-tilt, the current camera magnification and the imaging size of the marker, finally completing tracking of the marker.
On the other hand, after tracking of the marker is completed, the terminal device may enter the precise alignment state. In this state, the terminal device again acquires a video picture from the ranging device, calculates the center point imaging coordinates of the target object in the video picture according to the machine vision algorithm, and calculates the rotation angle of the pan-tilt from the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by that angle and the laser landing point of the ranging device falls on the center point of the target object. The terminal device then obtains the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object, and calculates the spatial coordinates of the center point of the target object from the second current angle and the laser distance, so that precise positioning is accomplished by means of the target object.
Compared with manual lofting in the prior art, the embodiment of the application uses a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning against the target object, so the surveyor no longer needs to run back and forth, the dependence on the surveyor's professional skill is low, and the influence of operator error on measurement precision is reduced; the embodiment therefore has the advantages of high measurement efficiency and high measurement precision. Meanwhile, because the target object itself is inconspicuous to a machine vision algorithm and therefore hard to track, the embodiment of the application tracks stably by means of the easily recognized marker. On the other hand, compared with automatic lofting equipment in the prior art, the embodiment of the application has the advantage of low cost.
In this embodiment of the present application, as an optional implementation manner, the first current angle of the pan-tilt is determined according to the first center point imaging coordinates through the following sub-steps:
comparing the first center point imaging coordinates with the center point coordinates of the video picture to obtain the pixel difference between the two;
calculating the horizontal angle and the vertical angle through which the pan-tilt needs to rotate according to the pixel difference;
and driving the pan-tilt to rotate by the horizontal angle and the vertical angle, so that the center point of the video picture is aligned with the center point of the marker, and taking the angle of the pan-tilt after rotation as the first current angle of the pan-tilt.
In this optional embodiment, the pixel difference between the first center point imaging coordinates and the center point coordinates of the video picture is obtained by comparing the two; the horizontal and vertical angles through which the pan-tilt needs to rotate are then calculated from the pixel difference; finally the pan-tilt is driven to rotate by those angles, so that the center point of the video picture is aligned with the center point of the marker, which yields the first current angle of the pan-tilt.
In an embodiment of the present application, as an optional implementation manner, determining an imaging size of the marker according to the imaging width and the imaging height includes the following sub-steps:
comparing the imaging width and the imaging height with a preset width interval and a preset height interval respectively to obtain a comparison result;
adjusting the camera magnification of the ranging device according to the comparison result, so that the video picture is scaled until the imaging width and the imaging height meet preset conditions;
and determining the imaging size of the marker according to the imaging width and the imaging height that meet the preset conditions.
In this optional embodiment, the imaging width and the imaging height are compared with the preset width interval and the preset height interval respectively to obtain a comparison result; the camera magnification of the ranging device is then adjusted according to the comparison result, scaling the video picture until the imaging width and the imaging height meet the preset conditions; finally the imaging size of the marker can be determined from the imaging width and the imaging height that meet the preset conditions.
In an embodiment of the present application, as an optional implementation manner, adjusting the camera magnification of the ranging device according to the comparison result, so that the imaging width and the imaging height meet preset conditions, includes the following sub-steps:
when the imaging width and the imaging height are respectively smaller than the preset width interval and the preset height interval, calculating a camera magnification and controlling the camera of the ranging device to zoom in, so as to enlarge the video picture;
when the imaging width and the imaging height are respectively larger than the preset width interval and the preset height interval, calculating a camera magnification and controlling the camera to zoom out, so as to shrink the video picture.
In this alternative embodiment, zooming the video picture in or out drives the imaging width and the imaging height to satisfy the preset conditions.
In the embodiment of the present application, as an alternative implementation manner, after step 107, in which the spatial coordinates of the marker are calculated according to the first current angle of the pan-tilt, the current camera magnification of the ranging device and the imaging size of the marker, the method provided by the embodiment of the application further includes the following steps:
calculating a difference between the spatial coordinates of the marker and the spatial coordinates of the target point;
and generating traveling direction information according to the difference value between the spatial coordinates of the marker and the spatial coordinates of the target point so as to prompt the traveling direction information to the user.
In this optional embodiment, the traveling direction information may be generated according to a difference between the spatial coordinates of the marker and the spatial coordinates of the target point, and thus the traveling direction information may be prompted to the user.
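A sketch of how such a prompt could be produced from the coordinate difference; the arrival tolerance and the bearing-style wording are assumptions, since the patent only states that traveling direction information is generated from the difference.

```python
import math

# Sketch: turn the difference between the marker's and the target point's
# spatial coordinates into a human-readable traveling direction prompt.
def travel_prompt(marker_xyz, target_xyz, arrived_tol=0.05):
    dx = target_xyz[0] - marker_xyz[0]
    dy = target_xyz[1] - marker_xyz[1]
    dist = math.hypot(dx, dy)            # horizontal distance still to walk
    if dist < arrived_tol:               # within 5 cm: treat as on the point
        return "On the target point"
    bearing = math.degrees(math.atan2(dy, dx)) % 360.0
    return f"Move {dist:.2f} m toward bearing {bearing:.0f} degrees"
```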
In an embodiment of the present application, as an optional implementation manner, before receiving the first activation instruction and converting from the target detection state to the target tracking state, the method further includes the steps of:
receiving a third activation instruction, and switching from a standby state to the target detection state, wherein in the target detection state, the terminal device executes:
acquiring a video picture from the ranging device, detecting whether the marker is present in the video picture, and if so, entering the target tracking state.
In this optional embodiment, by first detecting whether the marker is present in the video picture, the terminal device enters the target tracking state only when the marker is found and otherwise remains in the standby state, which reduces the power consumption of the terminal device.
Example two
Referring to fig. 2, fig. 2 is a schematic structural diagram of a remote lofting device based on machine vision according to an embodiment of the present application, where the device is applied to a terminal device. As shown in fig. 2, the apparatus includes:
a receiving module 201, configured to receive a first activation instruction;
a state switching module 202 for switching from the target detection state to the target tracking state;
an acquisition module 203, configured to acquire a video frame from a ranging apparatus;
an identification module 204, configured to detect whether the video picture contains a marker according to a machine vision algorithm;
a first calculating module 205, configured to calculate, when the identification module detects that the video picture contains the marker, an imaging width and an imaging height of the marker in the video picture and first center point imaging coordinates of the marker in the video picture;
A first determining module 206, configured to determine a first current angle of the pan-tilt according to the first center point imaging coordinate;
a second determining module 207 for determining an imaging size of the marker according to the imaging width and the imaging height;
the second calculation module 208 is configured to calculate the spatial coordinates of the marker according to the first current angle of the pan-tilt, the current camera magnification and the imaging size of the marker;
the receiving module 201 is further configured to receive a second activation instruction;
the state switching module 202 is further configured to switch from the target tracking state to a precise alignment state, wherein in the precise alignment state:
the acquiring module 203 is further configured to acquire a video frame from the ranging device again;
the first calculation module 205 is further configured to calculate a center point imaging coordinate of the target object in the video frame according to a machine vision algorithm;
the first determining module 206 is further configured to calculate a rotation angle of the pan-tilt according to the center point imaging coordinate and a preset device parameter, so that the pan-tilt rotates according to the rotation angle, and a laser landing point of the ranging device falls on a center point of the target object;
the obtaining module 203 is further configured to obtain a second current angle of the pan-tilt after rotation and a laser distance between the ranging device and the target object;
the second calculation module 208 is further configured to calculate the spatial coordinates of the center point of the target object according to the second current angle and the laser distance.
In an embodiment of the present application, optionally, the marker is one of a measurement operator, a reflective vest, and a balloon.
In the embodiment of the application, optionally, the marker may also be the measured object itself, or a specific gesture or body posture of the measurement operator.
In the embodiment of the application, a specific instruction sent by the mobile intelligent terminal or other equipment can serve as an activation instruction.
In the embodiment of the application, powering on the terminal device, or the computing unit connecting to the ranging device through a wireless network, can also serve as an activation instruction.
In the embodiment of the application, the terminal device recognizing a preset specific object, gesture, body posture or illumination change (such as a strobe) in the video picture can also serve as an activation instruction.
In the embodiment of the present application, optionally, the standby time of the terminal device reaching a preset time threshold may also serve as an activation instruction.
In the embodiment of the application, it should be noted that a computing unit is installed in the terminal device, and this computing unit executes the remote lofting method based on machine vision disclosed in the embodiment of the application.
In the embodiment of the application, the ranging device is mounted on the pan-tilt and rotates as the pan-tilt rotates.
In the embodiment of the application, the mobile intelligent terminal can be a mobile phone, a tablet, a notebook computer or another mobile communication terminal; the embodiment of the application is not limited in this respect.
In the embodiment of the application, the terminal device can communicate remotely with the mobile intelligent terminal through a wireless network. Further, the surveyor can send an activation instruction to the terminal device through the mobile intelligent terminal, and the terminal device can display measurement results on the mobile intelligent terminal or feed interaction results back to the surveyor by voice prompt.
In the embodiment of the application, the naming distinction among the first activation instruction, the second activation instruction and the third activation instruction is merely for convenience in describing the instructions that the surveyor inputs to the terminal device at different stages.
In the embodiment of the application, the terminal device can be divided into a standby state, a target detection state, a target tracking state and a precise alignment state according to what it is executing, and these states are switched under specified conditions. It should be noted that this division of states is intended to help the surveyor intuitively understand the use state of the terminal device, not to limit absolutely which state a given step of the terminal device belongs to.
It can be seen that, by executing the machine vision-based remote lofting method, the machine vision-based remote lofting device of the embodiment of the application acquires a video picture from the ranging device upon receiving the first activation instruction, then detects whether the video picture contains the marker according to the machine vision algorithm. When the marker is detected, it calculates the imaging width and imaging height of the marker in the video picture and the first center point imaging coordinates of the marker in the video picture; it then determines the first current angle of the pan-tilt from the first center point imaging coordinates, determines the imaging size of the marker from the imaging width and imaging height, calculates the spatial coordinates of the marker from the first current angle of the pan-tilt, the current camera magnification of the ranging device and the imaging size of the marker, and finally completes tracking of the marker.
On the other hand, after tracking of the marker is completed, the terminal device may enter the precise alignment state. In this state, the terminal device again acquires a video picture from the ranging device, calculates the center point imaging coordinates of the target object in the video picture according to the machine vision algorithm, and calculates the rotation angle of the pan-tilt from the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by that angle and the laser landing point of the ranging device falls on the center point of the target object. The terminal device then obtains the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object, and calculates the spatial coordinates of the center point of the target object from the second current angle and the laser distance, so that precise positioning is accomplished by means of the target object.
Compared with manual lofting in the prior art, the embodiment of the application uses a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning against the target object, so the surveyor no longer needs to run back and forth, the dependence on the surveyor's professional skill is low, and the influence of operator error on measurement precision is reduced; the embodiment therefore has the advantages of high measurement efficiency and high measurement precision. Meanwhile, because the target object itself is inconspicuous to a machine vision algorithm and therefore hard to track, the embodiment of the application tracks stably by means of the easily recognized marker. On the other hand, compared with automatic lofting equipment in the prior art, the embodiment of the application has the advantage of low cost.
In this embodiment of the present application, as an optional implementation manner, the first determining module 206 determines the first current angle of the pan-tilt according to the first center point imaging coordinates in the following specific manner:
comparing the first center point imaging coordinates with the center point coordinates of the video picture to obtain the pixel difference between the two;
calculating the horizontal angle and the vertical angle through which the pan-tilt needs to rotate according to the pixel difference;
and driving the pan-tilt to rotate by the horizontal angle and the vertical angle, so that the center point of the video picture is aligned with the center point of the marker, and taking the angle of the pan-tilt after rotation as the first current angle of the pan-tilt.
In this optional embodiment, the pixel difference between the first center point imaging coordinates and the center point coordinates of the video picture is obtained by comparing the two; the horizontal and vertical angles through which the pan-tilt needs to rotate are then calculated from the pixel difference; finally the pan-tilt is driven to rotate by those angles, so that the center point of the video picture is aligned with the center point of the marker, which yields the first current angle of the pan-tilt.
In this embodiment of the application, as an optional implementation, the second determining module 207 determines the imaging size of the marker according to the imaging width and the imaging height in the following manner:
comparing the imaging width and the imaging height with a preset width interval and a preset height interval, respectively, to obtain a comparison result;
adjusting the camera magnification of the ranging device according to the comparison result, so that by rescaling the video frame the imaging width and the imaging height come to satisfy preset conditions;
and determining the imaging size of the marker according to the imaging width and the imaging height that satisfy the preset conditions.
In this optional embodiment, the imaging width and the imaging height are first compared with the preset width and height intervals to obtain a comparison result; the camera magnification of the ranging device is then adjusted according to that result, rescaling the video frame until the imaging width and height satisfy the preset conditions; finally, the imaging size of the marker is determined from the width and height that satisfy those conditions.
In this embodiment of the application, as an optional implementation, the second determining module 207 adjusts the camera magnification of the ranging device according to the comparison result, so that the imaging width and the imaging height satisfy the preset conditions, in the following manner:
when the imaging width and the imaging height are smaller than the preset width interval and the preset height interval respectively, calculating a camera magnification and controlling the camera of the ranging device to zoom in, so as to enlarge the video frame;
and when the imaging width and the imaging height are larger than the preset width interval and the preset height interval respectively, calculating a camera magnification and controlling the camera to zoom out, so as to shrink the video frame.
In this optional embodiment, zooming the video frame in or out in this way brings the imaging width and the imaging height into the preset conditions.
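A sketch of this zoom-adjustment logic, assuming the imaged size of the marker scales roughly linearly with magnification; the `camera` driver object, its `magnification` attribute and `set_magnification()` method, and the interval bounds are all placeholders, since the patent leaves the camera interface unspecified.

```python
def adjust_magnification(camera, width_px, height_px,
                         width_range=(180, 260), height_range=(180, 260)):
    """Drive the camera zoom until the marker's imaged width and height
    fall inside the preset intervals (placeholder bounds, in pixels)."""
    w_lo, w_hi = width_range
    h_lo, h_hi = height_range
    if width_px < w_lo and height_px < h_lo:
        # imaging too small: zoom in far enough that both dimensions
        # reach their lower bounds (imaged size ~ linear in magnification)
        factor = max(w_lo / width_px, h_lo / height_px)
        camera.set_magnification(camera.magnification * factor)
    elif width_px > w_hi and height_px > h_hi:
        # imaging too large: zoom out until both dimensions fall
        # below their upper bounds
        factor = min(w_hi / width_px, h_hi / height_px)
        camera.set_magnification(camera.magnification * factor)
    # otherwise the preset conditions are already met; leave the zoom alone
```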
In the embodiment of the application, as an optional implementation, the apparatus further includes a third calculation module and a generation module, where:
the third calculation module is used for calculating the difference between the spatial coordinates of the marker and the spatial coordinates of the target point;
and the generation module is used for generating traveling direction information according to that difference, so as to prompt the user with the traveling direction.
In this optional embodiment, traveling direction information can be generated from the difference between the spatial coordinates of the marker and those of the target point, and then prompted to the user, guiding the surveyor toward the target point.
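A minimal sketch of such a prompt, assuming the X and Y axes of the spatial coordinates correspond to east and north; the axis conventions and message wording are assumptions, as the patent only states that direction information is generated from the coordinate difference.

```python
def travel_direction(marker_xyz, target_xyz):
    """Build a human-readable travel prompt from the coordinate difference
    (assumed axes: x = east, y = north, units in meters)."""
    dx = target_xyz[0] - marker_xyz[0]
    dy = target_xyz[1] - marker_xyz[1]
    east_west = f"{abs(dx):.2f} m {'east' if dx >= 0 else 'west'}"
    north_south = f"{abs(dy):.2f} m {'north' if dy >= 0 else 'south'}"
    return f"Move {east_west} and {north_south} to reach the target point."
```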
In this embodiment of the application, as an optional implementation, the receiving module 201 is further configured to receive a third activation instruction, and the state switching module 202 is further configured to switch from the standby state to the target detection state, in which:
the obtaining module 203 is further configured to acquire a video frame from the ranging device, and the identifying module 204 is further configured to detect whether the video frame contains the marker and, if so, to control the state switching module 202 to enter the target tracking state.
In this optional embodiment, the video frame is first checked for the marker: if the marker is present, the terminal device enters the target tracking state; if not, it can return to the standby state, which reduces the power consumption of the terminal device.
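Taken together, the embodiments above amount to a small state machine (standby → target detection → target tracking → precise alignment). A sketch of the detection-to-tracking transition, where `detect_marker` stands in for the machine vision detector that the patent does not name:

```python
from enum import Enum, auto

class DeviceState(Enum):
    STANDBY = auto()
    TARGET_DETECTION = auto()
    TARGET_TRACKING = auto()
    PRECISE_ALIGNMENT = auto()

def step_detection(state, frame, detect_marker):
    """Stay in the low-power detection state until a marker is found in
    the frame, then switch to tracking; otherwise fall back to standby."""
    if state is not DeviceState.TARGET_DETECTION:
        return state
    return DeviceState.TARGET_TRACKING if detect_marker(frame) else DeviceState.STANDBY
```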
Example III
Referring to fig. 3, fig. 3 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 3, the terminal device includes:
a memory 301 storing executable program code;
a processor 302 coupled with the memory 301;
the processor 302 invokes the executable program code stored in the memory 301 to perform the machine vision based remote lofting method disclosed in the first embodiment of the present application.
According to this terminal device, upon receiving a first activation instruction, a video frame can be acquired from the ranging device, and whether the video frame contains a marker is detected by a machine vision algorithm. When the marker is detected, the imaging width and imaging height of the marker in the video frame and the first center point imaging coordinates of the marker are calculated. The first current angle of the pan-tilt is then determined from the first center point imaging coordinates, the imaging size of the marker is determined from the imaging width and height, and the spatial coordinates of the marker are calculated from the first current angle of the pan-tilt, the current camera magnification, and the imaging size of the marker, completing tracking of the marker.
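One plausible reading of the size-based part of this computation is a pinhole-model range estimate: with the marker's physical width known, its imaged width shrinks in inverse proportion to distance. The sketch below assumes a calibrated focal length in pixels at 1x zoom (`focal_px_at_1x`) that scales with magnification; the patent states only that angle, magnification, and imaging size are used, so this is an illustration, not the claimed formula.

```python
def marker_distance(real_width_m, imaged_width_px, focal_px_at_1x, magnification):
    """Pinhole-model range estimate: imaged width w = f * W / Z, so
    Z = f * W / w. focal_px_at_1x is an assumed calibration value."""
    focal_px = focal_px_at_1x * magnification  # zoom scales the effective focal length
    return focal_px * real_width_m / imaged_width_px
```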
On the other hand, after tracking of the marker is completed, the terminal device may enter a precise alignment state. In this state, the terminal device again acquires a video frame from the ranging device, calculates the center point imaging coordinates of the target object in the video frame according to the machine vision algorithm, and calculates a rotation angle of the pan-tilt from those coordinates and preset device parameters, so that the pan-tilt rotates by that angle and the laser spot of the ranging device falls on the center point of the target object. The terminal device then acquires the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object, and calculates the spatial coordinates of the center point of the target object from the second current angle and the laser distance, so that precise positioning is accomplished by means of the target object.
Compared with manual lofting in the prior art, the embodiment of the application uses a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning on the target object. The surveyor therefore no longer needs to walk back and forth repeatedly, the dependence on the surveyor's skill is low, and the impact of operator error on measurement accuracy is reduced, so the embodiment achieves both high measurement efficiency and high measurement precision. Meanwhile, because the target object itself is inconspicuous to a machine vision algorithm and therefore hard to track, the embodiment tracks stably through an easily recognized marker instead. Compared with automatic lofting equipment in the prior art, the embodiment also has the advantage of low cost.
Example IV
The embodiment of the application discloses a storage medium storing computer instructions which, when invoked, execute the machine vision based remote lofting method disclosed in the first embodiment of the application.
According to this storage medium, by executing the machine vision based remote lofting method, a video frame can be acquired from the ranging device upon receiving a first activation instruction, and whether the video frame contains a marker is detected by a machine vision algorithm. When the marker is detected, the imaging width and imaging height of the marker in the video frame and the first center point imaging coordinates of the marker are calculated. The first current angle of the pan-tilt is then determined from the first center point imaging coordinates, the imaging size of the marker is determined from the imaging width and height, and the spatial coordinates of the marker are calculated from the first current angle of the pan-tilt, the current camera magnification, and the imaging size of the marker, completing tracking of the marker.
On the other hand, after tracking of the marker is completed, the terminal device may enter a precise alignment state. In this state, the terminal device again acquires a video frame from the ranging device, calculates the center point imaging coordinates of the target object in the video frame according to the machine vision algorithm, and calculates a rotation angle of the pan-tilt from those coordinates and preset device parameters, so that the pan-tilt rotates by that angle and the laser spot of the ranging device falls on the center point of the target object. The terminal device then acquires the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object, and calculates the spatial coordinates of the center point of the target object from the second current angle and the laser distance, so that precise positioning is accomplished by means of the target object.
Compared with manual lofting in the prior art, the embodiment of the application uses a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning on the target object. The surveyor therefore no longer needs to walk back and forth repeatedly, the dependence on the surveyor's skill is low, and the impact of operator error on measurement accuracy is reduced, so the embodiment achieves both high measurement efficiency and high measurement precision. Meanwhile, because the target object itself is inconspicuous to a machine vision algorithm and therefore hard to track, the embodiment tracks stably through an easily recognized marker instead. Compared with automatic lofting equipment in the prior art, the embodiment also has the advantage of low cost.
In the embodiments provided in the application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through communication interfaces, devices, or units, and may be electrical, mechanical, or take other forms.
Further, units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, the functional modules in the various embodiments of the application may be integrated together to form an independent part, each module may exist alone, or two or more modules may be integrated into one part.
It should be noted that, if implemented in the form of software functional modules and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In this document, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit its scope; various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the application shall fall within its protection scope.

Claims (10)

1. A machine vision based remote lofting method, wherein the method is applied to a terminal device, the method comprising:
receiving a first activation instruction, and switching from a target detection state to a target tracking state, wherein in the target tracking state the terminal device performs:
acquiring a video frame from a ranging device;
detecting, according to a machine vision algorithm, whether the video frame contains a marker;
when it is detected that the video frame contains the marker, calculating an imaging width and an imaging height of the marker in the video frame and first center point imaging coordinates of the marker in the video frame;
determining a first current angle of a pan-tilt according to the first center point imaging coordinates;
determining an imaging size of the marker according to the imaging width and the imaging height;
calculating spatial coordinates of the marker according to the first current angle of the pan-tilt, a current magnification of a camera of the ranging device, and the imaging size of the marker;
receiving a second activation instruction, and switching from the target tracking state to a precise alignment state, wherein in the precise alignment state the terminal device performs:
acquiring a video frame from the ranging device again;
calculating center point imaging coordinates of a target object in the video frame according to the machine vision algorithm;
calculating a rotation angle of the pan-tilt according to the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by the rotation angle and a laser spot of the ranging device falls on a center point of the target object;
acquiring a second current angle of the pan-tilt after rotation and a laser distance between the ranging device and the target object;
and calculating spatial coordinates of the center point of the target object according to the second current angle and the laser distance.
2. The method of claim 1, wherein the marker is one of a measurement operator, a reflective vest, and a balloon.
3. The method of claim 1, wherein determining the first current angle of the pan-tilt according to the first center point imaging coordinates comprises:
comparing the first center point imaging coordinates with center point coordinates of the video frame to obtain a pixel difference between them;
calculating, according to the pixel difference, a horizontal angle and a vertical angle through which the pan-tilt needs to rotate;
and driving the pan-tilt to rotate by the horizontal angle and the vertical angle, so that the center point of the video frame is aligned with the center point of the marker, and taking the angle of the pan-tilt after rotation as the first current angle of the pan-tilt.
4. The method of claim 1, wherein determining the imaging size of the marker according to the imaging width and the imaging height comprises:
comparing the imaging width and the imaging height with a preset width interval and a preset height interval, respectively, to obtain a comparison result;
adjusting a camera magnification of the ranging device according to the comparison result, so that by rescaling the video frame the imaging width and the imaging height of the marker satisfy preset conditions;
and determining the imaging size of the marker according to the imaging width and the imaging height that satisfy the preset conditions.
5. The method of claim 4, wherein adjusting the camera magnification of the ranging device according to the comparison result, so that the imaging width and the imaging height satisfy the preset conditions by rescaling the video frame, comprises:
when the imaging width and the imaging height are smaller than the preset width interval and the preset height interval respectively, calculating a camera magnification and controlling the camera of the ranging device to zoom in, so as to enlarge the video frame;
and when the imaging width and the imaging height are larger than the preset width interval and the preset height interval respectively, calculating a camera magnification and controlling the camera to zoom out, so as to shrink the video frame.
6. The method of claim 1, wherein after calculating the spatial coordinates of the marker according to the first current angle of the pan-tilt, the current magnification of the camera, and the imaging size of the marker, the method further comprises:
calculating a difference between the spatial coordinates of the marker and spatial coordinates of a target point;
and generating traveling direction information according to the difference between the spatial coordinates of the marker and the spatial coordinates of the target point, so as to prompt a user with the traveling direction information.
7. The method of claim 1, wherein before receiving the first activation instruction and switching from the target detection state to the target tracking state, the method further comprises:
receiving a third activation instruction, and switching from a standby state to the target detection state, wherein in the target detection state the terminal device performs:
acquiring a video frame from the ranging device, detecting whether the marker exists in the video frame, and if so, entering the target tracking state.
8. A remote lofting device based on machine vision, applied to a terminal device, the device comprising:
a receiving module, configured to receive a first activation instruction;
a state switching module, configured to switch from a target detection state to a target tracking state;
an acquisition module, configured to acquire a video frame from a ranging device;
an identification module, configured to detect, according to a machine vision algorithm, whether the video frame contains a marker;
a first calculation module, configured to calculate, when the identification module detects that the video frame contains the marker, an imaging width and an imaging height of the marker in the video frame and first center point imaging coordinates of the marker in the video frame;
a first determining module, configured to determine a first current angle of a pan-tilt according to the first center point imaging coordinates;
a second determining module, configured to determine an imaging size of the marker according to the imaging width and the imaging height;
and a second calculation module, configured to calculate spatial coordinates of the marker according to the first current angle of the pan-tilt, a current magnification of a camera, and the imaging size of the marker;
wherein the receiving module is further configured to receive a second activation instruction;
the state switching module is further configured to switch from the target tracking state to a precise alignment state, in which:
the acquisition module is further configured to acquire a video frame from the ranging device again;
the first calculation module is further configured to calculate center point imaging coordinates of a target object in the video frame according to the machine vision algorithm;
the first determining module is further configured to calculate a rotation angle of the pan-tilt according to the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by the rotation angle and a laser spot of the ranging device falls on a center point of the target object;
the acquisition module is further configured to acquire a second current angle of the pan-tilt after rotation and a laser distance between the ranging device and the target object;
and the second calculation module is further configured to calculate spatial coordinates of the center point of the target object according to the second current angle and the laser distance.
9. A terminal device, characterized in that the terminal device comprises:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform the machine vision based remote lofting method of any of claims 1-7.
10. A storage medium storing computer instructions which, when invoked, are operable to perform the machine vision based remote loft method of any one of claims 1-7.
CN202010622000.0A 2020-06-30 2020-06-30 Remote lofting method and device based on machine vision, terminal equipment and storage medium Active CN111783659B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010622000.0A CN111783659B (en) 2020-06-30 2020-06-30 Remote lofting method and device based on machine vision, terminal equipment and storage medium
PCT/CN2021/081145 WO2022001193A1 (en) 2020-06-30 2021-03-16 Method and apparatus for remote setting-out based on machine vision, and terminal device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010622000.0A CN111783659B (en) 2020-06-30 2020-06-30 Remote lofting method and device based on machine vision, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111783659A CN111783659A (en) 2020-10-16
CN111783659B true CN111783659B (en) 2023-10-20

Family

ID=72760547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010622000.0A Active CN111783659B (en) 2020-06-30 2020-06-30 Remote lofting method and device based on machine vision, terminal equipment and storage medium

Country Status (2)

Country Link
CN (1) CN111783659B (en)
WO (1) WO2022001193A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783659B (en) * 2020-06-30 2023-10-20 福建汇川物联网技术科技股份有限公司 Remote lofting method and device based on machine vision, terminal equipment and storage medium
CN116506735B (en) * 2023-06-21 2023-11-07 清华大学 Universal camera interference method and system based on active vision camera

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102445183A (en) * 2011-10-09 2012-05-09 福建汇川数码技术科技有限公司 Apparatus of ranging laser point of remote ranging system and positioning method based on paralleling of laser and camera
CN110332854A (en) * 2019-07-25 2019-10-15 深圳市恒天伟焱科技有限公司 Localization method, gun sight and the computer readable storage medium of object
CN110956642A (en) * 2019-12-03 2020-04-03 深圳市未来感知科技有限公司 Multi-target tracking identification method, terminal and readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11080880B2 (en) * 2017-08-25 2021-08-03 Maker Trading Pte Ltd Machine vision system and method for identifying locations of target elements
CN111783659B (en) * 2020-06-30 2023-10-20 福建汇川物联网技术科技股份有限公司 Remote lofting method and device based on machine vision, terminal equipment and storage medium

Also Published As

Publication number Publication date
CN111783659A (en) 2020-10-16
WO2022001193A1 (en) 2022-01-06

Similar Documents

Publication Publication Date Title
CN111783659B (en) Remote lofting method and device based on machine vision, terminal equipment and storage medium
US9491370B2 (en) Methods and apparatuses for providing guide information for a camera
CN103139463B (en) Method, system and mobile device for augmenting reality
CN106444777A (en) Robot automatic return charging method and system
CN100553299C (en) Face image detecting apparatus and control method thereof
CN101739567A (en) Terminal apparatus, display control method, and display control program
US11816924B2 (en) Method for behaviour recognition based on line-of-sight estimation, electronic equipment, and storage medium
CN105718052A (en) Instruction method and apparatus for correcting somatosensory interaction tracking failure
CN112162627A (en) Eyeball tracking method combined with head movement detection and related device
CN112492201A (en) Photographing method and device and electronic equipment
CN109015651A (en) A kind of visual processes integral system and its application method based on teaching machine
CN107911688A (en) A kind of homework of supplying power scene Synergistic method based on augmented reality device
CN109684935A (en) A kind of acquisition of high-precision 3D face, payment system and method
CN103900714A (en) Device and method for thermal image matching
CN103900711A (en) Infrared selecting device and infrared selecting method
JP2019188467A (en) Recording device, welding support device, recording method and program
CN115002443A (en) Image acquisition processing method and device, electronic equipment and storage medium
JP5222646B2 (en) Terminal device, display control method, and display control program
US20120223972A1 (en) Projecting system and method thereof
CN103813076A (en) Information processing method and electronic device
CN113518423A (en) Positioning method and device and electronic equipment
CN103900713A (en) Device and method for detecting thermal image
US20140062864A1 (en) Method and apparatus for extracting three-dimensional distance information from recognition target
KR20210022387A (en) Hole location updating device and operating method of hole location updating device
CN108415587A (en) Scribing line remote control and scribing line generation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant