WO2022001193A1 - Method and apparatus for remote setting-out based on machine vision, and terminal device and storage medium - Google Patents

Method and apparatus for remote setting-out based on machine vision, and terminal device and storage medium

Info

Publication number
WO2022001193A1
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
marker
center point
coordinates
target
Prior art date
Application number
PCT/CN2021/081145
Other languages
French (fr)
Chinese (zh)
Inventor
郑文
Original Assignee
福建汇川物联网技术科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 福建汇川物联网技术科技股份有限公司
Publication of WO2022001193A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/36 Videogrammetry, i.e. electronic processing of video signals from a single source or from different sources to give parallax or range information
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • the present application relates to the field of measurement, and in particular, to a machine vision-based remote stakeout method, device, terminal device, and storage medium.
  • The current mainstream stakeout method uses a total station and requires the close cooperation of two surveyors: the pole runner moves the prism pole to each key point under the operator's guidance, and the operator aims the laser of the total station precisely at the center of the prism pole to measure and obtain the spatial coordinates of the key point.
  • However, this measurement method can only be carried out by two surveyors working closely together, and the measurement accuracy is limited by the operator's professional skill, so it suffers from low measurement efficiency and low precision.
  • the embodiments of the present application provide a machine vision-based remote stakeout method, device, terminal device, and storage medium, so as to improve the efficiency and accuracy of stakeout.
  • the embodiment of the present application discloses a remote stakeout method based on machine vision, the method is applied to a terminal device, and the method includes the steps:
  • When it is detected that the video frame includes a marker, the imaging width and imaging height of the marker in the video frame and the imaging coordinates of the first center point of the marker are calculated;
  • the rotation angle of the gimbal is calculated according to the imaging coordinates of the center point and preset device parameters, so that the gimbal rotates by the rotation angle and the laser spot of the ranging device lands on the center point of the target object;
  • the spatial coordinates of the center point of the target object are calculated according to the second current angle and the laser distance.
  • The machine vision-based remote stakeout method of the present application receives a first activation instruction and acquires a video frame from a ranging device, and then detects, according to a machine vision algorithm, whether the video frame contains a marker. When a marker is detected in the video frame, the imaging width and imaging height of the marker in the video frame and the imaging coordinates of the marker's first center point in the video frame are calculated. The first current angle of the gimbal is then determined from the first center point imaging coordinates, the imaging size of the marker is determined from the imaging width and imaging height, and the spatial coordinates of the marker are calculated from the first current angle of the gimbal, the current magnification of the camera of the ranging device, and the imaging size of the marker, which completes the tracking of the marker.
  • After receiving the second activation instruction, the terminal device can enter the precise alignment state.
  • In the precise alignment state, the terminal device acquires a video frame from the ranging device again, calculates the imaging coordinates of the target's center point in the video frame according to the machine vision algorithm, and calculates the rotation angle of the gimbal from the center point imaging coordinates and the preset device parameters, so that the gimbal rotates by that angle and the laser spot of the ranging device lands on the center point of the target object; the second current angle after the gimbal rotates and the laser distance between the ranging device and the target are then obtained.
  • The spatial coordinates of the target's center point are calculated from the second current angle and the laser distance, so that the terminal device completes precise positioning based on the target object.
  • The embodiments of the present application use a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning based on the target object, so surveyors do not need to move back and forth repeatedly and the method depends little on their surveying skill, which reduces the influence of operator error on measurement accuracy. The embodiments of the present application therefore offer high measurement efficiency and high measurement accuracy.
  • The embodiments of the present application can track the target stably by using an easily recognizable marker.
  • the embodiment of the present application has the advantage of low cost.
  • The marker is one of a measurement operator, a reflective vest, and a balloon.
  • Such objects can serve as markers because of their large size or eye-catching colors.
  • the identifier can also be the object to be measured itself, or can be a specific gesture or human posture of the measuring operator.
  • a specific instruction sent by the mobile smart terminal or other devices can be used as the activation instruction.
  • the power-on of the terminal device, or the establishment of a connection between the computing unit and the ranging device through the wireless network can also be used as an activation instruction.
  • Recognition by the terminal device of a preset specific object, gesture, body posture, or illumination change (such as a strobe) in the video frame can also serve as an activation instruction.
  • When the standby time of the terminal device reaches a preset time threshold, this can also serve as an activation instruction.
  • the determining of the first current angle of the pan/tilt head according to the imaging coordinates of the first center point includes sub-steps:
  • The gimbal is driven to rotate by the horizontal angle and the vertical angle so that the center point of the video frame is aligned with the center point of the marker, and the angle of the gimbal after rotation is taken as the first current angle of the gimbal.
  • By comparing the imaging coordinates of the first center point with the coordinates of the video frame's center point, the pixel difference between them is obtained; the horizontal and vertical angles that the gimbal needs to rotate are then calculated from this pixel difference, and the gimbal is finally driven to rotate by those angles, so that the center point of the video frame is aligned with the center point of the marker and the first current angle of the gimbal is obtained.
  • the determining the imaging size of the marker according to the imaging width and the imaging height includes:
  • the imaging size of the marker is determined according to the imaging width and the imaging height satisfying preset conditions.
  • The imaging width and imaging height are compared with the preset width interval and the preset height interval respectively to obtain a comparison result, and the camera magnification of the ranging device is then adjusted according to the comparison result so that, by adjusting the video frame, the imaging width and imaging height meet the preset conditions; finally, the imaging size of the marker is determined from the imaging width and imaging height that satisfy the preset conditions.
  • Adjusting the camera magnification of the ranging device according to the comparison result, so that the imaging width and imaging height meet the preset conditions by adjusting the video frame, includes:
  • the camera magnification is calculated and the camera is controlled to zoom in order to reduce the video picture.
  • In this way, the video frame is adjusted so that the imaging width and the imaging height satisfy the preset conditions.
  • After the spatial coordinates of the marker are calculated according to the angle of the gimbal, the current magnification of the camera, and the imaging size of the marker, the method also includes:
  • the rotation angle of the gimbal is calculated according to the imaging coordinates of the center point and preset device parameters, so that the gimbal rotates by the rotation angle and the laser spot of the ranging device lands on the center point of the target object;
  • the spatial coordinates of the center point of the target object are calculated according to the second current angle and the laser distance.
  • The spatial coordinates of the target's center point can be calculated from the second current angle and the laser distance, thereby further improving the measurement accuracy of the target's center coordinates.
  • the method further includes:
  • the travel direction information is generated according to the difference between the space coordinates of the marker and the space coordinates of the target point, so as to prompt the travel direction information to the user.
  • Before receiving the first activation instruction and transitioning from the target detection state to the target tracking state, the method further includes the steps:
  • the terminal device executes:
  • the travel direction information may be generated according to the difference between the spatial coordinates of the marker and the spatial coordinates of the target point, and then the travel direction information may be prompted to the user.
  • the embodiment of the present application discloses a remote stakeout device based on machine vision, the device is applied to terminal equipment, and the device includes:
  • a receiving module for receiving the first activation instruction
  • the state switching module is used to switch from the target detection state to the target tracking state
  • the acquisition module is used to acquire the video image from the ranging device
  • an identification module configured to detect whether the video picture contains a marker according to a machine vision algorithm
  • The first calculation module is configured to calculate the imaging width and imaging height of the marker in the video frame and the imaging coordinates of the first center point of the marker in the video frame when the identification module detects that the video frame contains a marker;
  • a first determining module configured to determine the first current angle of the pan-tilt head according to the imaging coordinates of the first center point
  • a second determining module configured to determine the imaging size of the marker according to the imaging width and the imaging height
  • a second calculation module configured to calculate the spatial coordinates of the marker according to the first current angle of the pan/tilt head, the current magnification of the camera and the imaging size of the marker;
  • the receiving module is further configured to receive a second activation instruction
  • The state switching module is further configured to switch from the target tracking state to a precise alignment state; in the precise alignment state:
  • the acquiring module is further configured to acquire video images from the ranging device again;
  • the first calculation module is further configured to calculate the imaging coordinates of the center point of the target in the video frame according to a machine vision algorithm
  • The first determining module is further configured to calculate the rotation angle of the gimbal according to the imaging coordinates of the center point and preset device parameters, so that the gimbal rotates by the rotation angle and the laser spot of the ranging device falls on the center point of the target object;
  • the acquiring module is further configured to acquire the second current angle after the pan/tilt is rotated and the laser distance between the ranging device and the target;
  • the second calculation module is further configured to calculate the spatial coordinates of the center point of the target object according to the second current angle and the laser distance.
  • By executing the machine vision-based remote stakeout method, the machine vision-based remote stakeout device of the present application receives the first activation instruction, obtains a video frame from the ranging device, and then detects, according to the machine vision algorithm, whether the video frame contains a marker.
  • When a marker is detected in the video frame, the imaging width and imaging height of the marker in the video frame and the imaging coordinates of the marker's first center point in the video frame are calculated; the first current angle of the gimbal is then determined from the first center point imaging coordinates, the imaging size of the marker is determined from the imaging width and imaging height, the spatial coordinates of the marker are calculated from the first current angle of the gimbal, the current magnification of the camera of the ranging device, and the imaging size of the marker, and the tracking of the marker is finally completed.
  • After receiving the second activation instruction, the terminal device can enter the precise alignment state.
  • In the precise alignment state, the terminal device acquires a video frame from the ranging device again, calculates the imaging coordinates of the target's center point in the video frame according to the machine vision algorithm, and calculates the rotation angle of the gimbal from the center point imaging coordinates and the preset device parameters, so that the gimbal rotates by that angle and the laser spot of the ranging device lands on the center point of the target object; the second current angle after the gimbal rotates and the laser distance between the ranging device and the target are then obtained.
  • The spatial coordinates of the target's center point are calculated from the second current angle and the laser distance, so that the terminal device completes precise positioning based on the target object.
  • The embodiments of the present application use a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning based on the target object, so surveyors do not need to move back and forth repeatedly and the method depends little on their surveying skill, which reduces the influence of operator error on measurement accuracy. The embodiments of the present application therefore offer high measurement efficiency and high measurement accuracy.
  • The embodiments of the present application can track stably by using an easily recognizable marker.
  • the embodiment of the present application has the advantage of low cost.
  • An embodiment of the present application discloses a terminal device, where the terminal device includes:
  • a memory storing executable program code;
  • a processor coupled to the memory;
  • the processor invokes the executable program code stored in the memory to execute the machine vision-based remote stakeout method disclosed in the embodiment of the present application.
  • the terminal device of the present application can obtain a video image from a ranging device by receiving a first activation instruction, and then detect whether the video image contains a marker according to a machine vision algorithm.
  • When a marker is detected, the imaging width and imaging height of the marker in the video frame and the imaging coordinates of the marker's first center point in the video frame are calculated; the first current angle of the gimbal is then determined from the first center point imaging coordinates, the imaging size of the marker is determined from the imaging width and imaging height, the spatial coordinates of the marker are calculated from the first current angle of the gimbal, the current magnification of the camera, and the imaging size of the marker, and the tracking of the marker is finally completed.
  • After receiving the second activation instruction, the terminal device can enter the precise alignment state.
  • In the precise alignment state, the terminal device acquires a video frame from the ranging device again, calculates the imaging coordinates of the target's center point in the video frame according to the machine vision algorithm, and calculates the rotation angle of the gimbal from the center point imaging coordinates and the preset device parameters, so that the gimbal rotates by that angle and the laser spot of the ranging device lands on the center point of the target object; the second current angle after the gimbal rotates and the laser distance between the ranging device and the target are then obtained.
  • The spatial coordinates of the target's center point are calculated from the second current angle and the laser distance, so that the terminal device completes precise positioning based on the target object.
  • The embodiments of the present application use a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning based on the target object, so surveyors do not need to move back and forth repeatedly and the method depends little on their surveying skill, which reduces the influence of operator error on measurement accuracy. The embodiments of the present application therefore offer high measurement efficiency and high measurement accuracy.
  • The embodiments of the present application can track stably by using an easily identifiable marker.
  • the embodiment of the present application has the advantage of low cost.
  • the embodiments of the present application disclose a storage medium, where the storage medium stores computer instructions, and when the computer instructions are invoked, the computer instructions are used to execute the machine vision-based remote stakeout method disclosed in the embodiments of the present application.
  • the storage medium of the present application executes the remote stakeout method based on machine vision, and can obtain a video picture from a ranging device by receiving a first activation instruction, and then detect whether the video picture contains a marker according to a machine vision algorithm.
  • When a marker is detected, the imaging width and imaging height of the marker in the video frame and the imaging coordinates of the marker's first center point in the video frame are calculated; the first current angle of the gimbal is then determined from the first center point imaging coordinates, the imaging size of the marker is determined from the imaging width and imaging height, the spatial coordinates of the marker are calculated from the first current angle of the gimbal, the current magnification of the camera of the ranging device, and the imaging size of the marker, and the tracking of the marker is finally completed.
  • After receiving the second activation instruction, the terminal device can enter the precise alignment state.
  • In the precise alignment state, the terminal device acquires a video frame from the ranging device again, calculates the imaging coordinates of the target's center point in the video frame according to the machine vision algorithm, and calculates the rotation angle of the gimbal from the center point imaging coordinates and the preset device parameters, so that the gimbal rotates by that angle and the laser spot of the ranging device lands on the center point of the target object; the second current angle after the gimbal rotates and the laser distance between the ranging device and the target are then obtained.
  • The spatial coordinates of the target's center point are calculated from the second current angle and the laser distance, so that the terminal device completes precise positioning based on the target object.
  • The embodiments of the present application use a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning based on the target object, so surveyors do not need to move back and forth repeatedly and the method depends little on their surveying skill, which reduces the influence of operator error on measurement accuracy. The embodiments of the present application therefore offer high measurement efficiency and high measurement accuracy.
  • The embodiments of the present application can track stably by using an easily recognizable marker.
  • the embodiment of the present application has the advantage of low cost.
  • FIG. 1 is a schematic flowchart of a machine vision-based remote stakeout method disclosed in an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of a machine vision-based remote stakeout device disclosed in an embodiment of the present application
  • FIG. 3 is a schematic structural diagram of a terminal device disclosed in an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of a machine vision-based remote stakeout method disclosed in an embodiment of the present application, and the method is applied to a terminal device. As shown in Figure 1, the method of the embodiment of the present application includes the steps:
  • The terminal device receives the first activation instruction and switches from the target detection state to the target tracking state.
  • the terminal device performs the following steps:
  • the marker is one of a measurement operator, a reflective vest, and a balloon.
  • the identifier may also be the object to be measured itself, or may be a specific gesture or human body posture of the measurement operator.
  • a specific instruction sent by a mobile smart terminal or other device may be used as an activation instruction.
  • the power-on of the terminal device or the establishment of a connection between the computing unit and the ranging device through the wireless network can also be used as an activation instruction.
  • Recognition by the terminal device of a preset specific object, gesture, body posture, or illumination change (such as a strobe) in the video frame can also serve as an activation instruction.
  • When the standby time of the terminal device reaches a preset time threshold, this can also serve as an activation instruction.
  • a computing unit is installed in the terminal device, and the computing unit is used to execute the machine vision-based remote stakeout method disclosed in the embodiments of the present application.
  • the ranging device is installed on the pan-tilt, and the ranging device can rotate as the pan-tilt rotates.
  • the mobile intelligent terminal may be a mobile phone, or may be other mobile communication terminals such as a PAD and a notebook, which are not limited in this embodiment of the present application.
  • the terminal device may perform remote communication with the mobile intelligent terminal through a wireless network.
  • the surveyor can send an activation instruction to the terminal device through the mobile smart terminal, and at the same time, the mobile smart terminal can display the measurement results to the surveyor or feed back the interactive results to the surveyor in the form of voice prompts.
  • the marker may be the object to be measured itself, or other objects, such as a vest decorated with a specific pattern, and optionally, the marker may also be a specific gesture or human posture of the measuring person.
  • the naming differences among the first activation instruction, the second activation instruction, and the third activation instruction in the embodiments of the present application are for the convenience of describing the instructions input by the surveyor to the terminal device at different stages.
  • The operation of the terminal device can be divided into a standby state, a target detection state, a target tracking state, and a precise alignment state according to what the terminal device executes, and transitions between these states occur under specified conditions. It should be noted that this division of states is intended to help surveyors intuitively understand how the terminal device is being used, rather than to strictly confine a given step of the terminal device to a particular state.
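  • As a minimal illustration of these state transitions (the state names and triggering instructions are taken from this description, while the class and method names below are hypothetical), the following Python sketch models how the activation instructions and the detection result move the terminal device between states:

```python
from enum import Enum, auto


class State(Enum):
    STANDBY = auto()
    TARGET_DETECTION = auto()
    TARGET_TRACKING = auto()
    PRECISE_ALIGNMENT = auto()


class StakeoutStateMachine:
    """Hypothetical controller mirroring the states described in this embodiment."""

    def __init__(self):
        self.state = State.STANDBY

    def on_third_activation(self):
        # Third activation instruction: standby -> target detection.
        if self.state is State.STANDBY:
            self.state = State.TARGET_DETECTION

    def on_detection_result(self, marker_found: bool):
        # In the target detection state, finding a marker allows entry into
        # target tracking; otherwise fall back to standby to reduce power consumption.
        if self.state is State.TARGET_DETECTION:
            self.state = State.TARGET_TRACKING if marker_found else State.STANDBY

    def on_first_activation(self):
        # First activation instruction: target detection -> target tracking.
        if self.state is State.TARGET_DETECTION:
            self.state = State.TARGET_TRACKING

    def on_second_activation(self):
        # Second activation instruction: target tracking -> precise alignment.
        if self.state is State.TARGET_TRACKING:
            self.state = State.PRECISE_ALIGNMENT
```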
  • the machine vision-based remote stakeout method of the embodiment of the present application obtains the video picture from the ranging device by receiving the first activation instruction, and then detects whether the video picture contains a marker according to the machine vision algorithm.
  • When a marker is detected, the imaging width and imaging height of the marker in the video frame and the imaging coordinates of the marker's first center point in the video frame are calculated; the first current angle of the gimbal is then determined from the first center point imaging coordinates, the imaging size of the marker is determined from the imaging width and imaging height, the spatial coordinates of the marker are calculated from the first current angle of the gimbal, the current magnification of the camera, and the imaging size of the marker, and the tracking of the marker is finally completed.
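  • One plausible way to realize this size-to-distance step, assuming the marker's physical size is known and the camera's focal length in pixels at each magnification has been calibrated (both assumptions, as are all the names below), is a pinhole-camera range estimate; the resulting distance can then be combined with the gimbal's first current angle in the same way as in the precise-alignment sketch further below:

```python
def estimate_marker_distance(marker_height_m, imaging_height_px, focal_px_by_zoom, current_zoom):
    """Rough range-from-size estimate under a pinhole camera model.

    marker_height_m   -- known physical height of the marker, in metres (assumed calibrated)
    imaging_height_px -- imaging height of the marker in the video frame, in pixels
    focal_px_by_zoom  -- hypothetical lookup table: camera magnification -> focal length in pixels
    current_zoom      -- current magnification of the ranging device's camera
    """
    focal_px = focal_px_by_zoom[current_zoom]
    # Pinhole model: imaging_size / focal_length = physical_size / distance.
    return marker_height_m * focal_px / imaging_height_px
```

  • The same estimate can be made from the imaging width and a known marker width; averaging the two estimates would reduce the effect of detection noise.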
  • the terminal device can enter the precise alignment state.
  • In the precise alignment state, the terminal device acquires a video frame from the ranging device again, calculates the imaging coordinates of the target's center point in the video frame according to the machine vision algorithm, and calculates the rotation angle of the gimbal from the center point imaging coordinates and the preset device parameters, so that the gimbal rotates by that angle and the laser spot of the ranging device lands on the center point of the target object; the second current angle after the gimbal rotates and the laser distance between the ranging device and the target are then obtained.
  • The spatial coordinates of the target's center point are calculated from the second current angle and the laser distance, so that the terminal device completes precise positioning based on the target object.
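  • A minimal sketch of this angle-plus-distance computation, assuming the gimbal reports pan and tilt in degrees and the coordinate frame is centred on the ranging device with the z axis pointing up (the axis convention and the station offset are assumptions; a real deployment would also fold in the instrument's surveyed position and any camera-to-laser offset from the preset device parameters):

```python
import math


def target_center_coordinates(pan_deg, tilt_deg, laser_distance_m, station_xyz=(0.0, 0.0, 0.0)):
    """Convert the second current angle of the gimbal and the measured laser
    distance into spatial coordinates of the target's center point."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    # Spherical -> Cartesian with the ranging device at station_xyz.
    dx = laser_distance_m * math.cos(tilt) * math.cos(pan)
    dy = laser_distance_m * math.cos(tilt) * math.sin(pan)
    dz = laser_distance_m * math.sin(tilt)
    x0, y0, z0 = station_xyz
    return x0 + dx, y0 + dy, z0 + dz
```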
  • The embodiments of the present application use a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning based on the target object, so surveyors do not need to move back and forth repeatedly and the method depends little on their surveying skill, which reduces the influence of operator error on measurement accuracy. The embodiments of the present application therefore offer high measurement efficiency and high measurement accuracy.
  • The embodiments of the present application can track stably by using an easily recognizable marker.
  • the embodiment of the present application has the advantage of low cost.
  • the first current angle of the gimbal is determined according to the imaging coordinates of the first center point
  • By comparing the imaging coordinates of the first center point with the coordinates of the video frame's center point, the pixel difference between them is obtained; the horizontal and vertical angles that the gimbal needs to rotate are then calculated from this pixel difference, and the gimbal is finally driven to rotate by those angles, so that the center point of the video frame is aligned with the center point of the marker and the first current angle of the gimbal is obtained.
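  • A minimal sketch of this pixel-difference-to-angle step, assuming the camera's horizontal and vertical fields of view at the current magnification are known and that a simple linear pixels-to-degrees mapping is adequate for small offsets (the field-of-view values and the sign convention are assumptions, not figures from this application):

```python
def gimbal_rotation_from_pixel_offset(center_xy, frame_size, fov_deg):
    """Map the pixel offset between the marker's first center point and the
    frame center to the horizontal and vertical angles the gimbal must rotate.

    center_xy  -- imaging coordinates (u, v) of the first center point, in pixels
    frame_size -- (width, height) of the video frame, in pixels
    fov_deg    -- (horizontal_fov, vertical_fov) of the camera at the current
                  magnification, in degrees (assumed to come from calibration)
    """
    u, v = center_xy
    width, height = frame_size
    hfov, vfov = fov_deg

    # Pixel difference between the marker's center point and the frame center.
    du = u - width / 2.0
    dv = v - height / 2.0

    # Linear pixels-to-degrees mapping (reasonable while the offset is small).
    pan_delta = du * hfov / width
    tilt_delta = -dv * vfov / height  # image v grows downward, tilt grows upward
    return pan_delta, tilt_delta
```

  • Driving the gimbal by these two deltas and then reading back its absolute angles would give the first current angle of the gimbal described above.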
  • the imaging size of the marker is determined according to the imaging width and imaging height, including sub-steps:
  • the imaging size of the marker is determined according to the imaging width and imaging height satisfying the preset conditions.
  • The imaging width and imaging height are compared with the preset width interval and the preset height interval respectively to obtain a comparison result, and the camera magnification of the ranging device is then adjusted according to the comparison result so that, by adjusting the video frame, the imaging width and imaging height meet the preset conditions; finally, the imaging size of the marker is determined from the imaging width and imaging height that satisfy the preset conditions.
  • Adjusting the camera magnification of the ranging device according to the comparison result, so that the imaging width and imaging height meet the preset conditions by adjusting the video frame, includes the sub-steps:
  • the camera magnification is calculated and the camera zoom is controlled to reduce the video image.
  • In this way, the video frame is adjusted so that the imaging width and the imaging height satisfy the preset conditions.
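  • A minimal sketch of this comparison-and-zoom adjustment, assuming the width and height intervals are expressed in pixels and that the camera exposes a simple zoom-ratio property (the interval values, the step factor, and the camera interface are hypothetical):

```python
def adjust_zoom_for_marker(camera, imaging_w, imaging_h,
                           width_range=(80, 240), height_range=(80, 240), step=1.2):
    """Compare the marker's imaging width/height with preset intervals and nudge
    the camera magnification until both fall inside them; returns True when the
    preset conditions are met and the current values can be taken as the imaging size."""
    w_lo, w_hi = width_range
    h_lo, h_hi = height_range

    if imaging_w > w_hi or imaging_h > h_hi:
        # Marker images too large: zoom out to shrink the video picture.
        camera.zoom = camera.zoom / step
        return False
    if imaging_w < w_lo or imaging_h < h_lo:
        # Marker images too small: zoom in to enlarge the video picture.
        camera.zoom = camera.zoom * step
        return False
    return True
```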
  • After step 107, in which the spatial coordinates of the marker are calculated according to the first current angle of the gimbal, the current magnification of the camera of the ranging device, and the imaging size of the marker, the method of the embodiment of the present application further includes the steps:
  • the travel direction information is generated according to the difference between the space coordinates of the marker and the space coordinates of the target point, so as to prompt the travel direction information to the user.
  • the travel direction information may be generated according to the difference between the spatial coordinates of the marker and the spatial coordinates of the target point, and then the travel direction information may be prompted to the user.
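  • A minimal sketch of generating travel direction information from the coordinate difference, assuming plane coordinates in metres with the y axis pointing north and a text prompt as the output format (both assumptions, since the coordinate frame and prompt style are not fixed here):

```python
import math


def travel_direction(marker_xyz, target_xyz):
    """Tell the user how far and in which compass direction to move from the
    marker's current position toward the target stakeout point."""
    dx = target_xyz[0] - marker_xyz[0]
    dy = target_xyz[1] - marker_xyz[1]
    distance = math.hypot(dx, dy)
    # Bearing measured clockwise from the +y (north) axis.
    bearing = (math.degrees(math.atan2(dx, dy)) + 360.0) % 360.0
    return f"move {distance:.2f} m toward bearing {bearing:.0f} deg"
```

  • Such a prompt could be shown on the mobile smart terminal or read out as a voice prompt, as described elsewhere in this application.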
  • Before receiving the first activation instruction and transitioning from the target detection state to the target tracking state, the method further includes the steps:
  • the terminal device executes:
  • In the target detection state, the terminal device preliminarily detects whether there is a marker in the video frame; if there is, the terminal device can enter the target tracking state, and if not, it can enter the standby state, so that the power consumption of the terminal device can be reduced.
  • FIG. 2 is a schematic structural diagram of a machine vision-based remote stakeout device disclosed in an embodiment of the present application, and the device is applied to a terminal device. As shown in Figure 2, the device includes:
  • a receiving module 201 configured to receive a first activation instruction
  • a state switching module 202 configured to switch from a target detection state to a target tracking state
  • an acquisition module 203 configured to acquire a video picture from a ranging device
  • the identification module 204 is used for detecting whether the video picture contains a marker according to the machine vision algorithm
  • the first calculation module 205 is used to calculate the imaging width and imaging height of the marker in the video picture and the imaging coordinates of the first center point of the marker in the video picture when the identification module detects that the marker is included in the video picture;
  • a first determining module 206 configured to determine the first current angle of the pan/tilt head according to the imaging coordinates of the first center point;
  • the second determining module 207 is configured to determine the imaging size of the marker according to the imaging width and imaging height;
  • the second calculation module 208 is configured to calculate and obtain the spatial coordinates of the marker according to the first current angle of the pan/tilt head, the current magnification of the camera and the imaging size of the marker;
  • the receiving module 201 is further configured to receive a second activation instruction
  • The state switching module 202 is further configured to switch from the target tracking state to the precise alignment state; in the precise alignment state:
  • the acquiring module 203 is further configured to acquire the video picture from the ranging device again;
  • the first calculation module 205 is further configured to calculate the imaging coordinates of the center point of the target in the video frame according to the machine vision algorithm;
  • The first determination module 206 is further configured to calculate the rotation angle of the gimbal according to the imaging coordinates of the center point and the preset device parameters, so that the gimbal rotates by the rotation angle and the laser spot of the ranging device falls on the center point of the target object;
  • the acquiring module 203 is also used to acquire the second current angle after the pan/tilt is rotated and the laser distance between the ranging device and the target;
  • the second calculation module 208 is further configured to calculate the spatial coordinates of the center point of the target object according to the second current angle and the laser distance.
  • the marker is one of a measurement operator, a reflective vest, and a balloon.
  • the identifier may also be the object to be measured itself, or may be a specific gesture or human body posture of the measurement operator.
  • a specific instruction sent by a mobile smart terminal or other device may be used as an activation instruction.
  • the power-on of the terminal device or the establishment of a connection between the computing unit and the ranging device through the wireless network can also be used as an activation instruction.
  • Recognition by the terminal device of a preset specific object, gesture, body posture, or illumination change (such as a strobe) in the video frame can also serve as an activation instruction.
  • When the standby time of the terminal device reaches a preset time threshold, this can also serve as an activation instruction.
  • a computing unit is installed in the terminal device, and the computing unit is used to execute the machine vision-based remote stakeout method disclosed in the embodiments of the present application.
  • the ranging device is installed on the pan-tilt, and the ranging device can rotate as the pan-tilt rotates.
  • the mobile intelligent terminal may be a mobile phone, or may be other mobile communication terminals such as a PAD and a notebook, which are not limited in this embodiment of the present application.
  • the terminal device may perform remote communication with the mobile intelligent terminal through a wireless network.
  • the surveyor can send an activation instruction to the terminal device through the mobile smart terminal, and at the same time, the mobile smart terminal can display the measurement results to the surveyor or feed back the interactive results to the surveyor in the form of voice prompts.
  • the naming differences among the first activation instruction, the second activation instruction, and the third activation instruction in the embodiments of the present application are for the convenience of describing the instructions input by the surveyor to the terminal device at different stages.
  • The operation of the terminal device can be divided into a standby state, a target detection state, a target tracking state, and a precise alignment state according to what the terminal device executes, and transitions between these states occur under specified conditions. It should be noted that this division of states is intended to help surveyors intuitively understand how the terminal device is being used, rather than to strictly confine a given step of the terminal device to a particular state.
  • By executing the machine vision-based remote stakeout method, the machine vision-based remote stakeout device of the embodiment of the present application receives the first activation instruction, obtains a video frame from the ranging device, and then detects, according to a machine vision algorithm, whether the video frame contains a marker.
  • When a marker is detected in the video frame, the imaging width and imaging height of the marker in the video frame and the imaging coordinates of the marker's first center point are calculated; the first current angle of the gimbal is then determined from the first center point imaging coordinates, the imaging size of the marker is determined from the imaging width and imaging height, the spatial coordinates of the marker are calculated from the first current angle of the gimbal, the current magnification of the camera of the ranging device, and the imaging size of the marker, and the tracking of the marker is finally completed.
  • the terminal device can enter the precise alignment state.
  • In the precise alignment state, the terminal device acquires a video frame from the ranging device again, calculates the imaging coordinates of the target's center point in the video frame according to the machine vision algorithm, and calculates the rotation angle of the gimbal from the center point imaging coordinates and the preset device parameters, so that the gimbal rotates by that angle and the laser spot of the ranging device lands on the center point of the target object; the second current angle after the gimbal rotates and the laser distance between the ranging device and the target are then obtained.
  • The spatial coordinates of the target's center point are calculated from the second current angle and the laser distance, so that the terminal device completes precise positioning based on the target object.
  • The embodiments of the present application use a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning based on the target object, so surveyors do not need to move back and forth repeatedly and the method depends little on their surveying skill, which reduces the influence of operator error on measurement accuracy. The embodiments of the present application therefore offer high measurement efficiency and high measurement accuracy.
  • The embodiments of the present application can track stably by using an easily recognizable marker.
  • the embodiment of the present application has the advantage of low cost.
  • the specific manner in which the first determination module 206 determines the first current angle of the pan/tilt head according to the imaging coordinates of the first center point is as follows:
  • By comparing the imaging coordinates of the first center point with the coordinates of the video frame's center point, the pixel difference between them is obtained; the horizontal and vertical angles that the gimbal needs to rotate are then calculated from this pixel difference, and the gimbal is finally driven to rotate by those angles, so that the center point of the video frame is aligned with the center point of the marker and the first current angle of the gimbal is obtained.
  • the specific manner in which the second determining module 207 determines the imaging size of the marker according to the imaging width and imaging height is as follows:
  • the imaging size of the marker is determined according to the imaging width and imaging height satisfying the preset conditions.
  • The imaging width and imaging height are compared with the preset width interval and the preset height interval respectively to obtain a comparison result, and the camera magnification of the ranging device is then adjusted according to the comparison result so that, by adjusting the video frame, the imaging width and imaging height meet the preset conditions; finally, the imaging size of the marker is determined from the imaging width and imaging height that satisfy the preset conditions.
  • The specific manner in which the second determining module 207 adjusts the camera magnification of the ranging device according to the comparison result, so that the imaging width and imaging height meet the preset conditions by adjusting the video frame, is as follows:
  • the camera magnification is calculated and the camera zoom is controlled to reduce the video image.
  • In this way, the video frame is adjusted so that the imaging width and the imaging height satisfy the preset conditions.
  • the apparatus of the embodiment of the present application further includes a third calculation module and a generation module, wherein:
  • the third calculation module calculates the difference between the spatial coordinates of the marker and the spatial coordinates of the target point
  • the generating module is used for generating travel direction information according to the difference between the space coordinates of the marker and the space coordinates of the target point, so as to prompt the travel direction information to the user.
  • the travel direction information may be generated according to the difference between the spatial coordinates of the marker and the spatial coordinates of the target point, and then the travel direction information may be prompted to the user.
  • the receiving module 201 is further configured to receive a third activation instruction
  • the state switching module 202 is further configured to switch from the standby state to the target detection state.
  • In the target detection state, the terminal device executes the following:
  • the acquisition module 203 is further configured to acquire a video frame from the ranging device, the identification module 204 is further configured to detect whether there is a marker in the video frame, and if there is, the state switching module 202 controls the device to enter the target tracking state.
  • In the target detection state, the terminal device preliminarily detects whether there is a marker in the video frame; if there is, the terminal device can enter the target tracking state, and if not, it can enter the standby state, so that the power consumption of the terminal device can be reduced.
  • FIG. 3 is a schematic structural diagram of a terminal device disclosed in an embodiment of the present application.
  • the terminal equipment includes:
  • a memory 301 storing executable program code
  • processor 302 coupled to the memory 301;
  • the processor 302 invokes the executable program code stored in the memory 301 to execute the machine vision-based remote stakeout method disclosed in the embodiments of the present application.
  • the terminal device of the present application can obtain a video image from a ranging device by receiving a first activation instruction, and then detect whether the video image contains a marker according to a machine vision algorithm.
  • When a marker is detected, the imaging width and imaging height of the marker in the video frame and the imaging coordinates of the marker's first center point in the video frame are calculated; the first current angle of the gimbal is then determined from the first center point imaging coordinates, the imaging size of the marker is determined from the imaging width and imaging height, the spatial coordinates of the marker are calculated from the first current angle of the gimbal, the current magnification of the camera, and the imaging size of the marker, and the tracking of the marker is finally completed.
  • the terminal device can enter the precise alignment state.
  • In the precise alignment state, the terminal device acquires a video frame from the ranging device again, calculates the imaging coordinates of the target's center point in the video frame according to the machine vision algorithm, and calculates the rotation angle of the gimbal from the center point imaging coordinates and the preset device parameters, so that the gimbal rotates by that angle and the laser spot of the ranging device lands on the center point of the target object; the second current angle after the gimbal rotates and the laser distance between the ranging device and the target are then obtained.
  • The spatial coordinates of the target's center point are calculated from the second current angle and the laser distance, so that the terminal device completes precise positioning based on the target object.
  • The embodiments of the present application use a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning based on the target object, so surveyors do not need to move back and forth repeatedly and the method depends little on their surveying skill, which reduces the influence of operator error on measurement accuracy. The embodiments of the present application therefore offer high measurement efficiency and high measurement accuracy.
  • The embodiments of the present application can track stably by using an easily recognizable marker.
  • the embodiment of the present application has the advantage of low cost.
  • the embodiments of the present application disclose a storage medium, where computer instructions are stored in the storage medium, and when the computer instructions are invoked, they are used to execute the machine vision-based remote stakeout method disclosed in the embodiments of the present application.
  • the storage medium of the present application can acquire a video image from a ranging device by receiving a first activation instruction, and then detect whether the video image contains a marker according to a machine vision algorithm.
  • When a marker is detected, the imaging width and imaging height of the marker in the video frame and the imaging coordinates of the marker's first center point in the video frame are calculated; the first current angle of the gimbal is then determined from the first center point imaging coordinates, the imaging size of the marker is determined from the imaging width and imaging height, the spatial coordinates of the marker are calculated from the first current angle of the gimbal, the current magnification of the camera, and the imaging size of the marker, and the tracking of the marker is finally completed.
  • the terminal device can enter the precise alignment state.
  • In the precise alignment state, the terminal device acquires a video frame from the ranging device again, calculates the imaging coordinates of the target's center point in the video frame according to the machine vision algorithm, and calculates the rotation angle of the gimbal from the center point imaging coordinates and the preset device parameters, so that the gimbal rotates by that angle and the laser spot of the ranging device lands on the center point of the target object; the second current angle after the gimbal rotates and the laser distance between the ranging device and the target are then obtained.
  • The spatial coordinates of the target's center point are calculated from the second current angle and the laser distance, so that the terminal device completes precise positioning based on the target object.
  • The embodiments of the present application use a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning based on the target object, so surveyors do not need to move back and forth repeatedly and the method depends little on their surveying skill, which reduces the influence of operator error on measurement accuracy. The embodiments of the present application therefore offer high measurement efficiency and high measurement accuracy.
  • The embodiments of the present application can track stably by using an easily recognizable marker.
  • the embodiment of the present application has the advantage of low cost.
  • the disclosed apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division.
  • Multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some communication interfaces, or through indirect coupling or communication connections between devices or units, and may be in electrical, mechanical, or other forms.
  • units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional module in each embodiment of the present application may be integrated together to form an independent part, or each module may exist alone, or two or more modules may be integrated to form an independent part.
  • If the functions are implemented in the form of software function modules and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • The technical solutions of the present application, in essence, or the parts that contribute to the prior art, or parts of the technical solutions, can be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • The present application provides a machine vision-based remote stakeout method, device, terminal device, and storage medium, which use a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning based on the target object; this reduces the influence of surveyor error on measurement accuracy and offers high measurement efficiency, high measurement accuracy, and low cost.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)

Abstract

A method and an apparatus for remote setting-out based on machine vision, and a terminal device and a storage medium, the method for remote setting-out based on machine vision comprising: automatically tracking easily detectable markers by means of a machine vision algorithm and, when receiving a precise positioning command, implementing precise positioning on the basis of a target object to finally complete setting-out; the present method has the advantages of high setting-out efficiency, high setting-out precision, and low costs.

Description

基于机器视觉远程放样方法、装置及终端设备、存储介质Machine vision-based remote stakeout method, device and terminal equipment, storage medium
相关申请的交叉引用CROSS-REFERENCE TO RELATED APPLICATIONS
本申请要求于2020年06月30日提交中国专利局的申请号为202010622000.0、名称为“基于机器视觉远程放样方法、装置及终端设备、存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims the priority of the Chinese patent application with the application number 202010622000.0 and the title of "Machine Vision-Based Remote Stakeout Method, Device and Terminal Equipment, Storage Medium" filed with the China Patent Office on June 30, 2020, the entire content of which is approved by Reference is incorporated in this application.
技术领域technical field
本申请涉及测量领域,具体而言,涉及一种基于机器视觉远程放样方法、装置及终端设备、存储介质。The present application relates to the field of measurement, and in particular, to a machine vision-based remote stakeout method, device, terminal device, and storage medium.
背景技术Background technique
在工程建设过程中,需要精确测量各关键点的空间坐标,用于指导后续的施工建设工作,是工程建设的重要环节。目前主流的放样方法利用全站仪进行操作,需要两名测量人员密切配合,其中跑杆员负责在操作员的指引下将棱镜杆移动到关键点,操作员负责将全站仪的激光落点精确对准棱镜杆中心进行测量并得到关键点的空间坐标,然而这种测量方式需要两名测量人员密切配合才能实施,且测量精度受操作员自身的专业技能水平限制,从而具有测量效率低、精度低等缺点。In the process of engineering construction, it is necessary to accurately measure the spatial coordinates of each key point to guide the subsequent construction work, which is an important part of engineering construction. The current mainstream stakeout method uses a total station to operate, which requires the close cooperation of two surveyors. Among them, the pole runner is responsible for moving the prism rod to the key point under the guidance of the operator, and the operator is responsible for placing the laser on the total station. Accurately align the center of the prism rod for measurement and obtain the spatial coordinates of key points. However, this measurement method requires the close cooperation of two measurement personnel to implement, and the measurement accuracy is limited by the operator's own professional skill level, resulting in low measurement efficiency, Disadvantages such as low precision.
SUMMARY OF THE INVENTION
The embodiments of the present application provide a machine vision-based remote stakeout method, apparatus, terminal device, and storage medium, so as to improve the efficiency and accuracy of stakeout.
An embodiment of the present application discloses a machine vision-based remote stakeout method. The method is applied to a terminal device and includes the following steps:
receiving a first activation instruction and switching from a target detection state to a target tracking state, wherein in the target tracking state the terminal device performs the following:
acquiring a video frame from a ranging device;
detecting, according to a machine vision algorithm, whether the video frame contains a marker;
when the video frame is detected to contain a marker, calculating the imaging width and imaging height of the marker in the video frame and the first center point imaging coordinates of the marker in the video frame;
determining a first current angle of a pan-tilt according to the first center point imaging coordinates;
determining the imaging size of the marker according to the imaging width and the imaging height;
calculating the spatial coordinates of the marker according to the first current angle of the pan-tilt, the current magnification of the camera of the ranging device, and the imaging size of the marker; and receiving a second activation instruction and switching from the target tracking state to a precise alignment state, wherein in the precise alignment state the terminal device performs the following:
acquiring a video frame from the ranging device again;
calculating, according to the machine vision algorithm, the center point imaging coordinates of a target object in the video frame;
calculating a rotation angle of the pan-tilt according to the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by the rotation angle and the laser spot of the ranging device falls on the center point of the target object;
obtaining a second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object; and
calculating the spatial coordinates of the center point of the target object according to the second current angle and the laser distance.
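The patent does not spell out the formula behind the tracking step that computes the marker's spatial coordinates from the pan-tilt angle, the camera magnification, and the marker's imaging size. Purely as an illustration, the following Python sketch shows one common reading of that step: a pinhole-camera model in which the marker's known physical size and its imaging size give an estimated distance, and the pan-tilt angles then give Cartesian coordinates. The function name, base_focal_px, and the example values are assumptions introduced here, not values from the patent.

import math

def estimate_marker_coordinates(pan_deg, tilt_deg, zoom_ratio,
                                imaging_height_px, marker_height_m,
                                base_focal_px=1000.0):
    # Assumption: focal length in pixels scales linearly with the zoom ratio.
    focal_px = base_focal_px * zoom_ratio
    # Pinhole model: imaging_height_px = focal_px * marker_height_m / distance
    distance = focal_px * marker_height_m / imaging_height_px
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    # Spherical -> Cartesian, with the ranging device at the origin.
    x = distance * math.cos(tilt) * math.cos(pan)
    y = distance * math.cos(tilt) * math.sin(pan)
    z = distance * math.sin(tilt)
    return x, y, z

# Example: a 1.8 m tall marker imaged 120 px high at 3x zoom, pan 30 deg, tilt 5 deg.
print(estimate_marker_coordinates(30.0, 5.0, 3.0, 120.0, 1.8))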
In the machine vision-based remote stakeout method of the present application, the terminal device receives the first activation instruction and acquires a video frame from the ranging device, then detects, according to the machine vision algorithm, whether the video frame contains a marker. When the video frame is detected to contain a marker, the imaging width and imaging height of the marker in the video frame and the first center point imaging coordinates of the marker in the video frame are calculated. Finally, the first current angle of the pan-tilt is determined from the first center point imaging coordinates, the imaging size of the marker is determined from the imaging width and imaging height, and the spatial coordinates of the marker are calculated from the first current angle of the pan-tilt, the current magnification of the camera of the ranging device, and the imaging size of the marker, thereby completing tracking of the marker.
In addition, once tracking of the marker is complete, the terminal device can enter the precise alignment state. In this state, the terminal device again acquires a video frame from the ranging device, calculates the center point imaging coordinates of the target object in the video frame according to the machine vision algorithm, and calculates the rotation angle of the pan-tilt from the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by that angle and the laser spot of the ranging device falls on the center point of the target object. The terminal device then obtains the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object, and calculates the spatial coordinates of the center point of the target object from the second current angle and the laser distance. In this way, the terminal device can use the target object to complete precise positioning.
Compared with manual stakeout in the art, the embodiments of the present application use a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning based on the target object, so the surveyor no longer needs to move back and forth repeatedly and little depends on the surveyor's measurement skill, which reduces the influence of operator error on measurement accuracy; the embodiments therefore offer high measurement efficiency and high measurement accuracy. At the same time, because the target object itself is not conspicuous enough for the machine vision algorithm to track reliably, the embodiments achieve accurate tracking through an easily recognizable marker. In addition, compared with automatic stakeout equipment in the art, the embodiments of the present application have the advantage of low cost.
In an embodiment of the present application, as an optional implementation, the marker is one of a surveyor, a reflective vest, and a balloon.
In this optional implementation, a surveyor, a reflective vest, or a balloon can serve as the marker because of its relatively large size or conspicuous color.
Optionally, the marker may also be the measured object itself, or a specific gesture or body posture of the surveyor.
Optionally, a specific instruction sent by a mobile smart terminal or another device may serve as an activation instruction.
Optionally, powering on the terminal device, or the computing unit establishing a connection with the ranging device over a wireless network, may also serve as an activation instruction.
Optionally, the terminal device recognizing a preset specific object, gesture, body posture, or lighting change (such as a strobe) in the video frame may also serve as an activation instruction.
Optionally, the standby time of the terminal device reaching a preset time threshold may also serve as an activation instruction.
In an embodiment of the present application, as an optional implementation, determining the first current angle of the pan-tilt according to the first center point imaging coordinates includes the following sub-steps:
comparing the first center point imaging coordinates with the center point coordinates of the video frame to obtain the pixel difference between the two;
calculating, according to the pixel difference, the horizontal angle and the vertical angle by which the pan-tilt needs to rotate;
driving the pan-tilt to rotate according to the horizontal angle and the vertical angle, so that the center point of the video frame is aligned with the center point of the marker, and taking the angle of the pan-tilt after rotation as the first current angle of the pan-tilt.
In this optional implementation, the pixel difference between the first center point imaging coordinates and the center point coordinates of the video frame is obtained by comparing the two, the horizontal and vertical rotation angles required of the pan-tilt are then calculated from the pixel difference, and finally the pan-tilt is driven to rotate by these angles, so that the center point of the video frame is aligned with the center point of the marker and the first current angle of the pan-tilt is obtained.
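To make these sub-steps concrete, here is a minimal Python sketch of the pixel-difference calculation. It assumes the rotation angle can be approximated as the pixel offset times the camera's field of view per pixel, which holds for small offsets; the function name and the field-of-view parameters are illustrative, not taken from the patent.

def rotation_from_pixel_offset(marker_cx, marker_cy, frame_w, frame_h,
                               hfov_deg, vfov_deg):
    # Pixel difference between the marker center and the frame center.
    dx = marker_cx - frame_w / 2.0
    dy = marker_cy - frame_h / 2.0
    # Convert pixels to degrees using the camera's current field of view.
    pan_step = dx * (hfov_deg / frame_w)
    tilt_step = -dy * (vfov_deg / frame_h)   # image y grows downward
    return pan_step, tilt_step

# Example: marker at (1100, 500) in a 1920x1080 frame with a 60 x 34 degree FOV.
print(rotation_from_pixel_offset(1100, 500, 1920, 1080, 60.0, 34.0))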
In an embodiment of the present application, as an optional implementation, determining the imaging size of the marker according to the imaging width and the imaging height includes:
comparing the imaging width and the imaging height with a preset width interval and a preset height interval, respectively, to obtain a comparison result;
adjusting the camera magnification of the ranging device according to the comparison result, so that the imaging width and the imaging height satisfy preset conditions after the video frame is adjusted;
determining the imaging size of the marker according to the imaging width and the imaging height that satisfy the preset conditions.
In this optional implementation, the imaging width and the imaging height are compared with the preset width interval and the preset height interval, respectively, to obtain a comparison result; the camera magnification of the ranging device is then adjusted according to the comparison result so that, after the video frame is adjusted, the imaging width and the imaging height satisfy the preset conditions; finally, the imaging size of the marker can be determined from the imaging width and the imaging height that satisfy the preset conditions.
In an embodiment of the present application, as an optional implementation, adjusting the camera magnification of the ranging device according to the comparison result so that the imaging width and the imaging height satisfy the preset conditions after the video frame is adjusted includes:
when the imaging width and the imaging height are smaller than the preset width interval and the preset height interval, respectively, calculating a camera magnification and controlling the camera of the ranging device to zoom accordingly so as to enlarge the video frame;
when the imaging width and the imaging height are larger than the preset width interval and the preset height interval, respectively, calculating a camera magnification and controlling the camera to zoom accordingly so as to shrink the video frame.
In this optional implementation, zooming the video frame in or out allows the imaging width and the imaging height to satisfy the preset conditions.
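The following Python sketch illustrates one way the comparison and zoom adjustment could work, assuming the preset conditions are simple width and height intervals in pixels and that the zoom ratio can be scaled proportionally. The interval values, step rule, and zoom limits are made-up examples, not values from the patent.

def adjust_zoom(imaging_w, imaging_h, zoom,
                width_range=(200, 400), height_range=(200, 400),
                zoom_limits=(1.0, 30.0)):
    w_lo, w_hi = width_range
    h_lo, h_hi = height_range
    if imaging_w < w_lo and imaging_h < h_lo:
        # Marker images too small: zoom in, but only as far as the smaller
        # shortfall requires, so neither dimension overshoots the interval.
        zoom *= min(w_lo / imaging_w, h_lo / imaging_h)
    elif imaging_w > w_hi and imaging_h > h_hi:
        # Marker images too large: zoom out conservatively.
        zoom *= max(w_hi / imaging_w, h_hi / imaging_h)
    # Clamp to the camera's physical zoom range.
    return max(zoom_limits[0], min(zoom_limits[1], zoom))

print(adjust_zoom(80, 90, zoom=2.0))    # too small -> zoom in
print(adjust_zoom(600, 650, zoom=8.0))  # too large -> zoom out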
In an embodiment of the present application, as an optional implementation, after the spatial coordinates of the marker are calculated according to the angle of the pan-tilt, the current camera magnification, and the imaging size of the marker, the method further includes:
receiving a second activation instruction and switching from the target tracking state to a precise alignment state, wherein in the precise alignment state the terminal device performs the following:
acquiring a video frame from the ranging device again;
calculating, according to the machine vision algorithm, the center point imaging coordinates of the target object in the video frame;
calculating a rotation angle of the pan-tilt according to the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by the rotation angle and the laser spot of the ranging device falls on the center point of the target object;
obtaining a second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object; and
calculating the spatial coordinates of the center point of the target object according to the second current angle and the laser distance.
In this optional implementation, by entering the precise alignment state, the spatial coordinates of the center point of the target object can be calculated from the second current angle and the laser distance, which further improves the measurement accuracy of those coordinates.
In an embodiment of the present application, as an optional implementation, after the spatial coordinates of the marker are calculated according to the first current angle of the pan-tilt, the current magnification of the camera of the ranging device, and the imaging size of the marker, the method further includes:
calculating the difference between the spatial coordinates of the marker and the spatial coordinates of a target point;
generating travel direction information according to the difference between the spatial coordinates of the marker and the spatial coordinates of the target point, so as to prompt the user with the travel direction information.
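As a rough illustration of how a travel-direction prompt could be derived from the coordinate difference, consider the Python sketch below. The axis naming (east/north) and the tolerance are assumptions introduced here; the patent only states that a direction prompt is generated from the difference.

def travel_direction(marker_xyz, target_xyz, tolerance_m=0.05):
    # Difference between the stakeout target point and the marker position.
    dx = target_xyz[0] - marker_xyz[0]
    dy = target_xyz[1] - marker_xyz[1]
    hints = []
    if abs(dx) > tolerance_m:
        hints.append(f"move {'east' if dx > 0 else 'west'} {abs(dx):.2f} m")
    if abs(dy) > tolerance_m:
        hints.append(f"move {'north' if dy > 0 else 'south'} {abs(dy):.2f} m")
    return ", ".join(hints) if hints else "on target point"

# Example: marker at (10.20, 5.10), target point at (10.00, 5.60).
print(travel_direction((10.20, 5.10, 0.0), (10.00, 5.60, 0.0)))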
In an embodiment of the present application, as an optional implementation, before the first activation instruction is received and the target detection state is switched to the target tracking state, the method further includes the following steps:
receiving a third activation instruction and switching from a standby state to the target detection state, wherein in the target detection state the terminal device performs the following:
acquiring a video frame from the ranging device and detecting whether the marker is present in the video frame, and if so, entering the target tracking state.
In this optional implementation, travel direction information can be generated according to the difference between the spatial coordinates of the marker and the spatial coordinates of the target point, and the travel direction information can then be prompted to the user.
An embodiment of the present application discloses a machine vision-based remote stakeout apparatus. The apparatus is applied to a terminal device and includes:
a receiving module, configured to receive a first activation instruction;
a state switching module, configured to switch from a target detection state to a target tracking state;
an acquisition module, configured to acquire a video frame from a ranging device;
a recognition module, configured to detect, according to a machine vision algorithm, whether the video frame contains a marker;
a first calculation module, configured to calculate, when the recognition module detects that the video frame contains a marker, the imaging width and imaging height of the marker in the video frame and the first center point imaging coordinates of the marker in the video frame;
a first determination module, configured to determine a first current angle of a pan-tilt according to the first center point imaging coordinates;
a second determination module, configured to determine the imaging size of the marker according to the imaging width and the imaging height;
a second calculation module, configured to calculate the spatial coordinates of the marker according to the first current angle of the pan-tilt, the current camera magnification, and the imaging size of the marker;
the receiving module is further configured to receive a second activation instruction;
the state switching module is further configured to switch from the target tracking state to a precise alignment state; in the precise alignment state:
the acquisition module is further configured to acquire a video frame from the ranging device again;
the first calculation module is further configured to calculate, according to the machine vision algorithm, the center point imaging coordinates of the target object in the video frame;
the first determination module is further configured to calculate a rotation angle of the pan-tilt according to the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by the rotation angle and the laser spot of the ranging device falls on the center point of the target object;
the acquisition module is further configured to obtain a second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object;
the second calculation module is further configured to calculate the spatial coordinates of the center point of the target object according to the second current angle and the laser distance.
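A minimal Python skeleton of how these modules might be grouped is sketched below. The class and method names are illustrative only, and capture() is a hypothetical call, since the patent specifies what each module is configured to do rather than any particular implementation.

class RemoteStakeoutApparatus:
    # Skeleton only: every body is a placeholder for the corresponding module.

    def __init__(self, ranging_device, gimbal):
        self.ranging_device = ranging_device   # camera plus laser rangefinder
        self.gimbal = gimbal                   # pan-tilt carrying the device
        self.state = "TARGET_DETECTION"

    def receive_activation(self, instruction):          # receiving module
        return instruction

    def switch_state(self, new_state):                  # state switching module
        self.state = new_state

    def acquire_frame(self):                            # acquisition module
        return self.ranging_device.capture()            # hypothetical call

    def detect_marker(self, frame):                     # recognition module
        raise NotImplementedError

    def marker_geometry(self, frame):                   # first calculation module
        raise NotImplementedError                       # width, height, center coords

    def first_current_angle(self, center_coords):       # first determination module
        raise NotImplementedError

    def imaging_size(self, width, height):              # second determination module
        raise NotImplementedError

    def spatial_coordinates(self, angle, zoom, size):   # second calculation module
        raise NotImplementedError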
By executing the machine vision-based remote stakeout method, the machine vision-based remote stakeout apparatus of the present application can, upon receiving the first activation instruction, acquire a video frame from the ranging device and then detect, according to the machine vision algorithm, whether the video frame contains a marker. When the video frame is detected to contain a marker, it calculates the imaging width and imaging height of the marker in the video frame and the first center point imaging coordinates of the marker in the video frame; it then determines the first current angle of the pan-tilt from the first center point imaging coordinates, determines the imaging size of the marker from the imaging width and imaging height, and calculates the spatial coordinates of the marker from the first current angle of the pan-tilt, the current magnification of the camera of the ranging device, and the imaging size of the marker, thereby completing tracking of the marker.
In addition, once tracking of the marker is complete, the terminal device can enter the precise alignment state. In this state, the terminal device again acquires a video frame from the ranging device, calculates the center point imaging coordinates of the target object in the video frame according to the machine vision algorithm, and calculates the rotation angle of the pan-tilt from the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by that angle and the laser spot of the ranging device falls on the center point of the target object. The terminal device then obtains the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object, and calculates the spatial coordinates of the center point of the target object from the second current angle and the laser distance. In this way, the terminal device can use the target object to complete precise positioning.
Compared with manual stakeout in the art, the embodiments of the present application use a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning based on the target object, so the surveyor no longer needs to move back and forth repeatedly and little depends on the surveyor's measurement skill, which reduces the influence of operator error on measurement accuracy; the embodiments therefore offer high measurement efficiency and high measurement accuracy. At the same time, because the target object itself is not conspicuous enough for the machine vision algorithm to track reliably, the embodiments achieve stable tracking through an easily recognizable marker. In addition, compared with automatic stakeout equipment in the art, the embodiments of the present application have the advantage of low cost.
An embodiment of the present application discloses a terminal device, including:
a memory storing executable program code; and
a processor coupled to the memory;
wherein the processor invokes the executable program code stored in the memory to execute the machine vision-based remote stakeout method disclosed in the embodiments of the present application.
By executing the machine vision-based remote stakeout method, the terminal device of the present application can, upon receiving the first activation instruction, acquire a video frame from the ranging device and then detect, according to the machine vision algorithm, whether the video frame contains a marker. When the video frame is detected to contain a marker, it calculates the imaging width and imaging height of the marker in the video frame and the first center point imaging coordinates of the marker in the video frame; it then determines the first current angle of the pan-tilt from the first center point imaging coordinates, determines the imaging size of the marker from the imaging width and imaging height, and calculates the spatial coordinates of the marker from the first current angle of the pan-tilt, the current camera magnification, and the imaging size of the marker, thereby completing tracking of the marker.
In addition, once tracking of the marker is complete, the terminal device can enter the precise alignment state. In this state, the terminal device again acquires a video frame from the ranging device, calculates the center point imaging coordinates of the target object in the video frame according to the machine vision algorithm, and calculates the rotation angle of the pan-tilt from the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by that angle and the laser spot of the ranging device falls on the center point of the target object. The terminal device then obtains the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object, and calculates the spatial coordinates of the center point of the target object from the second current angle and the laser distance. In this way, the terminal device can use the target object to complete precise positioning.
Compared with manual stakeout in the art, the embodiments of the present application use a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning based on the target object, so the surveyor no longer needs to move back and forth repeatedly and little depends on the surveyor's measurement skill, which reduces the influence of operator error on measurement accuracy; the embodiments therefore offer high measurement efficiency and high measurement accuracy. At the same time, because the target object itself is not conspicuous enough for the machine vision algorithm to track reliably, the embodiments achieve stable tracking through an easily recognizable marker. In addition, compared with automatic stakeout equipment in the art, the embodiments of the present application have the advantage of low cost.
An embodiment of the present application discloses a storage medium. The storage medium stores computer instructions which, when invoked, are used to execute the machine vision-based remote stakeout method disclosed in the embodiments of the present application.
By executing the machine vision-based remote stakeout method, the storage medium of the present application enables the terminal device, upon receiving the first activation instruction, to acquire a video frame from the ranging device and then detect, according to the machine vision algorithm, whether the video frame contains a marker. When the video frame is detected to contain a marker, the imaging width and imaging height of the marker in the video frame and the first center point imaging coordinates of the marker in the video frame are calculated; the first current angle of the pan-tilt is then determined from the first center point imaging coordinates, the imaging size of the marker is determined from the imaging width and imaging height, and the spatial coordinates of the marker are calculated from the first current angle of the pan-tilt, the current magnification of the camera of the ranging device, and the imaging size of the marker, thereby completing tracking of the marker.
In addition, once tracking of the marker is complete, the terminal device can enter the precise alignment state. In this state, the terminal device again acquires a video frame from the ranging device, calculates the center point imaging coordinates of the target object in the video frame according to the machine vision algorithm, and calculates the rotation angle of the pan-tilt from the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by that angle and the laser spot of the ranging device falls on the center point of the target object. The terminal device then obtains the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object, and calculates the spatial coordinates of the center point of the target object from the second current angle and the laser distance. In this way, the terminal device can use the target object to complete precise positioning.
Compared with manual stakeout in the art, the embodiments of the present application use a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning based on the target object, so the surveyor no longer needs to move back and forth repeatedly and little depends on the surveyor's measurement skill, which reduces the influence of operator error on measurement accuracy; the embodiments therefore offer high measurement efficiency and high measurement accuracy. At the same time, because the target object itself is not conspicuous enough for the machine vision algorithm to track reliably, the embodiments achieve stable tracking through an easily recognizable marker. In addition, compared with automatic stakeout equipment in the art, the embodiments of the present application have the advantage of low cost.
DESCRIPTION OF THE DRAWINGS
In order to explain the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present application and should therefore not be regarded as limiting the scope; a person of ordinary skill in the art can derive other related drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a machine vision-based remote stakeout method disclosed in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a machine vision-based remote stakeout apparatus disclosed in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a terminal device disclosed in an embodiment of the present application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a machine vision-based remote stakeout method disclosed in an embodiment of the present application; the method is applied to a terminal device. As shown in FIG. 1, the method of this embodiment includes the following steps:
101. Receive a first activation instruction and switch from the target detection state to the target tracking state. In the target tracking state, the terminal device performs the following steps:
102. Acquire a video frame from the ranging device.
103. Detect, according to a machine vision algorithm, whether the video frame contains a marker.
104. When the video frame is detected to contain a marker, calculate the imaging width and imaging height of the marker in the video frame and the first center point imaging coordinates of the marker in the video frame.
105. Determine the first current angle of the pan-tilt according to the first center point imaging coordinates.
106. Determine the imaging size of the marker according to the imaging width and the imaging height.
107. Calculate the spatial coordinates of the marker according to the first current angle of the pan-tilt, the current magnification of the camera of the ranging device, and the imaging size of the marker.
108. Receive a second activation instruction and switch from the target tracking state to the precise alignment state. In the precise alignment state, the terminal device performs the following steps:
109. Acquire a video frame from the ranging device again.
110. Calculate, according to the machine vision algorithm, the center point imaging coordinates of the target object in the video frame.
111. Calculate the rotation angle of the pan-tilt according to the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by the rotation angle and the laser spot of the ranging device falls on the center point of the target object.
112. Obtain the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object.
113. Calculate the spatial coordinates of the center point of the target object according to the second current angle and the laser distance.
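Step 113 is the geometric core of the precise alignment phase. The Python sketch below shows one possible reading of it, assuming the ranging device sits at the origin of the site coordinate frame with pan measured from the x-axis and tilt from the horizon; in a real deployment the device's surveyed position and heading would be added, and the patent itself does not prescribe this particular formulation.

import math

def target_center_coordinates(pan_deg, tilt_deg, laser_distance_m):
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    # Spherical -> Cartesian using the measured laser distance.
    x = laser_distance_m * math.cos(tilt) * math.cos(pan)
    y = laser_distance_m * math.cos(tilt) * math.sin(pan)
    z = laser_distance_m * math.sin(tilt)
    return x, y, z

# Example: pan 42.3 deg, tilt -3.1 deg, laser distance 57.62 m.
print(target_center_coordinates(42.3, -3.1, 57.62))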
In this embodiment, optionally, the marker is one of a surveyor, a reflective vest, and a balloon.
In this embodiment, optionally, the marker may also be the measured object itself, or a specific gesture or body posture of the surveyor.
In this embodiment, a specific instruction sent by a mobile smart terminal or another device may serve as an activation instruction.
In this embodiment, powering on the terminal device, or the computing unit establishing a connection with the ranging device over a wireless network, may also serve as an activation instruction.
In this embodiment, the terminal device recognizing a preset specific object, gesture, body posture, or lighting change (such as a strobe) in the video frame may also serve as an activation instruction.
In this embodiment, optionally, the standby time of the terminal device reaching a preset time threshold may also serve as an activation instruction.
It should be noted that a computing unit is installed in the terminal device, and the computing unit is used to execute the machine vision-based remote stakeout method disclosed in the embodiments of the present application.
In this embodiment, the ranging device is mounted on the pan-tilt and rotates together with the pan-tilt.
In this embodiment, the mobile smart terminal may be a mobile phone, or another mobile communication terminal such as a PAD or a notebook computer, which is not limited in this embodiment.
In this embodiment, the terminal device can communicate remotely with the mobile smart terminal over a wireless network. Optionally, the surveyor can send an activation instruction to the terminal device through the mobile smart terminal, and the mobile smart terminal can display the measurement results to the surveyor or feed back interaction results to the surveyor through voice prompts.
In this embodiment, the marker may be the measured object itself or another object, such as a vest decorated with a specific pattern; optionally, the marker may also be a specific gesture or body posture of the surveyor.
In this embodiment, the first activation instruction, the second activation instruction, and the third activation instruction are named differently only to facilitate describing the instructions that the surveyor inputs to the terminal device at different stages.
In this embodiment, the terminal device can be divided, according to what it is executing, into a standby state, a target detection state, a target tracking state, and a precise alignment state, and transitions between the states are triggered by specified conditions. It should be noted that this division of states is intended to help the surveyor intuitively understand the operating state of the terminal device, rather than to strictly tie a particular step of the terminal device to a particular state.
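The state division described above can be summarized as a small state machine. The Python sketch below is illustrative only: the event names paraphrase the activation conditions mentioned in this document, and in practice any of the optional activation instructions listed earlier could trigger a transition.

from enum import Enum, auto

class DeviceState(Enum):
    STANDBY = auto()
    DETECTION = auto()
    TRACKING = auto()
    ALIGNMENT = auto()

# (current state, event) -> next state
TRANSITIONS = {
    (DeviceState.STANDBY,   "third_activation"):  DeviceState.DETECTION,
    (DeviceState.DETECTION, "marker_found"):      DeviceState.TRACKING,
    (DeviceState.DETECTION, "first_activation"):  DeviceState.TRACKING,
    (DeviceState.TRACKING,  "second_activation"): DeviceState.ALIGNMENT,
}

def next_state(state, event):
    # Stay in the current state if the event is not a recognized trigger.
    return TRANSITIONS.get((state, event), state)

state = DeviceState.STANDBY
for event in ("third_activation", "marker_found", "second_activation"):
    state = next_state(state, event)
    print(event, "->", state.name)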
It can be seen that, in the machine vision-based remote stakeout method of this embodiment, the terminal device receives the first activation instruction and acquires a video frame from the ranging device, then detects, according to the machine vision algorithm, whether the video frame contains a marker. When the video frame is detected to contain a marker, the imaging width and imaging height of the marker in the video frame and the first center point imaging coordinates of the marker in the video frame are calculated; finally, the first current angle of the pan-tilt is determined from the first center point imaging coordinates, the imaging size of the marker is determined from the imaging width and imaging height, and the spatial coordinates of the marker are calculated from the first current angle of the pan-tilt, the current camera magnification, and the imaging size of the marker, thereby completing tracking of the marker.
In addition, once tracking of the marker is complete, the terminal device can enter the precise alignment state. In this state, the terminal device again acquires a video frame from the ranging device, calculates the center point imaging coordinates of the target object in the video frame according to the machine vision algorithm, and calculates the rotation angle of the pan-tilt from the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by that angle and the laser spot of the ranging device falls on the center point of the target object. The terminal device then obtains the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object, and calculates the spatial coordinates of the center point of the target object from the second current angle and the laser distance. In this way, the terminal device can use the target object to complete precise positioning.
Compared with manual stakeout in the art, this embodiment uses a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning based on the target object, so the surveyor no longer needs to move back and forth repeatedly and little depends on the surveyor's measurement skill, which reduces the influence of operator error on measurement accuracy; this embodiment therefore offers high measurement efficiency and high measurement accuracy. At the same time, because the target object itself is not conspicuous enough for the machine vision algorithm to track reliably, this embodiment achieves stable tracking through an easily recognizable marker. In addition, compared with automatic stakeout equipment in the art, this embodiment has the advantage of low cost.
In this embodiment, as an optional implementation, determining the first current angle of the pan-tilt according to the first center point imaging coordinates includes the following sub-steps:
comparing the first center point imaging coordinates with the center point coordinates of the video frame to obtain the pixel difference between the two;
calculating, according to the pixel difference, the horizontal angle and the vertical angle by which the pan-tilt needs to rotate;
driving the pan-tilt to rotate according to the horizontal angle and the vertical angle, so that the center point of the video frame is aligned with the center point of the marker, and taking the angle of the pan-tilt after rotation as the first current angle of the pan-tilt.
In this optional implementation, the pixel difference between the first center point imaging coordinates and the center point coordinates of the video frame is obtained by comparing the two, the horizontal and vertical rotation angles required of the pan-tilt are then calculated from the pixel difference, and finally the pan-tilt is driven to rotate by these angles, so that the center point of the video frame is aligned with the center point of the marker and the first current angle of the pan-tilt is obtained.
In this embodiment, as an optional implementation, determining the imaging size of the marker according to the imaging width and the imaging height includes the following sub-steps:
comparing the imaging width and the imaging height with a preset width interval and a preset height interval, respectively, to obtain a comparison result;
adjusting the camera magnification of the ranging device according to the comparison result, so that the imaging width and the imaging height satisfy preset conditions after the video frame is adjusted;
determining the imaging size of the marker according to the imaging width and the imaging height that satisfy the preset conditions.
In this optional implementation, the imaging width and the imaging height are compared with the preset width interval and the preset height interval, respectively, to obtain a comparison result; the camera magnification of the ranging device is then adjusted according to the comparison result so that, after the video frame is adjusted, the imaging width and the imaging height satisfy the preset conditions; finally, the imaging size of the marker can be determined from the imaging width and the imaging height that satisfy the preset conditions.
In this embodiment, as an optional implementation, adjusting the camera magnification of the ranging device according to the comparison result so that the imaging width and the imaging height satisfy the preset conditions after the video frame is adjusted includes the following sub-steps:
when the imaging width and the imaging height are smaller than the preset width interval and the preset height interval, respectively, calculating a camera magnification and controlling the camera of the ranging device to zoom accordingly so as to enlarge the video frame;
when the imaging width and the imaging height are larger than the preset width interval and the preset height interval, respectively, calculating a camera magnification and controlling the camera to zoom accordingly so as to shrink the video frame.
In this optional implementation, zooming the video frame in or out allows the imaging width and the imaging height to satisfy the preset conditions.
In this embodiment, as an optional implementation, after step 107 (calculating the spatial coordinates of the marker according to the first current angle of the pan-tilt, the current magnification of the camera of the ranging device, and the imaging size of the marker), the method of this embodiment further includes the following steps:
calculating the difference between the spatial coordinates of the marker and the spatial coordinates of the target point;
generating travel direction information according to the difference between the spatial coordinates of the marker and the spatial coordinates of the target point, so as to prompt the user with the travel direction information.
In this optional implementation, travel direction information can be generated according to the difference between the spatial coordinates of the marker and the spatial coordinates of the target point, and the travel direction information can then be prompted to the user.
In this embodiment, as an optional implementation, before the first activation instruction is received and the target detection state is switched to the target tracking state, the method further includes the following steps:
receiving a third activation instruction and switching from the standby state to the target detection state, wherein in the target detection state the terminal device performs the following:
acquiring a video frame from the ranging device and detecting whether a marker is present in the video frame, and if so, entering the target tracking state.
In this optional implementation, the video frame is first checked for the presence of a marker; if a marker is present, the terminal device can enter the target tracking state, and if not, it can enter the standby state, which reduces the power consumption of the terminal device.
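A minimal sketch of this detection loop is given below. The stand-ins get_frame and contains_marker represent the video source and the machine vision detector, and the idle-frame threshold is an assumed parameter, not something specified in the patent.

def run_detection_state(get_frame, contains_marker, max_idle_frames=300):
    for _ in range(max_idle_frames):
        frame = get_frame()
        if contains_marker(frame):
            return "TRACKING"     # marker found: switch to target tracking
    return "STANDBY"              # nothing found: save power in standby

# Toy usage with dummy stand-ins for the video source and detector.
frames = iter([None, None, "marker"])
print(run_detection_state(lambda: next(frames), lambda f: f == "marker"))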
Referring to FIG. 2, FIG. 2 is a schematic structural diagram of a machine vision-based remote stakeout apparatus disclosed in an embodiment of the present application; the apparatus is applied to a terminal device. As shown in FIG. 2, the apparatus includes:
a receiving module 201, configured to receive a first activation instruction;
a state switching module 202, configured to switch from the target detection state to the target tracking state;
an acquisition module 203, configured to acquire a video frame from the ranging device;
a recognition module 204, configured to detect, according to a machine vision algorithm, whether the video frame contains a marker;
a first calculation module 205, configured to calculate, when the recognition module detects that the video frame contains a marker, the imaging width and imaging height of the marker in the video frame and the first center point imaging coordinates of the marker in the video frame;
a first determination module 206, configured to determine the first current angle of the pan-tilt according to the first center point imaging coordinates;
a second determination module 207, configured to determine the imaging size of the marker according to the imaging width and the imaging height;
a second calculation module 208, configured to calculate the spatial coordinates of the marker according to the first current angle of the pan-tilt, the current camera magnification, and the imaging size of the marker;
the receiving module 201 is further configured to receive a second activation instruction;
the state switching module 202 is further configured to switch from the target tracking state to the precise alignment state; in the precise alignment state:
the acquisition module 203 is further configured to acquire a video frame from the ranging device again;
the first calculation module 205 is further configured to calculate, according to the machine vision algorithm, the center point imaging coordinates of the target object in the video frame;
the first determination module 206 is further configured to calculate the rotation angle of the pan-tilt according to the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by the rotation angle and the laser spot of the ranging device falls on the center point of the target object;
the acquisition module 203 is further configured to obtain the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target object;
the second calculation module 208 is further configured to calculate the spatial coordinates of the center point of the target object according to the second current angle and the laser distance.
在本申请实施例中,可选地,标识物为测量操作人员、反光背心、气球中的一种。In the embodiment of the present application, optionally, the marker is one of a measurement operator, a reflective vest, and a balloon.
在本申请实施例中,可选地,标识物还可以是被测物体本身,也可以是测量操作人员的特定的手势或人体姿态。In this embodiment of the present application, optionally, the identifier may also be the object to be measured itself, or may be a specific gesture or human body posture of the measurement operator.
在本申请实施例中,移动智能终端或其他设备发送的特定指令可作为激活指令。In this embodiment of the present application, a specific instruction sent by a mobile smart terminal or other device may be used as an activation instruction.
在本申请实施例中,终端设备的通电、或计算单元通过无线网络和测距设备建立起连接也可作为激活指令。In this embodiment of the present application, the power-on of the terminal device or the establishment of a connection between the computing unit and the ranging device through the wireless network can also be used as an activation instruction.
在本申请实施例中,终端设备在视频画面中识别到预设的特定物体、手势、人体姿态、光照变化(如频闪),也可作为激活指令。In the embodiment of the present application, the terminal device recognizes a preset specific object, gesture, human body posture, and illumination change (such as strobe) in the video picture, which can also be used as an activation instruction.
在本申请实施例中,可选地,当终端设备的待机时间达到预设时间阈值,也可作为激活指令。In this embodiment of the present application, optionally, when the standby time of the terminal device reaches a preset time threshold, it can also be used as an activation instruction.
It should be noted that a computing unit is installed in the terminal device, and the computing unit is configured to execute the machine vision-based remote stakeout method disclosed in the embodiments of the present application.
The ranging device is mounted on the pan-tilt and rotates together with the pan-tilt.
The mobile smart terminal may be a mobile phone, or another mobile communication terminal such as a tablet or a notebook computer, which is not limited in this embodiment of the present application.
The terminal device may communicate remotely with the mobile smart terminal over a wireless network. Optionally, the surveyor may send an activation instruction to the terminal device through the mobile smart terminal, and the mobile smart terminal may display the measurement results to the surveyor or feed back interaction results by voice prompt.
The naming distinction among the first activation instruction, the second activation instruction, and the third activation instruction is made only for ease of describing the instructions that the surveyor inputs to the terminal device at different stages.
According to the content it executes, the terminal device can be divided into a standby state, a target detection state, a target tracking state, and a precise alignment state, and transitions between these states occur under specified conditions. It should be noted that the division into states is intended to help the surveyor intuitively understand the operating state of the terminal device, rather than to strictly confine a given step of the terminal device to a particular state.
It can be seen that, by executing the machine vision-based remote stakeout method, the machine vision-based remote stakeout apparatus of this embodiment can, upon receiving the first activation instruction, acquire a video frame from the ranging device and detect, according to a machine vision algorithm, whether the video frame contains a marker. When a marker is detected, it calculates the imaging width and imaging height of the marker in the video frame and the first center point imaging coordinates of the marker in the video frame, then determines the first current angle of the pan-tilt from the first center point imaging coordinates, determines the imaging size of the marker from the imaging width and imaging height, and calculates the spatial coordinates of the marker from the first current angle of the pan-tilt, the current magnification of the camera of the ranging device, and the imaging size of the marker, thereby completing the tracking of the marker.
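For illustration only (not part of the original disclosure): the patent states only that the marker's spatial coordinates follow from the pan-tilt angle, the camera magnification, and the imaging size, without giving the formula. A minimal sketch of one way such a range estimate could be obtained is shown below, assuming a pinhole camera model, a known physical marker height, and an effective focal length that scales linearly with zoom; every name and assumption here is hypothetical.

```python
def estimate_marker_distance(imaging_height_px: float,
                             marker_height_m: float,
                             base_focal_px: float,
                             zoom_ratio: float) -> float:
    """Rough pinhole-model range estimate from the marker's imaging size.

    base_focal_px is the focal length in pixels at 1x zoom; the effective
    focal length is assumed to scale linearly with the current zoom ratio.
    """
    focal_px = base_focal_px * zoom_ratio
    return marker_height_m * focal_px / imaging_height_px
```

The estimated distance could then be combined with the first current angle of the pan-tilt (as in the spherical-to-Cartesian sketch above) to obtain the marker's spatial coordinates.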
In addition, after the tracking of the marker is completed, the terminal device can enter the precise alignment state. In this state, the terminal device again acquires a video frame from the ranging device, calculates the center point imaging coordinates of the target in the video frame according to a machine vision algorithm, and calculates the rotation angle of the pan-tilt according to the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by the rotation angle and the laser spot of the ranging device lands on the center point of the target object. The terminal device then acquires the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target, and calculates the spatial coordinates of the center point of the target from the second current angle and the laser distance. In this way, the terminal device can complete precise positioning with respect to the target.
Compared with manual stakeout in the art, this embodiment uses a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning with respect to the target, so the surveyor does not need to run back and forth repeatedly, reliance on the surveyor's measurement skill is low, and the influence of operator error on measurement accuracy is reduced. This embodiment therefore offers high measurement efficiency and high measurement accuracy. At the same time, because the target itself is not conspicuous enough for a machine vision algorithm to track reliably, this embodiment achieves stable tracking through an easily recognizable marker. In addition, compared with automatic stakeout equipment in the art, this embodiment has the advantage of low cost.
In this embodiment of the present application, as an optional implementation, the first determination module 206 determines the first current angle of the pan-tilt according to the first center point imaging coordinates as follows:
comparing the first center point imaging coordinates with the center point coordinates of the video frame to obtain the pixel difference between them;
calculating, from the pixel difference, the horizontal angle and vertical angle by which the pan-tilt needs to rotate;
driving the pan-tilt to rotate by the horizontal angle and the vertical angle, so that the center point of the video frame is aligned with the center point of the marker, and taking the angle of the pan-tilt after rotation as the first current angle of the pan-tilt.
In this optional implementation, by comparing the first center point imaging coordinates with the center point coordinates of the video frame to obtain the pixel difference between them, calculating from the pixel difference the horizontal and vertical angles by which the pan-tilt needs to rotate, and finally driving the pan-tilt to rotate by those angles, the center point of the video frame can be aligned with the center point of the marker and the first current angle of the pan-tilt can be obtained.
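For illustration only (not part of the original disclosure): a minimal sketch of converting the pixel difference into pan and tilt corrections, assuming the camera's field of view at the current zoom is known and a simple linear pixels-to-degrees mapping, which is adequate for small corrections only. Names and sign conventions are assumptions.

```python
def pan_tilt_correction(center_px, frame_size, fov_deg):
    """Angles (pan, tilt) that bring the marker center to the frame center.

    center_px  -- (u, v) imaging coordinates of the marker's first center point
    frame_size -- (width, height) of the video frame in pixels
    fov_deg    -- (horizontal, vertical) field of view at the current zoom
    Sign conventions depend on the camera and pan-tilt mounting and may need flipping.
    """
    du = center_px[0] - frame_size[0] / 2.0   # horizontal pixel difference
    dv = center_px[1] - frame_size[1] / 2.0   # vertical pixel difference
    pan = du * fov_deg[0] / frame_size[0]
    tilt = dv * fov_deg[1] / frame_size[1]
    return pan, tilt
```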
In this embodiment of the present application, as an optional implementation, the second determination module 207 determines the imaging size of the marker according to the imaging width and imaging height as follows:
comparing the imaging width and imaging height with a preset width interval and a preset height interval, respectively, to obtain a comparison result;
adjusting the camera magnification of the ranging device according to the comparison result, so that by adjusting the video frame the imaging width and imaging height satisfy preset conditions;
determining the imaging size of the marker according to the imaging width and imaging height that satisfy the preset conditions.
In this optional implementation, the imaging width and imaging height are compared with the preset width interval and preset height interval to obtain a comparison result, the camera magnification of the ranging device is then adjusted according to the comparison result so that the imaging width and imaging height satisfy the preset conditions, and finally the imaging size of the marker can be determined from the imaging width and imaging height that satisfy those conditions.
In this embodiment of the present application, as an optional implementation, the second determination module 207 adjusts the camera magnification of the ranging device according to the comparison result, so that the imaging width and imaging height satisfy the preset conditions, as follows:
when the imaging width and imaging height are smaller than the preset width interval and preset height interval, respectively, calculating a camera magnification and controlling the camera of the ranging device to zoom in, so as to enlarge the video frame;
when the imaging width and imaging height are larger than the preset width interval and preset height interval, respectively, calculating a camera magnification and controlling the camera to zoom out, so as to shrink the video frame.
In this optional implementation, zooming the video frame in or out allows the imaging width and imaging height to satisfy the preset conditions.
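For illustration only (not part of the original disclosure): a minimal sketch of one zoom-adjustment step that follows the comparison logic described above. The fixed step size and the minimum zoom of 1x are illustrative assumptions; the patent leaves the magnification calculation unspecified.

```python
def adjust_zoom(imaging_w, imaging_h, width_range, height_range, zoom, step=0.2):
    """One zoom-adjustment step based on the comparison with the preset intervals."""
    if imaging_w < width_range[0] and imaging_h < height_range[0]:
        return zoom + step            # marker images too small: zoom in
    if imaging_w > width_range[1] and imaging_h > height_range[1]:
        return max(1.0, zoom - step)  # marker images too large: zoom out
    return zoom                       # both dimensions already inside the intervals
```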
In this embodiment of the present application, as an optional implementation, the apparatus further includes a third calculation module and a generation module, wherein:
the third calculation module is configured to calculate the difference between the spatial coordinates of the marker and the spatial coordinates of the target point;
the generation module is configured to generate travel direction information according to the difference between the spatial coordinates of the marker and the spatial coordinates of the target point, so as to prompt the user with the travel direction information.
In this optional implementation, travel direction information can be generated from the difference between the spatial coordinates of the marker and the spatial coordinates of the target point, and the travel direction information can then be prompted to the user.
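For illustration only (not part of the original disclosure): a minimal sketch of turning the coordinate difference into a human-readable prompt. The mapping of the x, y, z axes to east/north/up and the 5 cm tolerance are assumptions made purely for the example; the patent does not fix a site coordinate convention.

```python
def travel_hint(marker_xyz, target_xyz, tol_m=0.05):
    """Turn the coordinate difference into a simple east/north/up style prompt."""
    labels = (("west", "east"), ("south", "north"), ("down", "up"))  # assumed axis mapping
    hints = []
    for (neg, pos), m, t in zip(labels, marker_xyz, target_xyz):
        delta = t - m
        if abs(delta) > tol_m:
            hints.append(f"move {pos if delta > 0 else neg} {abs(delta):.2f} m")
    return ", ".join(hints) or "on target"
```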
In this embodiment of the present application, as an optional implementation, the receiving module 201 is further configured to receive a third activation instruction, and the state switching module 202 is further configured to switch from the standby state to the target detection state. In the target detection state, the terminal device executes the following:
the acquisition module 203 is further configured to acquire a video frame from the ranging device, the recognition module 204 is further configured to detect whether a marker is present in the video frame, and if so, the state switching module 202 controls entry into the target tracking state.
In this optional implementation, the video frame is first checked for a marker; if one is present, the terminal device can enter the target tracking state, and if not, it can return to the standby state, thereby reducing the power consumption of the terminal device.
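For illustration only (not part of the original disclosure): a minimal sketch of the four states and the transitions described in this embodiment. The event names ("third_activation", "first_activation", etc.) are labels chosen for the example; the patent describes the triggers in prose only.

```python
from enum import Enum, auto

class State(Enum):
    STANDBY = auto()
    TARGET_DETECTION = auto()
    TARGET_TRACKING = auto()
    PRECISE_ALIGNMENT = auto()

# Transition table matching the states described above; event names are illustrative.
TRANSITIONS = {
    (State.STANDBY, "third_activation"): State.TARGET_DETECTION,
    (State.TARGET_DETECTION, "first_activation"): State.TARGET_TRACKING,
    (State.TARGET_DETECTION, "no_marker"): State.STANDBY,
    (State.TARGET_TRACKING, "second_activation"): State.PRECISE_ALIGNMENT,
}

def next_state(state: State, event: str) -> State:
    """Return the next state, or stay put if the event does not apply."""
    return TRANSITIONS.get((state, event), state)
```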
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of a terminal device disclosed in an embodiment of the present application. As shown in FIG. 3, the terminal device includes:
a memory 301 storing executable program code; and
a processor 302 coupled to the memory 301;
wherein the processor 302 invokes the executable program code stored in the memory 301 to execute the machine vision-based remote stakeout method disclosed in the embodiments of the present application.
By executing the machine vision-based remote stakeout method, the terminal device of the present application can, upon receiving the first activation instruction, acquire a video frame from the ranging device and detect, according to a machine vision algorithm, whether the video frame contains a marker. When a marker is detected, it calculates the imaging width and imaging height of the marker in the video frame and the first center point imaging coordinates of the marker in the video frame, then determines the first current angle of the pan-tilt from the first center point imaging coordinates, determines the imaging size of the marker from the imaging width and imaging height, and calculates the spatial coordinates of the marker from the first current angle of the pan-tilt, the current camera magnification, and the imaging size of the marker, thereby completing the tracking of the marker.
In addition, after the tracking of the marker is completed, the terminal device can enter the precise alignment state. In this state, the terminal device again acquires a video frame from the ranging device, calculates the center point imaging coordinates of the target in the video frame according to a machine vision algorithm, and calculates the rotation angle of the pan-tilt according to the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by the rotation angle and the laser spot of the ranging device lands on the center point of the target object. The terminal device then acquires the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target, and calculates the spatial coordinates of the center point of the target from the second current angle and the laser distance. In this way, the terminal device can complete precise positioning with respect to the target.
Compared with manual stakeout in the art, this embodiment uses a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning with respect to the target, so the surveyor does not need to run back and forth repeatedly, reliance on the surveyor's measurement skill is low, and the influence of operator error on measurement accuracy is reduced. This embodiment therefore offers high measurement efficiency and high measurement accuracy. At the same time, because the target itself is not conspicuous enough for a machine vision algorithm to track reliably, this embodiment achieves stable tracking through an easily recognizable marker. In addition, compared with automatic stakeout equipment in the art, this embodiment has the advantage of low cost.
An embodiment of the present application discloses a storage medium storing computer instructions which, when invoked, are used to execute the machine vision-based remote stakeout method disclosed in the embodiments of the present application.
Through the machine vision-based remote stakeout method, the storage medium of the present application enables a device to, upon receiving the first activation instruction, acquire a video frame from the ranging device and detect, according to a machine vision algorithm, whether the video frame contains a marker. When a marker is detected, the imaging width and imaging height of the marker in the video frame and the first center point imaging coordinates of the marker in the video frame are calculated, the first current angle of the pan-tilt is determined from the first center point imaging coordinates, the imaging size of the marker is determined from the imaging width and imaging height, and the spatial coordinates of the marker are calculated from the first current angle of the pan-tilt, the current camera magnification, and the imaging size of the marker, thereby completing the tracking of the marker.
In addition, after the tracking of the marker is completed, the terminal device can enter the precise alignment state. In this state, the terminal device again acquires a video frame from the ranging device, calculates the center point imaging coordinates of the target in the video frame according to a machine vision algorithm, and calculates the rotation angle of the pan-tilt according to the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by the rotation angle and the laser spot of the ranging device lands on the center point of the target object. The terminal device then acquires the second current angle of the pan-tilt after rotation and the laser distance between the ranging device and the target, and calculates the spatial coordinates of the center point of the target from the second current angle and the laser distance. In this way, the terminal device can complete precise positioning with respect to the target.
Compared with manual stakeout in the art, this embodiment uses a machine vision algorithm and a ranging device to automatically track the marker and complete precise positioning with respect to the target, so the surveyor does not need to run back and forth repeatedly, reliance on the surveyor's measurement skill is low, and the influence of operator error on measurement accuracy is reduced. This embodiment therefore offers high measurement efficiency and high measurement accuracy. At the same time, because the target itself is not conspicuous enough for a machine vision algorithm to track reliably, this embodiment achieves stable tracking through an easily recognizable marker. In addition, compared with automatic stakeout equipment in the art, this embodiment has the advantage of low cost.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into units is only a logical functional division; in actual implementation there may be other ways of dividing them. For another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
In addition, units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
Furthermore, the functional modules in the embodiments of the present application may be integrated to form an independent part, each module may exist separately, or two or more modules may be integrated to form an independent part.
It should be noted that, if the functions are implemented in the form of software functional modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present application, in essence, or the part contributing to the art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
In this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations.
The above descriptions are merely embodiments of the present application and are not intended to limit its scope of protection. For those skilled in the art, various modifications and variations may be made to the present application. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall fall within its scope of protection.
Industrial Applicability
The present application provides a machine vision-based remote stakeout method, apparatus, terminal device, and storage medium. A machine vision algorithm and a ranging device are used to automatically track the marker and complete precise positioning with respect to the target, which reduces the influence of operator error on measurement accuracy and offers high measurement efficiency, high measurement accuracy, and low cost.

Claims (15)

  1. A machine vision-based remote stakeout method, characterized in that the method is applied to a terminal device, and the method comprises:
    receiving a first activation instruction, and switching from a target detection state to a target tracking state, wherein in the target tracking state the terminal device executes:
    acquiring a video frame from a ranging device;
    detecting, according to a machine vision algorithm, whether the video frame contains a marker;
    when it is detected that the video frame contains a marker, calculating an imaging width and an imaging height of the marker in the video frame and first center point imaging coordinates of the marker in the video frame;
    determining a first current angle of a pan-tilt according to the first center point imaging coordinates;
    determining an imaging size of the marker according to the imaging width and the imaging height;
    calculating spatial coordinates of the marker according to the first current angle of the pan-tilt, a current magnification of a camera of the ranging device, and the imaging size of the marker;
    receiving a second activation instruction, and switching from the target tracking state to a precise alignment state, wherein in the precise alignment state the terminal device executes:
    acquiring a video frame from the ranging device again;
    calculating, according to a machine vision algorithm, center point imaging coordinates of a target in the video frame;
    calculating a rotation angle of the pan-tilt according to the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by the rotation angle and a laser spot of the ranging device lands on the center point of the target object;
    acquiring a second current angle of the pan-tilt after rotation and a laser distance between the ranging device and the target;
    calculating spatial coordinates of the center point of the target according to the second current angle and the laser distance.
  2. The method according to claim 1, characterized in that the marker is one of a measurement operator, a reflective vest, or a balloon.
  3. The method according to claim 1, characterized in that the marker is the measured object itself, or a specific gesture or body posture of the measurement operator.
  4. The method according to any one of claims 1-3, characterized in that the activation instruction is a specific instruction sent by a mobile smart terminal or another device.
  5. The method according to any one of claims 1-3, characterized in that the activation instruction is the powering-on of the terminal device, or the establishment of a connection between a computing unit and the ranging device over a wireless network.
  6. The method according to any one of claims 1-3, characterized in that the activation instruction is the terminal device recognizing a preset specific object, gesture, body posture, or illumination change in the video frame.
  7. The method according to any one of claims 1-3, characterized in that the activation instruction is the standby time of the terminal device reaching a preset time threshold.
  8. The method according to any one of claims 1-7, characterized in that determining the first current angle of the pan-tilt according to the first center point imaging coordinates comprises:
    comparing the first center point imaging coordinates with the center point coordinates of the video frame to obtain a pixel difference between the first center point imaging coordinates and the center point coordinates of the video frame;
    calculating, from the pixel difference, a horizontal angle and a vertical angle by which the pan-tilt needs to rotate;
    driving the pan-tilt to rotate by the horizontal angle and the vertical angle, so that the center point of the video frame is aligned with the center point of the marker, and taking the angle of the pan-tilt after rotation as the first current angle of the pan-tilt.
  9. The method according to any one of claims 1-7, characterized in that determining the imaging size of the marker according to the imaging width and the imaging height comprises:
    comparing the imaging width and the imaging height with a preset width interval and a preset height interval, respectively, to obtain a comparison result;
    adjusting a camera magnification of the ranging device according to the comparison result, so that by adjusting the video frame the imaging width of the marker and the imaging height of the marker satisfy preset conditions;
    determining the imaging size of the marker according to the imaging width and the imaging height that satisfy the preset conditions.
  10. The method according to claim 9, characterized in that adjusting the camera magnification of the ranging device according to the comparison result, so that by adjusting the video frame the imaging width and the imaging height satisfy the preset conditions, comprises:
    when the imaging width and the imaging height are smaller than the preset width interval and the preset height interval, respectively, calculating a camera magnification and controlling the camera of the ranging device to zoom in, so as to enlarge the video frame;
    when the imaging width and the imaging height are larger than the preset width interval and the preset height interval, respectively, calculating a camera magnification and controlling the camera to zoom out, so as to shrink the video frame.
  11. The method according to any one of claims 1-7, characterized in that, after calculating the spatial coordinates of the marker according to the first current angle of the pan-tilt, the current magnification of the camera, and the imaging size of the marker, the method further comprises:
    calculating a difference between the spatial coordinates of the marker and spatial coordinates of a target point;
    generating travel direction information according to the difference between the spatial coordinates of the marker and the spatial coordinates of the target point, so as to prompt a user with the travel direction information.
  12. The method according to any one of claims 1-7, characterized in that, before receiving the first activation instruction and switching from the target detection state to the target tracking state, the method further comprises:
    receiving a third activation instruction, and switching from a standby state to the target detection state, wherein in the target detection state the terminal device executes:
    acquiring a video frame from the ranging device, detecting whether the marker is present in the video frame, and if so, entering the target tracking state.
  13. A machine vision-based remote stakeout apparatus, characterized in that the apparatus is applied to a terminal device, and the apparatus comprises:
    a receiving module, configured to receive a first activation instruction;
    a state switching module, configured to switch from a target detection state to a target tracking state;
    an acquisition module, configured to acquire a video frame from a ranging device;
    a recognition module, configured to detect, according to a machine vision algorithm, whether the video frame contains a marker;
    a first calculation module, configured to, when the recognition module detects that the video frame contains a marker, calculate an imaging width and an imaging height of the marker in the video frame and first center point imaging coordinates of the marker in the video frame;
    a first determination module, configured to determine a first current angle of a pan-tilt according to the first center point imaging coordinates;
    a second determination module, configured to determine an imaging size of the marker according to the imaging width and the imaging height;
    a second calculation module, configured to calculate spatial coordinates of the marker according to the first current angle of the pan-tilt, a current magnification of a camera, and the imaging size of the marker;
    wherein the receiving module is further configured to receive a second activation instruction;
    the state switching module is further configured to switch from the target tracking state to a precise alignment state, wherein in the precise alignment state:
    the acquisition module is further configured to acquire a video frame from the ranging device again;
    the first calculation module is further configured to calculate, according to a machine vision algorithm, center point imaging coordinates of a target in the video frame;
    the first determination module is further configured to calculate a rotation angle of the pan-tilt according to the center point imaging coordinates and preset device parameters, so that the pan-tilt rotates by the rotation angle and a laser spot of the ranging device lands on the center point of the target object;
    the acquisition module is further configured to acquire a second current angle of the pan-tilt after rotation and a laser distance between the ranging device and the target;
    the second calculation module is further configured to calculate spatial coordinates of the center point of the target according to the second current angle and the laser distance.
  14. A terminal device, characterized in that the terminal device comprises:
    a memory storing executable program code; and
    a processor coupled to the memory;
    wherein the processor invokes the executable program code stored in the memory to execute the machine vision-based remote stakeout method according to any one of claims 1-12.
  15. A storage medium, characterized in that the storage medium stores computer instructions which, when invoked, are used to execute the machine vision-based remote stakeout method according to any one of claims 1-12.
PCT/CN2021/081145 2020-06-30 2021-03-16 Method and apparatus for remote setting-out based on machine vision, and terminal device and storage medium WO2022001193A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010622000.0A CN111783659B (en) 2020-06-30 2020-06-30 Remote lofting method and device based on machine vision, terminal equipment and storage medium
CN202010622000.0 2020-06-30

Publications (1)

Publication Number Publication Date
WO2022001193A1 true WO2022001193A1 (en) 2022-01-06

Family

ID=72760547

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/081145 WO2022001193A1 (en) 2020-06-30 2021-03-16 Method and apparatus for remote setting-out based on machine vision, and terminal device and storage medium

Country Status (2)

Country Link
CN (1) CN111783659B (en)
WO (1) WO2022001193A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116506735A (en) * 2023-06-21 2023-07-28 清华大学 Universal camera interference method and system based on active vision camera

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783659B (en) * 2020-06-30 2023-10-20 福建汇川物联网技术科技股份有限公司 Remote lofting method and device based on machine vision, terminal equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102445183A (en) * 2011-10-09 2012-05-09 福建汇川数码技术科技有限公司 Apparatus of ranging laser point of remote ranging system and positioning method based on paralleling of laser and camera
US20190096080A1 (en) * 2017-08-25 2019-03-28 Maker Trading Pte Ltd Machine vision system and method for identifying locations of target elements
CN110332854A (en) * 2019-07-25 2019-10-15 深圳市恒天伟焱科技有限公司 Localization method, gun sight and the computer readable storage medium of object
CN111783659A (en) * 2020-06-30 2020-10-16 福建汇川物联网技术科技股份有限公司 Machine vision-based remote lofting method and device, terminal equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110956642A (en) * 2019-12-03 2020-04-03 深圳市未来感知科技有限公司 Multi-target tracking identification method, terminal and readable storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102445183A (en) * 2011-10-09 2012-05-09 福建汇川数码技术科技有限公司 Apparatus of ranging laser point of remote ranging system and positioning method based on paralleling of laser and camera
US20190096080A1 (en) * 2017-08-25 2019-03-28 Maker Trading Pte Ltd Machine vision system and method for identifying locations of target elements
CN110332854A (en) * 2019-07-25 2019-10-15 深圳市恒天伟焱科技有限公司 Localization method, gun sight and the computer readable storage medium of object
CN111783659A (en) * 2020-06-30 2020-10-16 福建汇川物联网技术科技股份有限公司 Machine vision-based remote lofting method and device, terminal equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116506735A (en) * 2023-06-21 2023-07-28 清华大学 Universal camera interference method and system based on active vision camera
CN116506735B (en) * 2023-06-21 2023-11-07 清华大学 Universal camera interference method and system based on active vision camera

Also Published As

Publication number Publication date
CN111783659B (en) 2023-10-20
CN111783659A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
US11640235B2 (en) Additional object display method and apparatus, computer device, and storage medium
WO2022001193A1 (en) Method and apparatus for remote setting-out based on machine vision, and terminal device and storage medium
WO2022078467A1 (en) Automatic robot recharging method and apparatus, and robot and storage medium
TWI574223B (en) Navigation system using augmented reality technology
WO2019128109A1 (en) Face tracking based dynamic projection method, device and electronic equipment
US10659753B2 (en) Photogrammetry system and method of operation
US20130208005A1 (en) Image processing device, image processing method, and program
US20210227144A1 (en) Target tracking method and device, movable platform, and storage medium
CN110967014B (en) Machine room indoor navigation and equipment tracking method based on augmented reality technology
WO2014044161A1 (en) Target tracking method and system for intelligent tracking high speed dome camera
CN112509057A (en) Camera external parameter calibration method and device, electronic equipment and computer readable medium
CN108319918B (en) Embedded tracker and target tracking method applied to same
WO2020042968A1 (en) Method for acquiring object information, device, and storage medium
US11801602B2 (en) Mobile robot and driving method thereof
CN105635570B (en) Shooting preview method and system
TW202001892A (en) Indoor positioning system and method based on geomagnetic signals in combination with computer vision
WO2015093130A1 (en) Information processing device, information processing method, and program
CN112422653A (en) Scene information pushing method, system, storage medium and equipment based on location service
KR20210114838A (en) Method and device for detecting body temperature, electronic apparatus and storage medium
KR101103923B1 (en) Camera robot for taking moving picture and method for taking moving picture using camera robot
CN114600162A (en) Scene lock mode for capturing camera images
CN108572734A (en) A kind of gestural control system based on infrared laser associated image
CN107911688A (en) A kind of homework of supplying power scene Synergistic method based on augmented reality device
WO2022052409A1 (en) Automatic control method and system for multi-camera filming
KR101358064B1 (en) Method for remote controlling using user image and system of the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21832390

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21832390

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.06.2023)
