CN110517221B - Gap positioning method and device based on real coordinates and storage medium - Google Patents

Gap positioning method and device based on real coordinates and storage medium

Info

Publication number
CN110517221B
CN110517221B
Authority
CN
China
Prior art keywords
coordinates
gap
coordinate
endpoint
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910605642.7A
Other languages
Chinese (zh)
Other versions
CN110517221A (en)
Inventor
黄永祯
徐栋
于仕琪
王凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongke Shuidi Technology Shenzhen Co ltd
Watrix Technology Beijing Co ltd
Original Assignee
Zhongke Shuidi Technology Shenzhen Co ltd
Watrix Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongke Shuidi Technology Shenzhen Co ltd, Watrix Technology Beijing Co ltd
Priority to CN201910605642.7A
Publication of CN110517221A
Application granted
Publication of CN110517221B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0004 - Industrial image inspection
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application relates to a gap positioning method and apparatus based on real coordinates, a computer device, and a storage medium. The method comprises: acquiring an image containing a laser line and a gap; extracting the image coordinates of the laser line in the image; converting the image coordinates of the laser line into world coordinates in a world coordinate system; inputting the world coordinates to a preset gap endpoint prediction model to obtain target coordinates of an endpoint pair of the gap; and calculating the width information of the gap according to the target coordinates of the endpoint pair. Because the coordinate data of the laser line are predicted by the trained preset gap endpoint prediction model to obtain the target coordinates of the endpoint pair, the gap width is calculated from those coordinates and the corresponding operation can be determined directly from the gap width, the inconvenience of classifying gaps is avoided and convenience of operation is improved.

Description

Gap positioning method and device based on real coordinates and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for positioning a gap based on real coordinates, a computer device, and a storage medium.
Background
Existing gap positioning algorithms need to classify gaps by type, and different gap types require different positioning algorithms, while in practice the variety of gap types is large. To realize automatic welding, gluing and other gap repairing operations for the various types of gaps, the gap types must be defined in advance. Taking welding as an example, before welding a gap it must be classified against the defined gap types; because the set of defined types is limited, an actual gap cannot always be matched to one of them, which makes the welding operation process complex.
Disclosure of Invention
In order to solve the technical problem, the application provides a gap positioning method and device based on real coordinates, computer equipment and a storage medium.
In a first aspect, the present application provides a gap positioning method based on real coordinates, including:
acquiring an image containing a laser line and a gap;
extracting the image coordinates of the laser line in the image, and converting the image coordinates of the laser line into world coordinates in a world coordinate system;
inputting world coordinates to a preset gap endpoint prediction model to obtain target coordinates of an endpoint pair of a gap;
And calculating to obtain the width information of the gap according to the target coordinates of the end point pairs.
In a second aspect, the present application provides a gap positioning apparatus based on real coordinates, including:
the data acquisition module is used for acquiring an image containing a laser line and a gap;
the coordinate conversion module is used for extracting the image coordinates of the laser lines in the image and converting the image coordinates of the laser lines into world coordinates in a world coordinate system;
the prediction module is used for inputting world coordinates to a preset gap endpoint prediction model to obtain target coordinates of an endpoint pair of a gap;
and the width calculation module is used for calculating the width information of the gap according to the target coordinates of the end point pairs.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring an image containing a laser line and a gap;
extracting the image coordinates of the laser line in the image, and converting the image coordinates of the laser line into world coordinates in a world coordinate system;
inputting world coordinates to a preset gap endpoint prediction model to obtain target coordinates of an endpoint pair of a gap;
And calculating to obtain the width information of the gap according to the target coordinates of the end point pairs.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring an image containing a laser line and a gap;
extracting the image coordinates of the laser line in the image, and converting the image coordinates of the laser line into world coordinates in a world coordinate system;
inputting world coordinates to a preset gap endpoint prediction model to obtain target coordinates of an endpoint pair of a gap;
and calculating to obtain the width information of the gap according to the target coordinates of the end point pairs.
The gap positioning method and apparatus, computer device and storage medium based on real coordinates comprise: acquiring an image containing a laser line and a gap; extracting the image coordinates of the laser line in the image and converting them into world coordinates in a world coordinate system; inputting the world coordinates to a preset gap endpoint prediction model to obtain target coordinates of an endpoint pair of the gap; and calculating the width information of the gap according to the target coordinates of the endpoint pair. The coordinate data of the laser line are predicted directly by the trained preset gap endpoint prediction model to obtain the target coordinates of the endpoint pair, the gap width is calculated according to those target coordinates, and gap repairing operations such as automatic welding and gluing can be performed according to the gap width information, so the inconvenience of classifying gaps is avoided and the operation is simple and convenient.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
FIG. 1 is a diagram of an embodiment of an application environment of a gap location method based on real-world coordinates;
FIG. 2 is a schematic flow chart diagram illustrating a gap location method based on real-world coordinates according to an embodiment;
FIG. 3 is a schematic diagram of an image containing a laser line and a gap according to an embodiment;
FIG. 4 is a block diagram of a gap locating device based on real coordinates according to an embodiment;
FIG. 5 is a diagram of the internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is an application environment diagram of a gap positioning method based on real coordinates in an embodiment. Referring to fig. 1, the gap positioning method based on real coordinates is applied to a gap positioning system based on real coordinates. The gap positioning system based on real coordinates comprises a terminal 110 and a server 120, which are connected through a network. The terminal 110 or the server 120 acquires an image containing a laser line and a gap; extracts the image coordinates of the laser line in the image and converts the image coordinates of the laser line into world coordinates in a world coordinate system; inputs the world coordinates to a preset gap endpoint prediction model to obtain target coordinates of an endpoint pair of the gap; and calculates the width information of the gap according to the target coordinates of the endpoint pair.
The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as a stand-alone server or a server cluster composed of a plurality of servers.
In one embodiment, as shown in fig. 2, a gap locating method based on real coordinates is provided. The embodiment is mainly illustrated by applying the method to the terminal 110 (or the server 120) in fig. 1. Referring to fig. 2, the gap positioning method based on real coordinates specifically includes the following steps:
Step S201, an image containing a laser line and a gap is acquired.
Specifically, the image is acquired by an image acquisition device. The image acquisition device may be a 3D line laser camera, a 2D camera, or the like. A 3D line laser camera comprises a laser assembly and a 2D camera assembly: the laser assembly emits the laser, and the camera assembly photographs the scene containing the laser to obtain the image. The gap contained in the image is determined from the laser line, for example from the position of the laser line in the image. When the laser assembly and the 2D camera do not belong to the same device, the laser assembly is used to illuminate the surface of the equipment or building containing the crack, and the 2D camera collects the image containing the laser line and the gap.
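Where an ordinary 2D camera is the acquisition device, grabbing a frame can be as simple as the minimal sketch below; the OpenCV device index 0 and the output file name are illustrative assumptions, and a 3D line laser camera would normally be driven through its vendor SDK rather than this generic interface.

```python
import cv2

# Hypothetical capture of one frame containing the laser line and the gap.
cap = cv2.VideoCapture(0)        # device index 0 is an assumption
ok, frame = cap.read()
cap.release()
if ok:
    cv2.imwrite("laser_gap_frame.png", frame)   # illustrative file name
```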
Referring to FIG. 3, FIG. 3 shows an image containing a laser line and a gap, where 020 denotes the laser line and 040 denotes the gap.
Step S202, extracting the image coordinate of the laser line in the image, and converting the image coordinate of the laser line into the world coordinate in the world coordinate system.
Specifically, the laser line may be extracted with a common straight-line extraction algorithm, for example one based on morphology, the Hough transform, or the wavelet transform. The laser line in the image is extracted by the straight-line extraction algorithm to obtain the image coordinates of the laser line in the image, and the image coordinates are converted into camera coordinates in the three-dimensional camera coordinate system according to the imaging principle of the camera. If the camera coordinate system is defined as the world coordinate system, the camera coordinates are the world coordinates; if another coordinate system is defined as the world coordinate system, the world coordinates corresponding to the camera coordinates are obtained according to the relation between the camera coordinate system and the world coordinate system. The conversion parameters between the image coordinate system and the world coordinate system comprise intrinsic parameters and extrinsic parameters: the intrinsic parameters are the imaging parameters of the image acquisition device, used for coordinate conversion between the image coordinate system and the device coordinate system of the image acquisition device, and the extrinsic parameters are used for coordinate conversion between the device coordinate system and the world coordinate system.
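To make the conversion concrete, the sketch below back-projects laser-line pixels through a pinhole model, recovers depth by intersecting each viewing ray with the laser plane (a common way to use line-laser geometry, assumed here rather than stated in the patent), and then applies the extrinsic transform into the world frame. The intrinsic matrix K, the laser-plane coefficients and the extrinsics R_wc, t_wc are placeholder values standing in for real calibration results.

```python
import numpy as np

# Placeholder calibration values; in practice they come from camera and
# laser-plane calibration, not from the patent text.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 480.0],
              [0.0, 0.0, 1.0]])           # intrinsic (imaging) parameters
plane_n = np.array([0.0, -0.6, 0.8])       # laser plane normal, camera frame
plane_d = 0.35                             # plane offset: plane_n . X = plane_d
R_wc = np.eye(3)                           # extrinsic rotation, camera -> world
t_wc = np.zeros(3)                         # extrinsic translation, camera -> world

def image_to_world(points_uv):
    """Back-project laser-line pixels to world coordinates."""
    pts = np.asarray(points_uv, dtype=float)
    rays = (np.linalg.inv(K) @ np.hstack([pts, np.ones((len(pts), 1))]).T).T
    scale = plane_d / (rays @ plane_n)     # ray / laser-plane intersection
    cam_xyz = rays * scale[:, None]        # camera coordinates
    return cam_xyz @ R_wc.T + t_wc         # world coordinates

laser_uv = [(630, 200), (632, 210), (801, 215)]   # extracted laser-line pixels
world_xyz = image_to_world(laser_uv)
```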
Step S203, inputting world coordinates to a preset gap endpoint prediction model to obtain target coordinates of the endpoint pair of the gap.
Specifically, the preset gap endpoint prediction model may be a random forest model, a support vector machine model, or another common machine learning model which, after training, outputs the predicted coordinates of the endpoint pair for the input world coordinates. The target coordinates of the endpoint pair are the coordinates of the endpoint pair, consisting of the two endpoint coordinates of the gap, obtained by prediction with the preset gap endpoint prediction model.
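A minimal sketch of such a model follows, assuming each laser line is resampled to a fixed number of points and flattened into a feature vector, with the two endpoint coordinates as the regression target; scikit-learn's RandomForestRegressor is used as one possible realization, and the data below are synthetic stand-ins, not real weld-seam scans.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in data: each sample is a laser-line profile resampled to a
# fixed length (flattened x/y/z world coordinates); the target is the endpoint
# pair (x1, y1, z1, x2, y2, z2).
n_samples, n_points = 200, 128
X_train = rng.normal(size=(n_samples, n_points * 3))
y_train = rng.normal(size=(n_samples, 6))

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

profile = rng.normal(size=(1, n_points * 3))          # one new laser-line profile
endpoint_pair = model.predict(profile).reshape(2, 3)  # two predicted 3D endpoints
```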
In one embodiment, generating the preset gap endpoint prediction model comprises: obtaining a plurality of images to be trained containing laser lines, extracting the image coordinates of the laser lines to be trained in the images to be trained, converting the image coordinates of the laser lines to be trained into world coordinates to be trained in the world coordinate system, inputting each group of world coordinates to be trained to an initial gap endpoint prediction model to obtain the predicted coordinates of the endpoint pairs, calculating the difference between the labels of each group of world coordinates to be trained and the predicted coordinates of the corresponding endpoint pairs, and obtaining the preset gap endpoint prediction model when the difference is smaller than the preset difference.
Specifically, the images to be trained are the data used for training the initial gap endpoint prediction model. Each image to be trained contains a laser line; the laser line is extracted by the straight-line extraction algorithm to obtain its position information, and the image coordinates of the laser line are determined according to that position information. The image coordinates of the laser line are converted into the world coordinate system through the conversion relation between the image coordinate system and the world coordinate system, yielding the world coordinates of the laser line, namely the world coordinates to be trained. The world coordinates to be trained carry a label of the coordinates of the endpoint pair to be trained, namely an identifier of the coordinates of the two endpoints of at least one gap in each group of world coordinates to be trained. Each group of world coordinates to be trained is input to the initial gap endpoint prediction model, the predicted coordinates of the endpoint pair are output, and the difference degree between the predicted coordinates of the endpoint pair of each group of world coordinates to be trained and the labelled endpoint coordinates of the corresponding gap is calculated. The difference degree may be computed with a conventional data difference measure, for example directly calculating the difference of the endpoint coordinates, or calculating the ratio of the gap widths, or the logarithm of the difference or of the ratio. The preset difference is a preset threshold used for judging whether the initial gap endpoint prediction model has converged; when the difference degree is smaller than the preset difference, the initial gap endpoint prediction model has converged and the preset gap endpoint prediction model is obtained.
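One possible realization of the difference degree and the convergence check is sketched below; the L1 distance between predicted and labelled endpoints and the threshold value are illustrative choices among the "conventional data difference" measures mentioned above.

```python
import numpy as np

def difference_degree(pred_endpoints, label_endpoints):
    """Mean L1 distance between predicted and labelled endpoint pairs."""
    pred = np.asarray(pred_endpoints, dtype=float)
    label = np.asarray(label_endpoints, dtype=float)
    return float(np.abs(pred - label).sum(axis=-1).mean())

PRESET_DIFFERENCE = 0.5   # illustrative convergence threshold (world units)

pred = [[10.2, 3.1, 0.0], [14.8, 3.0, 0.1]]
label = [[10.0, 3.0, 0.0], [15.0, 3.0, 0.0]]
converged = difference_degree(pred, label) < PRESET_DIFFERENCE
```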
In one embodiment, inputting each group of world coordinates to be trained into the initial gap endpoint prediction model to obtain the predicted coordinates of the endpoint pair comprises: obtaining the interval coordinates in the prediction interval corresponding to the current initialization coordinate in the world coordinates to be trained according to the current initialization coordinate and the prediction interval in the initial gap endpoint prediction model, calculating the gradient between the coordinates of each two adjacent points in the interval coordinates, and, when the gradient is greater than the initial preset gradient, taking the adjacent coordinates whose gradient is greater than the initial preset gradient as the predicted coordinates of the endpoint pair.
Specifically, the training sample set is the set composed of the preset groups of world coordinates to be trained. The initial gap endpoint prediction model may contain one or more initialization coordinates. The first initialization coordinate is preset and can be defined as required; for example, the coordinate of the 100th or 150th point of the input world coordinates to be trained is selected as the first initialization coordinate. The current initialization coordinate is any one of the initialization coordinates in the initial gap endpoint prediction model. The prediction interval is a preset data acquisition interval; if the prediction interval is 5, the prediction interval corresponding to the current initialization coordinate is an interval with the current initialization coordinate as its starting point, middle point or end point. If the current initialization coordinate is the coordinate of the 100th point of the world coordinates to be trained, the interval coordinates in the prediction interval are the coordinates of the 100th to 104th, 98th to 102nd, or 96th to 100th points. The gradient between each two adjacent coordinate points is calculated, and it is judged whether the gradient between any two adjacent points is greater than the initial preset gradient, the initial preset gradient being a preset threshold for judging whether the adjacent points form an endpoint pair of the gap. When the gradient between two adjacent points is greater than the initial preset gradient, those adjacent points are taken as the endpoint pair, and their coordinates are taken as the predicted coordinates of the endpoint pair.
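The window-and-gradient search can be sketched as follows, assuming the laser line is an ordered array of 3D world coordinates with depth as the third component; taking the gradient as the depth change per unit of in-plane distance between neighbouring points is one plausible reading of "gradient between the coordinates of two adjacent points", and the interval of 5 and the threshold are illustrative.

```python
import numpy as np

def find_endpoint_pair(coords, init_index, interval=5, preset_gradient=2.0):
    """Scan the prediction interval around an initialization point and return
    the first adjacent pair whose gradient exceeds the threshold, else None."""
    coords = np.asarray(coords, dtype=float)
    lo = max(init_index - interval // 2, 0)
    hi = min(init_index + interval // 2 + 1, len(coords))
    window = coords[lo:hi]
    diffs = np.diff(window, axis=0)                    # vectors between neighbours
    grads = np.abs(diffs[:, 2]) / (np.linalg.norm(diffs[:, :2], axis=1) + 1e-9)
    hits = np.where(grads > preset_gradient)[0]
    if hits.size == 0:
        return None                                    # no endpoint pair here
    i = hits[0]
    return window[i], window[i + 1]                    # predicted endpoint pair
```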
In one embodiment, when the gradient is smaller than or equal to the initial preset gradient, a next initialization coordinate is determined according to the gradient and the current initialization coordinate, the next initialization coordinate is taken as the current initialization coordinate, and the step of obtaining the interval coordinates in the prediction interval corresponding to the current initialization coordinate in the world coordinates to be trained according to the current initialization coordinate and the prediction interval in the initial gap endpoint prediction model is performed again, until the predicted coordinates of the endpoint pair are determined.
Specifically, when the gradient is less than or equal to the initial preset gradient, it indicates that the predicted coordinates of the endpoint pair do not exist among the interval coordinates in the prediction interval corresponding to the current initialization coordinate. The next initialization coordinate is then determined from the current initialization coordinate and the gradient: a displacement variation, comprising a variation distance and a variation direction, is determined from the gradient variation of all pairs of adjacent point coordinates in the prediction interval. If the variation distance determined from the gradient for the current initialization coordinate (x, y) is (Δx, Δy), the next initialization coordinate is (x ± Δx, y ± Δy), where the "+" sign represents a first direction and the "-" sign represents the direction opposite to the first direction. When the gradient is greater than the initial preset gradient, the adjacent coordinates whose gradient is greater than the initial preset gradient are taken as the predicted coordinates of the endpoint pair; otherwise the predicted coordinates of the endpoint pair are not obtained, the next initialization coordinate is determined from the current initialization coordinate and the gradient, and the processes of obtaining the interval coordinates, calculating the gradient and judging the gradient are repeated until the predicted coordinates of the endpoint pair are obtained. By adjusting the initialization coordinate repeatedly, it gradually approaches the endpoint coordinates of the gap, which makes the calculation convenient and the calculation principle simple.
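The "adjust the initialization coordinate and retry" loop can then be sketched as below, reusing find_endpoint_pair from the previous sketch; the fixed stride step is a simplifying assumption standing in for the displacement (distance and direction) that the text derives from the gradient variation.

```python
def iterative_endpoint_search(coords, init_index, interval=5,
                              preset_gradient=2.0, step=5, max_iters=200):
    """Move the initialization point along the laser line until an endpoint
    pair is found; returns None if the end of the line is reached first."""
    index = init_index
    for _ in range(max_iters):
        pair = find_endpoint_pair(coords, index, interval, preset_gradient)
        if pair is not None:
            return pair
        index += step                     # stand-in for the gradient-derived shift
        if index >= len(coords) - 1:
            break
    return None
```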
In one embodiment, when the difference is greater than or equal to the preset difference, updating model parameters of the initial gap endpoint prediction model according to the difference, predicting each group of world coordinates to be trained by using the initial gap endpoint prediction model with the updated model parameters to obtain predicted coordinates of the endpoint pair, and calculating the difference between the predicted coordinates of the endpoint pair and the corresponding labels until the difference is less than the preset difference to obtain the preset gap endpoint prediction model.
Specifically, when the difference is greater than or equal to the preset difference, the initial gap endpoint prediction model has not converged, and the model parameters in the initial gap endpoint prediction model are updated automatically according to the difference. The model parameters are the parameters to be determined in the initial gap endpoint prediction model, including but not limited to the initial preset gradient, the initialization coordinates and the prediction interval. The adjustment range of each model parameter is determined according to the difference, and a common machine learning parameter adjustment method may be adopted. The initial gap endpoint prediction model with the updated model parameters is then used to predict the endpoint pair for each group of world coordinates to be trained again, the predicted coordinates of the endpoint pair are obtained again, the difference between the predicted coordinates of the endpoint pair and the corresponding labels is calculated, and it is judged whether the difference is smaller than the preset difference. If so, the preset gap endpoint prediction model is obtained; if not, the model parameters are updated again, and the data processing processes of prediction, difference calculation and difference judgment are repeated until the preset gap endpoint prediction model is obtained.
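A generic sketch of this update-and-retrain loop is given below; it reuses difference_degree from the earlier sketch, and growing the number of trees of a random forest stands in for "updating the model parameters", since the text leaves the concrete update rule open.

```python
def train_until_converged(model, X_train, y_train, X_val, y_val,
                          preset_difference=0.5, max_rounds=20):
    """Repeat predict -> measure difference -> update until convergence."""
    for n_estimators in range(50, 50 * (max_rounds + 1), 50):
        model.set_params(n_estimators=n_estimators)   # illustrative parameter update
        model.fit(X_train, y_train)
        diff = difference_degree(model.predict(X_val), y_val)
        if diff < preset_difference:
            break                                     # converged
    return model                                      # preset prediction model
```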
In one embodiment, step S203 includes: according to the preset initialization coordinate and the preset interval in the preset gap endpoint prediction model, the current interval coordinate in the preset interval corresponding to the preset initialization coordinate in the world coordinate is obtained, the current gradient between the coordinates of two adjacent points in the current interval coordinate is calculated, and when the current gradient is larger than the preset gradient, the adjacent coordinates larger than the preset gradient are used as the target coordinates of the endpoint pair.
Specifically, the preset initialization coordinate and the preset interval are the initialization coordinate and the prediction interval determined in the preset gap endpoint prediction model. The current interval coordinates within the preset interval are obtained from the world coordinates according to the preset initialization coordinate and the preset interval, the current gradient between the coordinates of each two adjacent points in the current interval coordinates is calculated, and it is judged whether there are two adjacent points whose gradient is greater than the preset gradient, the preset gradient being the gradient threshold determined by the preset gap endpoint prediction model. If so, the coordinates of the two adjacent points whose gradient is greater than the preset gradient are taken as the target coordinates of the endpoint pair. Otherwise, the next initialization coordinate is determined from the preset initialization coordinate according to the gradient, the next interval coordinates within the preset interval are obtained from the world coordinates according to that next initialization coordinate and the preset interval, the next gradient between the coordinates of each two adjacent points in the next interval coordinates is calculated, and it is again judged whether there are two adjacent points whose gradient is greater than the preset gradient; if so, their coordinates are taken as the target coordinates of the endpoint pair, and otherwise the above process is repeated until the target coordinates of the endpoint pair are obtained.
And step S204, calculating according to the target coordinates of the end point pairs to obtain the width information of the gap.
Specifically, once the coordinates of the two endpoints of the gap are known, the width information of the gap is obtained by calculating the distance between the two points.
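A minimal sketch of this step, assuming the two target coordinates are 3D world coordinates and the width is their Euclidean distance:

```python
import numpy as np

def gap_width(endpoint_a, endpoint_b):
    """Width of the gap as the Euclidean distance between its two endpoints."""
    return float(np.linalg.norm(np.asarray(endpoint_a) - np.asarray(endpoint_b)))

width = gap_width([10.0, 3.0, 0.0], [15.0, 3.0, 0.0])   # -> 5.0 (world units)
```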
The gap positioning method based on real coordinates comprises: acquiring an image containing a laser line and a gap; extracting the image coordinates of the laser line in the image and converting them into world coordinates in a world coordinate system; inputting the world coordinates to a preset gap endpoint prediction model to obtain target coordinates of an endpoint pair of the gap; and calculating the width information of the gap according to the target coordinates of the endpoint pair. The coordinate data of the laser line are predicted directly by the trained preset gap endpoint prediction model to obtain the target coordinates of the endpoint pair, the gap width is calculated according to those target coordinates, and the corresponding operation is determined directly from the gap width, so the inconvenient operation caused by classifying gaps is avoided and the operation is simple and convenient.
In a specific embodiment, the above-mentioned gap location algorithm includes:
First, data is collected. As many kinds of weld seam data as possible are collected with the 3D line laser camera: the world coordinates of the laser line are acquired directly, and/or pictures taken by the 3D line laser camera are acquired, the image coordinates of the laser line are extracted from each picture, and the image coordinates are converted into world coordinates in the world coordinate system. The world coordinates of each point on the laser line are thus obtained, the two endpoints of the weld seam are marked and mapped into the world coordinate system, the position of the weld seam is located through the two marked endpoints, and the width of the weld seam is calculated.
Secondly, an initial gap endpoint prediction model is used; it adopts a random forest algorithm and performs regression training on the calibrated data. The input data of the model are the world coordinates of each point on the laser line, and the output is the world coordinates of the endpoints. The marked world coordinates are compared with the predicted coordinates, and a common error penalty method, for example a norm adopted as the error penalty, is used to judge the convergence state of the initial gap endpoint prediction model; the preset gap endpoint prediction model is obtained through this training.
Finally, the preset gap endpoint prediction model obtained by training is deployed. The world coordinates of each point on the laser line, extracted from the image data acquired by the 3D line laser camera, are input to the model; the preset gap endpoint prediction model directly predicts the two endpoints of the gap and returns the prediction result, giving the target coordinates of the endpoint pair of the gap, and the width information of the gap is calculated from those target coordinates.
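At deployment time the pieces above can be wrapped as follows; this assumes a model trained as in the earlier sketches and a laser-line profile already resampled to the feature length used in training, which is an assumption of these sketches rather than a requirement stated in the patent.

```python
import numpy as np

def locate_gap(world_xyz, model):
    """Feed the laser-line world coordinates to the trained model and return
    the predicted endpoint pair together with the gap width."""
    features = np.asarray(world_xyz, dtype=float).reshape(1, -1)
    endpoints = model.predict(features).reshape(2, 3)   # two 3D endpoints
    width = float(np.linalg.norm(endpoints[0] - endpoints[1]))
    return endpoints, width
```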
With this gap positioning, the position of the weld seam can be located quickly and accurately, the operation is simple and efficient, no weld seam type needs to be defined, and the compatibility is strong.
Fig. 2 is a schematic flowchart of a gap positioning method based on real coordinates in an embodiment. It should be understood that, although the steps in the flowchart of fig. 2 are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and described and may be performed in other orders. Moreover, at least a portion of the steps in fig. 2 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time but may be performed at different times, and the order of performance of the sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 4, there is provided a real coordinate-based gap locating apparatus 200, including:
and a data acquisition module 201 for acquiring an image containing the laser line and the slit.
And the coordinate conversion module 202 is used for extracting the image coordinates of the laser line in the image and converting the image coordinates of the laser line into world coordinates in a world coordinate system.
And the prediction module 203 is configured to input the world coordinates to a preset gap endpoint prediction model to obtain target coordinates of an endpoint pair of the gap.
And the width calculating module 204 is configured to calculate width information of the gap according to the target coordinates of the endpoint pairs.
In one embodiment, the gap locating device based on real coordinates further comprises:
the model generation module is used for generating a preset gap endpoint prediction model, and comprises:
and the data acquisition unit is used for acquiring a plurality of images to be trained containing laser lines and extracting the image coordinates of the laser lines to be trained in the images to be trained.
And the data identification unit is used for converting the image coordinates of each laser line to be trained into world coordinates to be trained in the world coordinate system, and the world coordinates to be trained comprise the labels of the coordinates of the end point pairs to be trained.
And the prediction unit is used for inputting each group of world coordinates to be trained to the initial gap endpoint prediction model to obtain the predicted coordinates of the endpoint pairs.
And the difference degree calculation unit is used for calculating the difference degree of the labels of all the groups of world coordinates to be trained and the predicted coordinates of the corresponding endpoint pairs.
And the model determining unit is used for obtaining the preset gap endpoint prediction model when the difference degree is smaller than the preset difference degree.
In an embodiment, the prediction unit is specifically configured to obtain, according to a current initialization coordinate and a prediction interval in the initial gap endpoint prediction model, the interval coordinates in the prediction interval corresponding to the current initialization coordinate in the world coordinates to be trained, calculate the gradient between the coordinates of two adjacent points in the interval coordinates, and, when the gradient is greater than an initial preset gradient, use the adjacent coordinates whose gradient is greater than the initial preset gradient as the predicted coordinates of the endpoint pair.
In an embodiment, the prediction unit is further configured to, when the gradient is less than or equal to the initial preset gradient, determine a next initialization coordinate according to the gradient and the current initialization coordinate, use the next initialization coordinate as the current initialization coordinate, and perform again the step of obtaining the interval coordinates in the prediction interval corresponding to the current initialization coordinate in the world coordinates to be trained according to the current initialization coordinate and the prediction interval in the initial gap endpoint prediction model, until the predicted coordinates of the endpoint pair are determined.
In an embodiment, the model determining unit is further configured to, when the difference is greater than or equal to the preset difference, update the model parameters of the initial gap endpoint prediction model according to the difference, predict each group of world coordinates to be trained by using the initial gap endpoint prediction model with the updated model parameters to obtain predicted coordinates of the endpoint pair, and calculate the difference between the predicted coordinates of the endpoint pair and the corresponding label until the difference is less than the preset difference to obtain the preset gap endpoint prediction model.
In an embodiment, the prediction module is specifically configured to obtain a current interval coordinate in a preset interval corresponding to a preset initialization coordinate in the world coordinate according to the preset initialization coordinate and the preset interval in the preset gap endpoint prediction model, calculate a current gradient between coordinates of two adjacent points in the current interval coordinate, and when the current gradient is greater than the preset gradient, use the coordinates of the two adjacent points greater than the preset gradient as target coordinates of the endpoint pair.
FIG. 5 is a diagram illustrating an internal structure of a computer device in one embodiment. The computer device may specifically be the terminal 110 (or the server 120) in fig. 1. As shown in fig. 5, the computer apparatus includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement a method for gap location based on real coordinates. The internal memory may also have a computer program stored therein, which when executed by the processor, causes the processor to perform a gap location method based on real coordinates. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 5 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or fewer components than those shown, or may combine certain components, or may have a different arrangement of components.
In one embodiment, the gap locating apparatus based on real-world coordinates provided in the present application may be implemented in the form of a computer program, and the computer program may be run on a computer device as shown in fig. 5. The memory of the computer device may store various program modules constituting the real coordinate-based gap locating apparatus, such as a data acquisition module 201, a coordinate conversion module 202, a prediction module 203, and a width calculation module 204 shown in fig. 4. The program modules constitute computer programs that cause the processor to execute the steps of the gap locating method based on real coordinates according to the embodiments of the present application described in the present specification.
For example, the computer device shown in fig. 5 can acquire an image containing the laser line and the gap through the data acquisition module 201 in the real-coordinate-based gap positioning apparatus shown in fig. 4. The computer device may extract the image coordinates of the laser line in the image and convert them into world coordinates in a world coordinate system through the coordinate conversion module 202. The computer device may input the world coordinates to the preset gap endpoint prediction model through the prediction module 203 to obtain the target coordinates of the endpoint pair of the gap. The computer device may calculate the width information of the gap according to the target coordinates of the endpoint pair through the width calculation module 204.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program: acquiring an image containing a laser line and a gap; extracting an image coordinate of a laser line in the image, and converting the image coordinate of the laser line into a world coordinate in a world coordinate system; inputting world coordinates to a preset gap endpoint prediction model to obtain target coordinates of an endpoint pair of a gap; and calculating to obtain the width information of the gap according to the target coordinates of the end point pairs.
In one embodiment, the processor, when executing the computer program, further performs the steps of: generating a preset gap endpoint prediction model, comprising: acquiring a plurality of images to be trained containing laser lines, and extracting image coordinates of the laser lines to be trained in the images to be trained; converting the image coordinates of each laser line to be trained into world coordinates to be trained in the world coordinate system, wherein the world coordinates to be trained comprise labels of coordinates of end points to be trained; inputting each group of world coordinates to be trained to an initial gap endpoint prediction model to obtain prediction coordinates of an endpoint pair; calculating the difference degree of the labels of all groups of world coordinates to be trained and the predicted coordinates of the corresponding endpoint pairs; and when the difference degree is smaller than the preset difference degree, obtaining a preset gap endpoint prediction model.
In one embodiment, inputting each set of world coordinates to be trained into an initial gap endpoint prediction model to obtain predicted coordinates of an endpoint pair includes: acquiring interval coordinates in a prediction interval corresponding to the current initialization coordinate in world coordinates to be trained according to the current initialization coordinate and the prediction interval in the initial gap endpoint prediction model; calculating the gradient between the coordinates of two adjacent points in the interval coordinates; and when the gradient is greater than the initial preset gradient, taking the adjacent coordinates whose gradient is greater than the initial preset gradient as the predicted coordinates of the endpoint pair.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and when the gradient is smaller than or equal to the initial preset gradient, determining a next initialization coordinate according to the gradient and the current initialization coordinate, taking the next initialization coordinate as the current initialization coordinate, executing the current initialization coordinate and the prediction interval according to the initial gap endpoint prediction model, and acquiring the interval coordinate in the prediction interval corresponding to the current initialization coordinate in the world coordinate to be trained until the prediction coordinate of the endpoint pair is determined.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and when the difference degree is greater than or equal to the preset difference degree, updating model parameters of the initial gap endpoint prediction model according to the difference degree, predicting each group of world coordinates to be trained by adopting the initial gap endpoint prediction model with updated model parameters to obtain predicted coordinates of the endpoint pair, and calculating the difference degree between the predicted coordinates of the endpoint pair and the corresponding label until the difference degree is less than the preset difference degree to obtain the preset gap endpoint prediction model.
In one embodiment, inputting world coordinates into a preset gap endpoint prediction model to obtain target coordinates of an endpoint pair of a gap, includes: according to the preset initialization coordinate and the preset interval in the preset gap endpoint prediction model, the current interval coordinate in the preset interval corresponding to the preset initialization coordinate in the world coordinate is obtained, the current gradient between the coordinates of two adjacent points in the current interval coordinate is calculated, and when the current gradient is larger than the preset gradient, the coordinates of the two adjacent points larger than the preset gradient are used as the target coordinates of the endpoint pair.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of: acquiring an image containing a laser line and a gap; extracting the image coordinates of the laser line in the image, and converting the image coordinates of the laser line into world coordinates in a world coordinate system; inputting world coordinates to a preset gap endpoint prediction model to obtain target coordinates of an endpoint pair of a gap; and calculating to obtain the width information of the gap according to the target coordinates of the end point pairs.
In one embodiment, the computer program when executed by the processor further performs the steps of: generating a preset gap endpoint prediction model, comprising: acquiring a plurality of images to be trained containing laser lines, and extracting image coordinates of the laser lines to be trained in the images to be trained; converting the image coordinates of each laser line to be trained into world coordinates to be trained in the world coordinate system, wherein the world coordinates to be trained comprise labels of coordinates of end points to be trained; inputting each group of world coordinates to be trained to an initial gap endpoint prediction model to obtain prediction coordinates of an endpoint pair; calculating the difference degree of the labels of all groups of world coordinates to be trained and the predicted coordinates of the corresponding endpoint pairs; and when the difference degree is smaller than the preset difference degree, obtaining a preset gap endpoint prediction model.
In one embodiment, inputting each set of world coordinates to be trained into an initial gap endpoint prediction model to obtain predicted coordinates of an endpoint pair includes: acquiring interval coordinates in a prediction interval corresponding to the current initialization coordinate in world coordinates to be trained according to the current initialization coordinate and the prediction interval in the initial gap endpoint prediction model; calculating the gradient between the coordinates of two adjacent points in the interval coordinates; and when the gradient is greater than the initial preset gradient, taking the adjacent coordinates whose gradient is greater than the initial preset gradient as the predicted coordinates of the endpoint pair.
In one embodiment, the computer program when executed by the processor further performs the steps of: and when the gradient is smaller than or equal to the initial preset gradient, determining a next initialization coordinate according to the gradient and the current initialization coordinate, taking the next initialization coordinate as the current initialization coordinate, executing the current initialization coordinate and the prediction interval according to the initial gap endpoint prediction model, and acquiring the interval coordinate in the prediction interval corresponding to the current initialization coordinate in the world coordinate to be trained until the prediction coordinate of the endpoint pair is determined.
In one embodiment, the computer program when executed by the processor further performs the steps of: and when the difference degree is greater than or equal to the preset difference degree, updating model parameters of the initial gap endpoint prediction model according to the difference degree, predicting each group of world coordinates to be trained by adopting the initial gap endpoint prediction model with updated model parameters to obtain predicted coordinates of the endpoint pair, and calculating the difference degree between the predicted coordinates of the endpoint pair and the corresponding label until the difference degree is less than the preset difference degree to obtain the preset gap endpoint prediction model.
In one embodiment, inputting world coordinates to a preset gap endpoint prediction model to obtain target coordinates of an endpoint pair of a gap includes: according to the preset initialization coordinate and the preset interval in the preset gap endpoint prediction model, the current interval coordinate in the preset interval corresponding to the preset initialization coordinate in the world coordinate is obtained, the current gradient between the coordinates of two adjacent points in the current interval coordinate is calculated, and when the current gradient is larger than the preset gradient, the coordinates of the two adjacent points larger than the preset gradient are used as the target coordinates of the endpoint pair.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A gap positioning method based on real coordinates is characterized by comprising the following steps:
acquiring an image containing a laser line and a gap;
extracting the image coordinate of the laser line in the image, and converting the image coordinate of the laser line into a world coordinate in a world coordinate system;
inputting the world coordinate to a preset gap endpoint prediction model to obtain a target coordinate of an endpoint pair of the gap;
calculating to obtain the width information of the gap according to the target coordinates of the end point pairs;
the inputting the world coordinate to a preset gap endpoint prediction model to obtain a target coordinate of an endpoint pair of the gap includes:
acquiring a current interval coordinate in a preset interval corresponding to a preset initialization coordinate in the world coordinate according to the preset initialization coordinate and the preset interval in the preset gap endpoint prediction model;
calculating a current gradient between coordinates of two adjacent points in the current interval coordinates;
and when the current gradient is greater than a preset gradient, taking the coordinates of two adjacent points greater than the preset gradient as the target coordinates of the endpoint pair.
2. The method of claim 1, further comprising:
Acquiring a plurality of images to be trained containing laser lines, and extracting image coordinates of the laser lines to be trained in the images to be trained;
converting the image coordinates of each laser line to be trained into world coordinates to be trained in the world coordinate system, wherein the world coordinates to be trained comprise labels of coordinates of end points to be trained;
inputting each group of world coordinates to be trained to an initial gap endpoint prediction model to obtain prediction coordinates of an endpoint pair;
calculating the difference degree of the labels of all groups of world coordinates to be trained and the predicted coordinates of the corresponding endpoint pairs;
and when the difference degree is smaller than a preset difference degree, obtaining the preset gap endpoint prediction model.
3. The method according to claim 2, wherein the inputting each set of the world coordinates to be trained into an initial gap endpoint prediction model to obtain predicted coordinates of an endpoint pair comprises:
acquiring interval coordinates in a prediction interval corresponding to the current initialization coordinates in the world coordinates to be trained according to the current initialization coordinates and the prediction interval in the initial gap endpoint prediction model;
calculating a gradient between coordinates of two adjacent points in the interval coordinates;
And when the gradient is larger than an initial preset gradient, taking the adjacent coordinates larger than the initial preset gradient as the predicted coordinates of the endpoint pair.
4. The method of claim 3, further comprising:
and when the gradient is smaller than or equal to an initial preset gradient, determining a next initialization coordinate according to the gradient and the current initialization coordinate, taking the next initialization coordinate as the current initialization coordinate, and acquiring an interval coordinate in a prediction interval corresponding to the current initialization coordinate in the world coordinate to be trained according to the current initialization coordinate and the prediction interval in the initial gap endpoint prediction model until the prediction coordinate of the endpoint pair is determined.
5. The method of claim 2, further comprising:
when the difference degree is larger than or equal to the preset difference degree, updating model parameters of the initial gap endpoint prediction model according to the difference degree;
predicting each group of world coordinates to be trained by adopting an initial gap endpoint prediction model with updated model parameters to obtain predicted coordinates of an endpoint pair, and calculating the difference between the predicted coordinates of the endpoint pair and a corresponding label until the difference is smaller than the preset difference to obtain the preset gap endpoint prediction model.
6. A gap positioning apparatus based on real coordinates, the apparatus comprising:
the data acquisition module is used for acquiring an image containing a laser line and a gap;
the coordinate conversion module is used for extracting the image coordinate of the laser line in the image and converting the image coordinate of the laser line into a world coordinate in a world coordinate system;
the prediction module is used for inputting the world coordinates to a preset gap endpoint prediction model to obtain target coordinates of the endpoint pairs of the gap;
the width calculation module is used for calculating the width information of the gap according to the target coordinates of the endpoint pairs;
the prediction module is specifically configured to acquire, according to a preset initialization coordinate and a preset interval in the preset gap endpoint prediction model, the current interval coordinates within the preset interval corresponding to the preset initialization coordinate in the world coordinates, calculate a current gradient between the coordinates of two adjacent points in the current interval coordinates, and, when the current gradient is greater than a preset gradient, use the coordinates of the two adjacent points whose gradient is greater than the preset gradient as the target coordinates of the endpoint pair.
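Illustrative note (not part of the claims): once the target coordinates of the endpoint pair are available in world coordinates, the width calculation module of claim 6 can report the gap width directly in real units. The Euclidean distance used in the sketch below is an assumed formula, not necessarily the one used in the patent.

import numpy as np

def gap_width(endpoint_pair):
    # Gap width taken as the Euclidean distance between the two endpoints
    # of the endpoint pair, expressed in world (real) coordinates.
    p1, p2 = (np.asarray(p, dtype=float) for p in endpoint_pair)
    return float(np.linalg.norm(p2 - p1))

# Example: endpoints roughly 15 mm apart along the scan direction.
print(gap_width(([120.0, 4.2], [135.0, 4.5])))  # prints approximately 15.003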
7. The apparatus of claim 6, further comprising:
a model generation module, configured to generate the preset gap endpoint prediction model, wherein the model generation module comprises:
the data acquisition unit is used for acquiring a plurality of training images containing laser lines and extracting the image coordinates of the training laser lines from the training images;
the data identification unit is used for converting the image coordinates of each training laser line into training world coordinates, wherein the training world coordinates comprise labels of the training endpoint coordinates;
the prediction unit is used for inputting each group of training world coordinates into an initial gap endpoint prediction model to obtain the predicted coordinates of the endpoint pairs;
the difference degree calculation unit is used for calculating the difference degree between the labels of the training world coordinates and the predicted coordinates of the corresponding endpoint pairs;
and the model determining unit is used for obtaining the preset gap endpoint prediction model when the difference degree is smaller than a preset difference degree.
8. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 5 are implemented when the computer program is executed by the processor.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910605642.7A 2019-07-05 2019-07-05 Gap positioning method and device based on real coordinates and storage medium

Publications (2)

Publication Number Publication Date
CN110517221A (en) 2019-11-29
CN110517221B (en) 2022-05-03

Family

ID=68622593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910605642.7A Gap positioning method and device based on real coordinates and storage medium 2019-07-05 2019-07-05

Country Status (1)

Country Link
CN (1) CN110517221B (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant