CN116309799A - Target visual positioning method, device and system - Google Patents
- Publication number
- CN116309799A (application CN202310119961.3A)
- Authority
- CN
- China
- Prior art keywords
- circle
- target
- levelness
- image
- point set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B17/00—Details of cameras or camera bodies; Accessories therefor
- G03B17/55—Details of cameras or camera bodies; Accessories therefor with provision for heating or cooling, e.g. in aircraft
-
- G06T5/70—
-
- G06T5/80—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/002—Diagnosis, testing or measuring for television systems or their details for television cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
Abstract
The invention discloses a target visual positioning method, device and system. The method comprises the following steps: performing levelness calibration on an image acquisition device through a levelness correction module and defogging its lens through a defogging device; after calibration and defogging, acquiring a target image through the image acquisition device; enhancing the acquired target image through an image enhancement algorithm to obtain an enhanced target image; performing high-precision edge acquisition on the enhanced target image to obtain a point set to be fitted of the target edge; performing circle fitting positioning on the point set to be fitted to obtain a set of candidate circles; selecting a target circle from the candidate circles according to constraint conditions; and obtaining the target position from the target circle.
Description
Technical Field
The invention relates to the field of industrial automation, in particular to a target visual positioning method, device and system.
Background
In the existing field of industrial automation, most solutions still center on technical design and development for production-line environments. Automation solutions for targets in turbid liquid or water-mist environments, by contrast, remain largely unexplored. The high precision of traditional positioning and detection technology depends mainly on obtaining high-quality images, and the related equipment is designed around the goal of high-quality imaging. The existing industrial vision field likewise compares solutions by detection accuracy, such as millimeter level or micrometer (μm) level. The core of such implementations is the vision module, while the edge-side (front-end) technology is not their main consideration.
In turbid-liquid and water-mist environments, the imaging step on which traditional detection relies is directly and severely affected. Suspended matter in turbid liquid prevents light from fully reaching the surface of an underwater target (part of the light information reflected back by the target is lost); moreover, light propagates less readily in an aqueous medium than in air, so imaging quality cannot be guaranteed. A water-mist environment similarly degrades imaging quality. If front-end equipment is immersed in water to take images, the corrosiveness or radiation of different turbid liquids may seriously affect its service life or performance, and the front-end equipment cannot be guaranteed to operate normally in low- or high-temperature environments.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a target visual positioning method, which comprises the following steps:
step one, performing levelness calibration on the image acquisition device and defogging its lens, namely performing horizontal calibration on the image acquisition device through a levelness correction module and defogging the lens through a defogging device;
step two, after the levelness calibration and lens defogging, acquiring a target image through the image acquisition device;
step three, enhancing the acquired target image through an image enhancement algorithm to obtain an enhanced target image, performing high-precision edge acquisition on the enhanced target image to obtain a point set to be fitted of the target edge, performing circle fitting positioning on the point set to be fitted to obtain a set of candidate circles, selecting a target circle from the candidate circles according to constraint conditions, and obtaining the target position from the target circle.
Further, the horizontal calibration of the image acquisition device through the levelness correction module comprises the following steps:
performing levelness detection on the image acquisition device through a two-axis inclination sensor module in the levelness correction module to acquire its levelness; judging the deviation between the measured levelness and the standard levelness; and, if the deviation is outside the difference threshold range, adjusting the levelness of the image acquisition device through a levelness adjustment module in the levelness correction module until the deviation falls within the threshold range, thereby completing the horizontal correction of the image acquisition device.
Further, defogging the lens through the defogging device comprises: heating air to a set temperature through a heating module in the defogging device, and blowing the heated air flow over the lens glass of the image acquisition device through a fan in the defogging device to complete defogging of the lens glass.
Further, performing high-precision edge acquisition on the enhanced target image to obtain the point set to be fitted of the target edge comprises:
performing Gaussian filtering on the enhanced target image to obtain a smoothed image; performing third-order partial derivative filtering on the smoothed image in the X-axis direction to obtain a candidate point set X; performing third-order partial derivative filtering on the smoothed image in the Y-axis direction to obtain a candidate point set Y; and merging the candidate point set X and the candidate point set Y to obtain the point set to be fitted.
Further, performing circle fitting positioning on the point set to be fitted to obtain a set of candidate circles, selecting a target circle according to constraint conditions, and obtaining the target position from the target circle comprises:
randomly selecting four points from the point set to be fitted, and using three of the four points to determine a candidate circle;
judging whether the fourth point lies on the candidate circle; if so, the candidate circle's edge points are taken to be true points of the point set to be fitted;
judging whether the candidate circle satisfies the circle threshold constraint, and directly discarding the candidate circle when its fitted radius is not within the preset radius range of the target circle;
counting the points of the point set to be fitted that fall on the candidate circle, taking this count as the candidate circle's voting result, and discarding the candidate circle if the count does not meet the point threshold;
when the center distances of multiple candidate circles are within the constraint range, retaining the candidate circle with the highest number of votes and discarding the rest to obtain the target circle, from which the target position is obtained.
A target visual centering device applying the above target visual positioning method comprises an image acquisition device, a levelness correction module, a data processing module, a defogging device, a communication device and a power interface; the image acquisition device, the levelness correction module, the defogging device, the power interface and the communication device are respectively connected with the data processing module.
A target visual positioning system comprises the target visual centering device, a control module and a power supply module; the control module is in communication connection with the communication device, and the power supply module is connected with the power interface.
The invention has the following beneficial effects: in application scenarios with strict precision requirements, the front-end data acquisition equipment can align itself with the horizontal reference autonomously, solving the reference alignment problem in high-precision environments without an additional mechanical motion structure, and enabling high-precision positioning of targets in turbid-liquid and water-mist environments.
Drawings
FIG. 1 is a schematic flow chart of a method for visual localization of a target;
FIG. 2 is a schematic diagram of a levelness correction module;
FIG. 3 is a schematic diagram of a levelness adjustment control flow;
FIG. 4 is a schematic diagram of a high-precision edge acquisition flow;
FIG. 5 is a flow chart of circle fitting positioning.
Detailed Description
The technical solution of the present invention will be described in further detail with reference to the accompanying drawings, but the scope of the present invention is not limited to the following description.
For the purpose of making the technical solution and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the particular embodiments described herein are illustrative only and are not intended to limit the invention, i.e., the embodiments described are merely some, but not all, of the embodiments of the invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention. It is noted that relational terms such as "first" and "second", and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The features and capabilities of the present invention are described in further detail below in connection with the examples.
As shown in fig. 1, a target visual positioning method comprises the following steps:
step one, performing levelness calibration on the image acquisition device and defogging its lens, namely performing horizontal calibration on the image acquisition device through a levelness correction module and defogging the lens through a defogging device;
step two, after the levelness calibration and lens defogging, acquiring a target image through the image acquisition device;
step three, enhancing the acquired target image through an image enhancement algorithm to obtain an enhanced target image, performing high-precision edge acquisition on the enhanced target image to obtain a point set to be fitted of the target edge, performing circle fitting positioning on the point set to be fitted to obtain a set of candidate circles, selecting a target circle from the candidate circles according to constraint conditions, and obtaining the target position from the target circle.
Further, the horizontal calibration of the image acquisition device through the levelness correction module comprises the following steps:
performing levelness detection on the image acquisition device through a two-axis inclination sensor module in the levelness correction module to acquire its levelness; judging the deviation between the measured levelness and the standard levelness; and, if the deviation is outside the difference threshold range, adjusting the levelness of the image acquisition device through a levelness adjustment module in the levelness correction module until the deviation falls within the threshold range, thereby completing the horizontal correction of the image acquisition device.
Further, defogging the lens through the defogging device comprises: heating air to a set temperature through a heating module in the defogging device, and blowing the heated air flow over the lens glass of the image acquisition device through a fan in the defogging device to complete defogging of the lens glass.
Further, enhancing the acquired target image through an image enhancement algorithm to obtain the enhanced target image means repairing and enhancing the acquired image by means of the image enhancement algorithm.
Further, performing high-precision edge acquisition on the enhanced target image to obtain the point set to be fitted of the target edge comprises:
performing Gaussian filtering on the enhanced target image to obtain a smoothed image; performing third-order partial derivative filtering on the smoothed image in the X-axis direction to obtain a candidate point set X; performing third-order partial derivative filtering on the smoothed image in the Y-axis direction to obtain a candidate point set Y; and merging the candidate point set X and the candidate point set Y to obtain the point set to be fitted.
Further, performing circle fitting positioning on the point set to be fitted to obtain a set of candidate circles, selecting a target circle according to constraint conditions, and obtaining the target position from the target circle comprises:
randomly selecting four points from the point set to be fitted, and using three of the four points to determine a candidate circle;
judging whether the fourth point lies on the candidate circle; if so, the candidate circle's edge points are taken to be true points of the point set to be fitted;
judging whether the candidate circle satisfies the circle threshold constraint, and directly discarding the candidate circle when its fitted radius is not within the preset radius range of the target circle;
counting the points of the point set to be fitted that fall on the candidate circle, taking this count as the candidate circle's voting result, and discarding the candidate circle if the count does not meet the point threshold;
when the center distances of multiple candidate circles are within the constraint range, retaining the candidate circle with the highest number of votes and discarding the rest to obtain the target circle, from which the target position is obtained.
A target visual centering device applying the above target visual positioning method comprises an image acquisition device, a levelness correction module, a data processing module, a defogging device, a communication device and a power interface; the image acquisition device, the levelness correction module, the defogging device, the power interface and the communication device are respectively connected with the data processing module.
A target visual positioning system comprises the target visual centering device, a control module and a power supply module; the control module is in communication connection with the communication device, and the power supply module is connected with the power interface.
Specifically, for the severe environments of turbid liquid and water mist, a millimeter-level target center positioning device (referred to as the centering device for short) is designed, which can perform positioning in water-mist environments and in liquid with turbidity of no more than 10 NTU and temperature of no more than 60 °C. The structural design also incorporates a levelness correction module with an accuracy of 0.004°, which compensates for detection errors caused by horizontal offset in detection spaces with large height differences. Modules such as data acquisition, the image acquisition front end and the levelness sensor are integrated into one unit, so that all edge functions are combined; this also simplifies installation of the device in different environments and removes the need to consider the structural problems of equipment erection.
1. High-precision levelness fine adjustment
Real-time two-axis levelness data are acquired through a high-precision inclination sensor, and an embedded program controls four micro motors to raise or lower the two axes, realizing levelness calibration.
2. Defogging
An air duct is formed by the matching structure of micro fans; heat is generated by electrically heated metal wires to raise the temperature of the air flow, and the circulating convection of hot air through the duct defogs the equipment's viewing surface.
3. Visual centering
The image acquired in the turbid-liquid environment is repaired and enhanced through an image enhancement algorithm; positioning is then achieved using the target positioning algorithm of this design, and the physical center deviation (target center minus image center) is calculated.
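The center-deviation computation mentioned above is straightforward; a minimal sketch follows, where the function name and the (height, width) convention are illustrative assumptions, not details from the patent:

```python
def center_deviation(circle_center, image_shape):
    """Deviation of the fitted target-circle center from the image center.

    circle_center: (cx, cy) in pixels; image_shape: (height, width).
    Returns (dx, dy); positive dx means the target lies right of center,
    positive dy means it lies below center (image coordinates).
    """
    cx, cy = circle_center
    h, w = image_shape
    return cx - w / 2.0, cy - h / 2.0
```

A downstream controller would then drive the mechanism until both components fall below the required tolerance.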
4. Structure-module integration
Each functional module is integrated into one unit through the structural design, realizing functional standardization and integration. All functional modules can be controlled through the external data interface provided by the equipment, including data acquisition, levelness adjustment and the like.
1. High-precision levelness fine adjustment
The schematic block diagram of the levelness correction module is shown in fig. 2. On each of the two axes, a screw rod driven by a stepping motor performs the fine levelness adjustment; the two-axis inclination sensor acquires real-time levelness data in both axis directions, and according to these data the stepping motors are finely adjusted through the stepping motor control board and driver board. The detailed flow of the levelness fine-adjustment control is shown in fig. 3.
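The fine-adjustment loop described above can be sketched as a simple proportional correction. The threshold and per-step resolution below are illustrative assumptions; the patent only states a 0.004° module accuracy:

```python
def level_correction_steps(tilt_x_deg, tilt_y_deg,
                           threshold_deg=0.004, deg_per_step=0.001):
    """Convert a two-axis tilt reading into stepper-motor step commands.

    An axis already within the threshold is left alone; otherwise the
    motor is commanded with the number of steps that cancels the
    measured tilt.  Threshold and step resolution are illustrative
    values, not taken from the patent.
    """
    steps = []
    for tilt in (tilt_x_deg, tilt_y_deg):
        if abs(tilt) <= threshold_deg:
            steps.append(0)              # within tolerance: no correction
        else:
            steps.append(round(-tilt / deg_per_step))
    return tuple(steps)
```

In the real device this loop would repeat (measure, correct, re-measure) until both axes report a deviation inside the threshold range.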
2. Defogging function module:
The defogging hot-air channel is formed by milling the structural part. The air flow is provided by one or more small fans, and the heat by electrically heated metal wires. The hot air heats the lead-glass interlayer at the center of the channel and evaporates the water mist on the glass surface; this is the first step in reducing the influence of water mist, namely its influence at the equipment end.
3. Visual centering function module:
the vision centering function is mainly composed of two parts, namely (1) image enhancement processing; (2) target positioning.
(1) Image enhancement:
The purpose of image enhancement is to eliminate the influence of turbid liquid and water mist on imaging. In principle, suspended matter in turbid liquid and a water-mist environment affect imaging in similar ways: both cause uneven light transmission or occlusion, so part of the light information reflected back by the target is lost; this is image degradation caused by the environment. Image enhancement restores the degraded image as far as possible from the available information while strengthening the effective part of the image information, so that the proportion of effective information is higher.
The image enhancement algorithm used in this technique originates from image enhancement and restoration for foggy weather in natural environments. As the above analysis shows, the causes of image degradation in a turbid-liquid environment are essentially similar, so the technique is extended from the natural foggy-weather scenario to the turbid-liquid environment.
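The patent does not name the specific fog-removal algorithm; the dark channel prior is one widely used restoration method of that family, so the sketch below is an assumption, not the patented algorithm. It is deliberately simplified (per-pixel dark channel instead of a local minimum filter, no guided-filter refinement):

```python
import numpy as np

def dehaze_dark_channel(img, omega=0.95, t0=0.1):
    """Simplified dark-channel-prior defogging.

    img: float array (H, W, 3) with values in [0, 1].
    Returns the restored image, also clipped to [0, 1].
    """
    dark = img.min(axis=2)                 # per-pixel dark channel
    # atmospheric light: mean color of the brightest dark-channel pixels
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    t = 1.0 - omega * (img / A).min(axis=2)   # transmission estimate
    t = np.clip(t, t0, 1.0)[..., None]        # keep t away from zero
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

Given the haze model I = J·t + A·(1 − t), the recovery step inverts it pixel by pixel; real implementations add a local minimum filter and edge-preserving refinement of t.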
(2) Target positioning:
the target positioning step mainly comprises two parts: high-precision edge acquisition and high-robustness circle positioning.
A. High-precision edge acquisition:
The high-precision edge acquisition flow is shown in fig. 4. Mathematically, the image data form a three-dimensional matrix whose dimensions are color channel × height × width. The data are therefore processed along the height and width dimensions separately: third-order partial derivative filtering is applied in each single-dimension direction to obtain third-order gradient data; the gradient data are then binarized (globally or locally) with a specific threshold, separating their peaks and troughs to form coarse edge data; finally, a maximum-value filter extracts the high-precision edge.
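The flow above can be sketched as follows. The box blur standing in for Gaussian smoothing, the threshold value, and the neglect of the index offset introduced by differencing are simplifications for illustration, not details from the patent:

```python
import numpy as np

def edge_points(gray, thresh=0.1):
    """Candidate edge points from third-order finite differences along
    each axis of a smoothed gray image, merged into one point set.

    A 3x3 box blur stands in for the Gaussian filter; the small index
    offset caused by differencing is ignored in this sketch.
    """
    k = np.ones((3, 3)) / 9.0
    pad = np.pad(gray, 1, mode='edge')
    smooth = sum(pad[i:i + gray.shape[0], j:j + gray.shape[1]] * k[i, j]
                 for i in range(3) for j in range(3))
    pts = set()
    for axis in (0, 1):                        # Y pass, then X pass
        d3 = np.diff(smooth, n=3, axis=axis)   # third-order difference
        rows, cols = np.nonzero(np.abs(d3) > thresh)
        pts |= set(zip(rows.tolist(), cols.tolist()))
    return pts
```

On a vertical step edge only the X pass responds, which is why the patent merges the two directional point sets before fitting.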
B. High-robustness circle positioning:
The circle fitting positioning flow is shown in fig. 5; the fitting and positioning logic is as follows:
1) Three-point circle fitting: by the properties of a circle, three points determine a circle, so three randomly chosen points determine a candidate circle.
2) Constraint 1 (whether the fourth point is on the circle): when the fourth point also lies on the candidate circle, it indirectly indicates that the candidate circle's edge points are true points of the point set to be fitted rather than noise.
3) Constraint 2 (whether the candidate circle meets the circle threshold constraint): the pre-input radius range of the target circle is the primary reference for this constraint. When the fitted radius does not fall within this range, the candidate circle is directly discarded.
4) Constraint 3 (number of points of the point set falling on the candidate circle): a voting-mechanism constraint. The number of points of the point set to be fitted associated with the candidate circle is taken as its voting result, and candidate circles that do not meet the threshold are directly discarded.
5) Constraint 4 (screening out near-concentric circles): when the center distances of multiple candidate circles are within the constraint range, only the circle with the highest vote count from the previous constraint is retained and the rest are screened out, avoiding repeated responses of the circle fit.
The position of the target circle is determined through the above steps. Compared with traditional circle fitting algorithms, this method can fit the edge of an incomplete circle: a complete circle can still be fitted and positioned when the edge points are discontinuous or only an arc is present, while concentric or invalid circles are suppressed according to the vote count. It therefore has higher universality and robustness, characteristics that match the information loss occurring in turbid-liquid and water-mist environments.
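The five steps above amount to a RANSAC-style circle fit. A sketch with illustrative thresholds follows (the patent gives no numeric values); near-concentric suppression is simplified here to keeping the highest-voted candidate:

```python
import math
import random

def circle_from_3pts(p1, p2, p3):
    """Center and radius of the circle through three points (None if collinear)."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def fit_circle(points, r_range, tol=1.0, min_votes=10, iters=500, seed=0):
    """RANSAC-style circle fit following the patent's four constraints:
    fourth-point check, radius range, vote threshold, and suppression of
    duplicate candidates.  All thresholds are illustrative.
    Returns ((cx, cy), r) or None."""
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        p1, p2, p3, p4 = rng.sample(points, 4)
        fit = circle_from_3pts(p1, p2, p3)
        if fit is None:
            continue
        (cx, cy), r = fit
        if abs(math.hypot(p4[0] - cx, p4[1] - cy) - r) > tol:
            continue                  # constraint 1: 4th point off circle
        if not (r_range[0] <= r <= r_range[1]):
            continue                  # constraint 2: radius out of range
        votes = sum(abs(math.hypot(x - cx, y - cy) - r) <= tol
                    for x, y in points)
        if votes < min_votes:
            continue                  # constraint 3: too few votes
        # constraint 4 (simplified): keep the highest-voted candidate
        if best is None or votes > best[0]:
            best = (votes, (cx, cy), r)
    return (best[1], best[2]) if best else None
```

Because each hypothesis needs only three edge points, a broken arc with sufficient inliers still yields the full circle, which is the incomplete-edge robustness the text claims.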
The foregoing is merely a preferred embodiment of the invention. It should be understood that the invention is not limited to the form disclosed herein, and the above description is not to be construed as excluding other embodiments; the invention is capable of use in various other combinations, modifications and environments, and of changes within the scope of the inventive concept, whether guided by the above teachings or by the skill and knowledge of the relevant art. All modifications and variations made by those skilled in the art that do not depart from the spirit and scope of the invention shall fall within the protection scope of the appended claims.
Claims (7)
1. A target visual positioning method, comprising the following steps:
step one, performing levelness calibration on the image acquisition device and defogging its lens, namely performing horizontal calibration on the image acquisition device through a levelness correction module and defogging the lens through a defogging device;
step two, after the levelness calibration and lens defogging, acquiring a target image through the image acquisition device;
step three, enhancing the acquired target image through an image enhancement algorithm to obtain an enhanced target image, performing high-precision edge acquisition on the enhanced target image to obtain a point set to be fitted of the target edge, performing circle fitting positioning on the point set to be fitted to obtain a set of candidate circles, selecting a target circle from the candidate circles according to constraint conditions, and obtaining the target position from the target circle.
2. The target visual positioning method according to claim 1, wherein the horizontal calibration of the image acquisition device through the levelness correction module comprises the following steps:
performing levelness detection on the image acquisition device through a two-axis inclination sensor module in the levelness correction module to acquire its levelness; judging the deviation between the measured levelness and the standard levelness; and, if the deviation is outside the difference threshold range, adjusting the levelness of the image acquisition device through a levelness adjustment module in the levelness correction module until the deviation falls within the threshold range, thereby completing the horizontal correction of the image acquisition device.
3. The target visual positioning method according to claim 2, wherein defogging the lens by the defogging device comprises: heating, by a heating module in the defogging device, to a set temperature, and blowing, by a fan in the defogging device, the heated airflow onto the lens glass of the image acquisition device to complete defogging of the lens glass.
4. The target visual positioning method according to claim 3, wherein performing high-precision edge acquisition on the enhanced target image to obtain the point set to be fitted of the target edge comprises:
performing Gaussian filtering on the enhanced target image to obtain a smoothed image; performing third-order partial derivative filtering on the smoothed image along the X-axis direction to obtain a candidate point set X; performing third-order partial derivative filtering on the smoothed image along the Y-axis direction to obtain a candidate point set Y; and merging the candidate point set X and the candidate point set Y to obtain the point set to be fitted.
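One way to realize this edge-acquisition step might look like the sketch below. The claim does not say how candidate points are selected from the derivative responses, so the simple magnitude threshold (`rel_thresh`) and the use of `scipy.ndimage.gaussian_filter` are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_candidate_points(img, sigma=1.5, rel_thresh=0.5):
    """Merged candidate point set from third-order derivative
    responses along X and Y (illustrative sketch)."""
    smooth = gaussian_filter(np.asarray(img, dtype=float), sigma)
    # third-order partial derivatives via repeated central differences
    d3x = np.gradient(np.gradient(np.gradient(smooth, axis=1), axis=1), axis=1)
    d3y = np.gradient(np.gradient(np.gradient(smooth, axis=0), axis=0), axis=0)
    # one global threshold, so a featureless direction contributes no points
    t = rel_thresh * max(np.abs(d3x).max(), np.abs(d3y).max())
    px = np.argwhere(np.abs(d3x) >= t)   # candidate point set X
    py = np.argwhere(np.abs(d3y) >= t)   # candidate point set Y
    # merge the two candidate sets, dropping duplicates
    return np.unique(np.vstack([px, py]), axis=0)
```

For a vertical step edge, the response concentrates within about one smoothing width of the true edge column, which is what makes the subsequent circle fit well conditioned.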
5. The target visual positioning method according to claim 4, wherein performing circle fitting positioning according to the point set to be fitted to obtain a candidate circle set, obtaining a target circle from the candidate circle set through constraint conditions, and obtaining the target position according to the target circle comprises:
randomly selecting four points from the point set to be fitted, and determining a candidate circle from three of the four selected points;
judging whether the fourth point lies on the candidate circle; if so, treating the edge points of the candidate circle as true points in the point set to be fitted;
judging whether the candidate circle satisfies the circle threshold constraint: when the radius of the fitted candidate circle is not within the preset target circle radius range, directly discarding the candidate circle;
counting the number of points in the point set to be fitted that fall on the candidate circle, taking this count as the voting response result of the candidate circle, and discarding the candidate circle if the count does not meet the point number threshold;
when the circle center distances of multiple candidate circles are within the constraint range, retaining the candidate circle with the highest vote count and discarding the rest to obtain the target circle, and obtaining the target position according to the target circle.
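The voting scheme of claim 5 resembles a RANSAC-style circle fit with Hough-like voting and non-maximum suppression on nearby centers. A minimal sketch under assumed tolerances (`tol`, `min_votes`, `center_merge`, and the iteration budget are all illustrative parameters, not values from the patent):

```python
import math
import random

def circle_from_3(p1, p2, p3):
    """Circle through three points; None if they are collinear."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, math.hypot(ax - ux, ay - uy)

def fit_target_circle(points, r_range=(5.0, 100.0), tol=2.0,
                      min_votes=10, center_merge=5.0, iters=500, seed=0):
    """Return (votes, cx, cy, r) for the best candidate circle, or None."""
    rng = random.Random(seed)
    candidates = []
    for _ in range(iters):
        p1, p2, p3, p4 = rng.sample(points, 4)
        c = circle_from_3(p1, p2, p3)
        if c is None:
            continue
        cx, cy, r = c
        # the fourth point must also lie on the candidate circle
        if abs(math.hypot(p4[0] - cx, p4[1] - cy) - r) > tol:
            continue
        # radius constraint from the preset target circle range
        if not (r_range[0] <= r <= r_range[1]):
            continue
        # voting response: points of the set that fall on the circle
        votes = sum(abs(math.hypot(x - cx, y - cy) - r) <= tol
                    for x, y in points)
        if votes >= min_votes:
            candidates.append((votes, cx, cy, r))
    # keep only the highest-voted circle among circles with nearby centers
    candidates.sort(reverse=True)
    kept = []
    for v, cx, cy, r in candidates:
        if all(math.hypot(cx - kx, cy - ky) > center_merge
               for _, kx, ky, _ in kept):
            kept.append((v, cx, cy, r))
    return kept[0] if kept else None
```

The four-point sample gives a cheap pre-filter: a circle fitted to three random points is only counted if the fourth point agrees, which discards most outlier-driven hypotheses before the full vote is tallied.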
6. A target visual positioning device applying the target visual positioning method according to claim 5, characterized by comprising an image acquisition device, a levelness correction module, a data processing module, a defogging device, a communication device and a power interface, wherein the image acquisition device, the levelness correction module, the defogging device, the power interface and the communication device are respectively connected with the data processing module.
7. A target visual positioning system comprising the target visual positioning device according to claim 6, characterized by further comprising a control module and a power supply module, wherein the control module is in communication connection with the communication device, and the power supply module is connected with the power interface.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310119961.3A CN116309799A (en) | 2023-02-10 | 2023-02-10 | Target visual positioning method, device and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116309799A true CN116309799A (en) | 2023-06-23 |
Family
ID=86791524
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310119961.3A Pending CN116309799A (en) | 2023-02-10 | 2023-02-10 | Target visual positioning method, device and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116309799A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107333043A (en) * | 2017-07-31 | 2017-11-07 | 重庆电子工程职业学院 | IMAQ and identifying system |
CN209657065U (en) * | 2019-03-20 | 2019-11-19 | 闽南理工学院 | A kind of outdoor image acquisition equipment |
CN111354047A (en) * | 2018-12-20 | 2020-06-30 | 精锐视觉智能科技(深圳)有限公司 | Camera module positioning method and system based on computer vision |
CN112184765A (en) * | 2020-09-18 | 2021-01-05 | 西北工业大学 | Autonomous tracking method of underwater vehicle based on vision |
CN114205537A (en) * | 2021-11-04 | 2022-03-18 | 北京建筑大学 | Multifunctional underwater cultural relic auxiliary searching and high-definition image acquisition equipment and method |
Non-Patent Citations (1)
Title |
---|
Wang Liang: "Intelligent Optoelectronic Perception" (《智能光电感知》), Beijing: China Youth Press, 31 July 2022, pages 275-283 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110497187B (en) | Sun flower pattern assembly system based on visual guidance | |
CN104315978A (en) | Method and device for measuring pipeline end face central points | |
CN109029299B (en) | Dual-camera measuring device and method for butt joint corner of cabin pin hole | |
Luna et al. | Calibration of line-scan cameras | |
CN107797560B (en) | Visual recognition system and method for robot tracking | |
CN106846352B (en) | Knife edge picture acquisition method and device for lens analysis force test | |
US20190049238A1 (en) | Device and method for measuring a surface topography, and calibration method | |
CN204946113U (en) | A kind of optical axis verticality adjusting gear | |
CN110966956A (en) | Binocular vision-based three-dimensional detection device and method | |
CN110751693B (en) | Method, apparatus, device and storage medium for camera calibration | |
Yu et al. | Camera calibration of thermal-infrared stereo vision system | |
CN113516716B (en) | Monocular vision pose measuring and adjusting method and system | |
CN110738644A (en) | automobile coating surface defect detection method and system based on deep learning | |
CN109191527A (en) | A kind of alignment method and device based on minimum range deviation | |
CN105014240A (en) | LED wafer laser cutting device and LED wafer laser cutting levelness adjustment method | |
CN112767338A (en) | Assembled bridge prefabricated part hoisting and positioning system and method based on binocular vision | |
CN112729112A (en) | Engine cylinder bore diameter and hole site detection method based on robot vision | |
CN110766761A (en) | Method, device, equipment and storage medium for camera calibration | |
CN111738971B (en) | Circuit board stereoscopic scanning detection method based on line laser binocular stereoscopic vision | |
CN116309799A (en) | Target visual positioning method, device and system | |
CN205486309U (en) | Through whether correct device of test capacitor installation of shooing | |
CN111104812B (en) | Two-dimensional code recognition device and two-dimensional code detection equipment | |
Ye et al. | Extrinsic calibration of a monocular camera and a single line scanning Lidar | |
CN106441310B (en) | A kind of solar azimuth calculation method based on CMOS | |
CN115909075A (en) | Power transmission line identification and positioning method based on depth vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||