CN116645476B - Rod three-dimensional data model reconstruction method and system based on multi-view vision - Google Patents


Info

Publication number
CN116645476B
CN116645476B (application CN202310851546.7A)
Authority
CN
China
Prior art keywords
bar
bundle
image information
face image
scene picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310851546.7A
Other languages
Chinese (zh)
Other versions
CN116645476A
Inventor
李同
曾彬彬
刘钟
陈斌
王冰锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaoyu Internet Intelligent Technology Changsha Co ltd
Original Assignee
Xiaoyu Internet Intelligent Technology Changsha Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaoyu Internet Intelligent Technology Changsha Co ltd
Priority to CN202310851546.7A
Publication of CN116645476A
Application granted
Publication of CN116645476B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761: Proximity, similarity or dissimilarity measures
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a bar three-dimensional data model reconstruction method and system based on multi-view vision, comprising the following steps: acquiring multi-frame current bar bundle scene pictures of a target bar bundle at the same moment, the frames comprising a reference bar bundle scene picture and a calibration bar bundle scene picture; acquiring all unit bar end face image information in the reference bar bundle scene picture based on a preset image coordinate system and a preset deep learning target detection network model; acquiring, from the calibration bar bundle scene picture, the image information correspondingly matched with each unit bar end face as the search bar end face image information, and acquiring the current end face three-dimensional coordinate data of the corresponding single bar to be positioned through multi-view geometric calculation from the unit bar end face image information and the search bar end face image information. The method suffers little environmental interference and can accurately determine the bar point cloud data of the end face of the whole bundle of bars.

Description

Rod three-dimensional data model reconstruction method and system based on multi-view vision
Technical Field
The invention relates to the technical field of automatic detection of bars, in particular to a bar three-dimensional data model reconstruction method and system based on multi-view vision.
Background
Bar welding robots are widely applied on the bundling end of bar production lines, replacing manual label welding. A label records important information about the produced bars, such as model, size specification, count, whole-bundle weight and production date, so that the bars can be managed, transported and quality-traced.
In the prior art, the following methods are used to acquire point cloud data of the whole bundle of bars and calculate the optimal label-welding point.
One method constrains the position between the end face of the whole bundle of bars and a monocular camera through a tool and, in cooperation with controlled lighting, locates targets through image blob search and morphological filtering; this completes the search only on the 2D image plane, cannot solve the point cloud mapping of an uneven whole-bundle end face, and easily causes robot collisions that damage the welding fixture.
Another adopts a binocular line-scan laser scheme, in which the laser rotates about an axis perpendicular to the binocular baseline to scan the original point cloud of the region; however, the scanning speed is low, specular refraction or high reflectivity of the bar end faces produces data holes, and at certain angles the laser line cannot be seen by the left and right cameras simultaneously.
A third reconstructs with projected structured light; the equipment is expensive and too heavy to be installed on a small robot arm, the installation position is constrained, and the high-temperature environment of a bar production line shortens the service life of the projector bulb.
A fourth adopts traditional binocular or multi-view matching to obtain a simulated whole-bundle end face, from which end face welding points are determined; however, traditional multi-view matching is limited by the multi-view image consistency assumption, is extremely easily disturbed by ambient light in a real scene, and requires additional shading equipment. The resulting point cloud of the simulated whole-bundle end face is seriously noisy, its precision cannot be guaranteed, the depth information of the bars cannot be obtained accurately, and end face welding points cannot be determined on a staggered whole-bundle end face; the resulting interference during automatic robot welding easily causes collisions that damage the welding fixture.
In view of the foregoing, it is necessary to propose a bar three-dimensional data model reconstruction method and system based on multi-view vision, so as to solve or at least alleviate the above drawbacks.
Disclosure of Invention
The invention mainly aims to provide a bar three-dimensional data model reconstruction method and system based on multi-view vision, and aims to solve the technical problem that, in the prior art, the traditional multi-view matching method used to recover the end face information of a whole bundle of bars is easily disturbed by the environment and cannot accurately determine the bar point cloud data of the whole-bundle end face.
In order to achieve the above purpose, the invention provides a bar three-dimensional data model reconstruction method based on multi-view vision, comprising the following steps:
S100, acquiring multi-frame current bar bundle scene pictures of a target bar bundle at the same moment; taking one frame as the reference bar bundle scene picture and the remaining frames as calibration bar bundle scene pictures, wherein the reference and calibration pictures have an overlapping view angle and current mutual calibration parameters;
S200, acquiring all unit bar end face image information in the reference bar bundle scene picture based on a preset image coordinate system and a preset deep learning target detection network model, wherein each piece of unit bar end face image information comprises a pixel starting point parameter Xi, a pixel end point parameter Yi, a pixel width parameter Wi and a pixel height parameter Hi, and calibrates one corresponding single bar I to be positioned;
S300, according to the preset image coordinate system, the current mutual calibration parameters and a preset similarity detection algorithm, acquiring from the calibration bar bundle scene picture the image information correspondingly matched with each unit bar end face as the search bar end face image information, and acquiring the current end face three-dimensional coordinate data Ixyz of the corresponding single bar I to be positioned through multi-view geometric calculation from the unit bar end face image information and the search bar end face image information.
Further, in step S100, priorities are set for the multiple frames of current bar bundle scene pictures according to preset conditions, and the frame with the highest priority is determined as the reference bar bundle scene picture; the preset conditions comprise one or more of the number of captured bars, the definition of the captured bars and the position of the captured bars in the current bar bundle scene picture.
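As one possible reading of this priority scheme, the preset conditions could be combined into a weighted score. The sketch below is an illustrative assumption only: the bar count is taken as given (e.g. from the detection network), sharpness is approximated by the variance of a discrete Laplacian, and the weights `w_count`/`w_sharp` are arbitrary; none of these choices are prescribed by the text.

```python
import numpy as np

def sharpness(gray):
    """Variance of a discrete Laplacian, a common image-sharpness proxy."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def frame_priority(gray, bar_count, w_count=1.0, w_sharp=0.01):
    """Illustrative weighted priority over two of the preset conditions."""
    return w_count * bar_count + w_sharp * sharpness(gray)

# The frame with the highest priority becomes the reference bundle scene picture.
flat = np.zeros((32, 32))                                  # blurry/featureless stand-in
checker = np.indices((32, 32)).sum(axis=0) % 2 * 255.0     # sharp stand-in
frames, counts = [flat, checker], [30, 30]
ref_index = max(range(len(frames)),
                key=lambda i: frame_priority(frames[i], counts[i]))
```

With equal bar counts, the sharper frame wins; a larger `w_count` would instead favor the frame capturing more bars.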
Further, the step S30 specifically includes: s301, acquiring a target search range of a single rod I to be positioned in a reference rod bundle scene picture mapped in a calibration rod bundle scene picture according to the current mutual calibration parameters; s302, acquiring the searched bar end face image information correspondingly matched with the unit bar end face image information from a target searching range according to a preset similarity detection algorithm, and calibrating a mapping unit bar by one piece of the searched bar end face image information; s303, fitting and matching unit bar end face image information and searched bar end face image information to a sub-pixel level; s304, acquiring current end face three-dimensional coordinate data Ixyz of a single bar I to be positioned through multi-view geometric calculation based on a preset image coordinate system, current mutual calibration parameters, fitted unit bar end face image information and searched bar end face image information, and combining a plurality of the current end face three-dimensional coordinate data Ixyz to form bar bundle end face three-dimensional coordinate set data of a target bar bundle.
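The multi-view geometric calculation of step S304 is not spelled out in the text; a standard way to realize it is linear (DLT) triangulation of a matched end face centre. The sketch below assumes, hypothetically, that 3x4 projection matrices `P1`, `P2` are available from the current mutual calibration parameters:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched end-face centre.

    P1, P2: 3x4 projection matrices (assumed derivable from the mutual
    calibration parameters); x1, x2: pixel coordinates of the matched
    bar end face in the reference and calibration pictures."""
    # Each image measurement contributes two rows of A such that A @ X = 0.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null vector = homogeneous 3D point
    return X[:3] / X[3]            # current end-face 3D coordinate Ixyz
```

Repeating this for every matched pair and stacking the results yields the bundle end face three-dimensional coordinate set data described above.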
Further, the step S30 specifically includes: s311, sorting importance degree of the multi-frame calibration bar bundle scene pictures according to a preset rule from high importance degree to low importance degree; s312, according to the current mutual calibration parameters, acquiring a target search range of the single rod I to be positioned in the reference rod bundle scene picture, which is mapped in the calibration rod bundle scene picture with the highest current importance; s313, acquiring the searched bar end face image information correspondingly matched with the unit bar end face image information from a target searching range according to a preset similarity detection algorithm, and calibrating a mapping unit bar by one piece of the searched bar end face image information; s314, fitting and matching the unit bar end face image information and the searched bar end face image information to a sub-pixel level; s315, acquiring current end face three-dimensional coordinate data Ixyz of a single bar I to be positioned based on a preset image coordinate system, current mutual calibration parameters, fitted unit bar end face image information and searched bar end face image information; s316, marking a single bar I to be positioned with current end face three-dimensional coordinate data Ixyz as a single positioned bar, and judging whether all single bars I to be positioned in a scene picture of a reference bar bundle are single positioned bars or not; s317, if all the single bars I to be positioned in the reference bar bundle scene picture are the positioned single bars, acquiring three-dimensional coordinate set data of the end face of the bar bundle of the target bar bundle, wherein the three-dimensional coordinate set data of the end face of the bar bundle of the target bar bundle is formed by combining all the three-dimensional coordinate data Ixyz of the current end face; s318, if the single rod I to be positioned in the reference rod bundle scene picture 
is not all the positioned single rod, eliminating the calibration rod bundle scene picture with the highest current importance and updating, and sorting the updated calibration rod bundle scene picture according to a preset rule; steps S312 to S318 are repeated until three-dimensional coordinate set data of the bundle end face of the target bundle of bars is acquired.
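The S311-S318 loop can be sketched in a few lines. Here `match_fn` is a hypothetical stand-in for the similarity search and geometric calculation of steps S313-S315 (returning the Ixyz coordinate on success, `None` otherwise), and the importance ordering of the calibration pictures is assumed to be given:

```python
def reconstruct_bundle(unit_faces, calib_pictures, match_fn):
    """Sketch of the S311-S318 iteration: try calibration pictures in
    importance order until every single bar to be positioned is located.

    unit_faces: end-face records from the reference picture;
    calib_pictures: calibration pictures sorted high-to-low importance;
    match_fn(face, picture) -> Ixyz tuple, or None when no match found."""
    located = {}
    pictures = list(calib_pictures)
    while pictures and len(located) < len(unit_faces):
        best = pictures.pop(0)          # currently most important picture
        for i, face in enumerate(unit_faces):
            if i in located:
                continue                # already a positioned single bar
            xyz = match_fn(face, best)
            if xyz is not None:
                located[i] = xyz
    # Combine all current end-face coordinates Ixyz into the set data;
    # entries left as None were never matched in any calibration picture.
    return [located.get(i) for i in range(len(unit_faces))]
```

Bars occluded in the most important picture are picked up from later pictures, which is the point of the elimination-and-repeat step S318.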
Further, according to the current mutual calibration parameters, the target search range onto which the unit bar end face image information obj_i of the i-th single bar I to be positioned in the reference bar bundle scene picture is mapped in the calibration bar bundle scene picture is calculated, together with the full target set {obj_X} inside it; obj_i and the target set {obj_X} are input together into the preset deep learning target detection network model to obtain a comparison cosine value between obj_i and each element of {obj_X}; the element with the smallest comparison cosine value is determined as the search bar end face image information correspondingly matched with obj_i.
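The comparison cosine value is not defined further. Reading it as a cosine distance between feature vectors (an assumption, as is representing obj_i and the elements of {obj_X} by such vectors at all), the matching step could look like:

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity; smaller means a closer match."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def best_match(obj_i, obj_X):
    """Index of the element of {obj_X} with the smallest comparison
    cosine value relative to obj_i (hypothetical feature vectors)."""
    return min(range(len(obj_X)),
               key=lambda k: cosine_distance(obj_i, obj_X[k]))
```

Under this reading, "smallest cosine value" selects the candidate whose feature vector points in nearly the same direction as that of obj_i.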
Further, the larger the overlapping view angle with the reference bar bundle scene picture, the higher the importance of the calibration bar bundle scene picture.
Further, the preset deep learning target detection network model is established based on the bar three-dimensional modeling system. The bar three-dimensional modeling system comprises a dual-light-source flash system, a multi-view industrial camera system, an auxiliary 3D measuring device and an edge computing unit. The multi-view industrial camera system acquires multi-frame modeling bar bundle scene pictures at corresponding positions at the same moment; the dual-light-source flash system provides flash support when the camera system shoots; the auxiliary 3D measuring device acquires actual point cloud data; the camera system has current mutual calibration parameters, and the camera system and the auxiliary 3D measuring device have associated extrinsic parameters. Based on the current mutual calibration parameters, the edge computing unit establishes a preliminary deep learning target detection network model by machine learning, taking one modeling bar bundle scene picture as the input data set of the model training set and the remaining modeling bar bundle scene pictures as the output data set. The edge computing unit then acquires the actual point cloud data of the output-set pictures, iteratively corrects the preliminary model on that basis, and so establishes the preset deep learning target detection network model.
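The iterative correction against the auxiliary 3D measurement needs some error signal comparing reconstructed points with the actual point cloud. The patent does not specify one; purely as an illustration, a mean nearest-neighbour distance could serve:

```python
import numpy as np

def mean_nearest_neighbor_error(predicted, actual):
    """Mean distance from each reconstructed end-face point to its
    nearest point in the auxiliary 3D measurement - one plausible
    (but assumed, not prescribed) correction signal."""
    predicted = np.asarray(predicted, float)
    actual = np.asarray(actual, float)
    # Pairwise distance matrix: predicted points x actual points.
    d = np.linalg.norm(predicted[:, None, :] - actual[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```

A decreasing value over training iterations would indicate the model's detections triangulate closer to the ground-truth point cloud.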
The invention also provides a bar three-dimensional data model reconstruction system based on multi-view vision, comprising a dual-light-source flash system, a current multi-view camera system and an edge computing unit. The current multi-view camera system acquires multi-frame current bar bundle scene pictures of the target bar bundle at the same moment; the edge computing unit is electrically connected to the current multi-view camera system and executes the steps of the above bar three-dimensional data model reconstruction method based on multi-view vision.
Compared with the prior art, the bar three-dimensional data model reconstruction method based on multi-view vision has the following beneficial effects:
One frame of reference bar bundle scene picture and at least one frame of calibration bar bundle scene picture are acquired, with an overlapping view angle and known current mutual calibration parameters between them. Through the preset image coordinate system and the preset deep learning target detection network model, the pixel starting point parameter Xi, pixel end point parameter Yi, pixel width parameter Wi and pixel height parameter Hi of each single bar to be positioned in the reference picture are acquired, giving all unit bar end face image information in advance. Through the preset image coordinate system, the current mutual calibration parameters and the preset similarity detection algorithm, the mapping unit bar correspondingly matched with each single bar to be positioned is searched in the calibration picture, each mapping unit bar carrying search bar end face image information. The current end face three-dimensional coordinate data Ixyz of the corresponding single bar I to be positioned is then acquired through multi-view geometric calculation from the unit bar and search bar end face image information. In combination, these techniques back-project each single bar to be positioned of the target bar bundle into space directly from the multi-frame current bar bundle scene pictures, so that environmental interference is small and the bar point cloud data of the whole-bundle end face can be determined accurately.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a bar three-dimensional data model reconstruction method based on multi-view vision according to an embodiment of the present invention;
Fig. 2 is a flow chart of a bar three-dimensional data model reconstruction method based on multi-view vision according to another embodiment of the present invention;
Fig. 3 is a flow chart of a bar three-dimensional data model reconstruction method based on multi-view vision according to still another embodiment of the present invention;
fig. 4 is a schematic structural diagram of the bar three-dimensional modeling system of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Furthermore, the descriptions "first", "second", etc. in this disclosure are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated; a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, provided the combination can be realized by those skilled in the art; when technical solutions are contradictory or cannot be realized, the combination should be considered absent and outside the scope of protection claimed in the present invention.
Referring to Fig. 1, the invention provides a bar three-dimensional data model reconstruction method based on multi-view vision, comprising the following steps:
S100, acquiring multi-frame current bar bundle scene pictures of a target bar bundle at the same moment; taking one frame as the reference bar bundle scene picture and the remaining frames as calibration bar bundle scene pictures, wherein the reference and calibration pictures have an overlapping view angle and current mutual calibration parameters;
S200, acquiring all unit bar end face image information in the reference bar bundle scene picture based on a preset image coordinate system and a preset deep learning target detection network model, wherein each piece of unit bar end face image information comprises a pixel starting point parameter Xi, a pixel end point parameter Yi, a pixel width parameter Wi and a pixel height parameter Hi, and calibrates one corresponding single bar I to be positioned;
S300, according to the preset image coordinate system, the current mutual calibration parameters and a preset similarity detection algorithm, acquiring from the calibration bar bundle scene picture the image information correspondingly matched with each unit bar end face as the search bar end face image information, and acquiring the current end face three-dimensional coordinate data Ixyz of the corresponding single bar I to be positioned through multi-view geometric calculation from the unit bar end face image information and the search bar end face image information.
According to the bar three-dimensional data model reconstruction method based on multi-view vision, one frame of reference bar bundle scene picture and at least one frame of calibration bar bundle scene picture are acquired, with an overlapping view angle and known current mutual calibration parameters between them. Through the preset image coordinate system and the preset deep learning target detection network model, the pixel starting point parameter Xi, pixel end point parameter Yi, pixel width parameter Wi and pixel height parameter Hi of each single bar to be positioned in the reference picture are acquired, giving all unit bar end face image information in advance. Through the preset image coordinate system, the current mutual calibration parameters and the preset similarity detection algorithm, the mapping unit bar correspondingly matched with each single bar to be positioned is searched in the calibration picture, each mapping unit bar carrying search bar end face image information. The current end face three-dimensional coordinate data Ixyz of the corresponding single bar I to be positioned is then acquired through multi-view geometric calculation from the unit bar and search bar end face image information. In combination, these techniques back-project each single bar to be positioned of the target bar bundle into space directly from the multi-frame current bar bundle scene pictures, so that environmental interference is small and the bar point cloud data of the whole-bundle end face can be determined accurately.
It can be understood that the multi-frame current bar bundle scene pictures of the target bar bundle at the same moment are obtained by the current multi-view camera system: a binocular camera system acquires two frames simultaneously, a trinocular camera system three frames, and a four-camera system four frames. Optionally, in the invention, at least two frames of current bar bundle scene pictures are acquired.
It can be appreciated that, in the scheme of the invention, the preset deep learning target detection network model is established based on the bar three-dimensional modeling system: the model is pre-established for various environmental scenes, and all unit bar end face image information in the reference bar bundle scene picture is then acquired from the current bar bundle scene picture based on this model.
Further, in step S100, priorities are set for the multiple frames of current bar bundle scene pictures according to preset conditions, and the frame with the highest priority is determined as the reference bar bundle scene picture; the preset conditions comprise one or more of the number of captured bars, the definition of the captured bars and the position of the captured bars in the current bar bundle scene picture. It can be appreciated that, in the present invention, if priority is set by the number of captured bars, the frame capturing the largest number of bars is set as the reference bar bundle scene picture (i.e., a panoramic image identified through boundary recognition), so that the point cloud data of all single bars to be positioned of the target bar bundle can be restored from the reference picture.
More preferably, if multiple panoramic frames exist, weighting over the captured-bar definition and position information selects, from among them, the frame whose sharpness and shooting angle meet the requirements as the reference bar bundle scene picture; this benefits rapid identification and improves the speed and precision of acquiring the unit bar end face image information.
Further, step S300 specifically comprises:
S301, acquiring, according to the current mutual calibration parameters, the target search range in the calibration bar bundle scene picture onto which a single bar I to be positioned in the reference bar bundle scene picture is mapped;
S302, acquiring, from the target search range according to the preset similarity detection algorithm, the search bar end face image information correspondingly matched with the unit bar end face image information, one piece of search bar end face image information calibrating one mapping unit bar;
S303, fitting the matched unit bar end face image information and search bar end face image information to sub-pixel level;
S304, acquiring the current end face three-dimensional coordinate data Ixyz of the single bar I to be positioned through multi-view geometric calculation based on the preset image coordinate system, the current mutual calibration parameters and the fitted unit bar and search bar end face image information, and combining all the current end face three-dimensional coordinate data Ixyz to form the bar bundle end face three-dimensional coordinate set data of the target bar bundle.
Optionally, if there are two frames of current bar bundle scene pictures, steps S301 to S304 are performed: all mapping unit bars in the calibration bar bundle scene picture are directly matched with the corresponding single bars I to be positioned in the reference bar bundle scene picture, so that the current end face three-dimensional coordinate data Ixyz of all successfully matched single bars I to be positioned can be acquired rapidly.
Referring to fig. 2, when the current bar bundle scene picture comprises two frames, an embodiment of the present invention provides a specific bar three-dimensional data model reconstruction method based on multi-view vision, which includes the following steps:
S100, acquiring two frames of current bar bundle scene pictures of a target bar bundle at the same moment, taking one frame of panoramic current bar bundle scene picture as the reference bar bundle scene picture and the remaining frame of current bar bundle scene picture as the calibration bar bundle scene picture, wherein an overlapping view angle and known current mutual calibration parameters exist between the reference bar bundle scene picture and the calibration bar bundle scene picture;
S200, based on a preset image coordinate system and a preset deep learning target detection network model, acquiring the unit bar end face image information corresponding to each single bar to be positioned in the reference bar bundle scene picture, the unit bar end face image information comprising a pixel starting point parameter Xi, a pixel end point parameter Yi, a pixel width parameter Wi and a pixel height parameter Hi, thereby acquiring all the unit bar end face image information corresponding to all the single bars to be positioned in the reference bar bundle scene picture;
S301, according to the current mutual calibration parameters, acquiring the target search range in the calibration bar bundle scene picture onto which the single bar I to be positioned in the reference bar bundle scene picture is mapped;
S302, according to a preset similarity detection algorithm, acquiring from the target search range the searched bar end face image information correspondingly matched with the unit bar end face image information, one piece of searched bar end face image information calibrating one mapping unit bar;
S303, fitting and matching the unit bar end face image information and the searched bar end face image information to sub-pixel level;
S304, based on the preset image coordinate system, the current mutual calibration parameters, and the fitted unit bar end face image information and searched bar end face image information, acquiring the current end face three-dimensional coordinate data Ixyz of the single bar I to be positioned through multi-view geometric calculation, and combining the plurality of current end face three-dimensional coordinate data Ixyz to form the bar bundle end face three-dimensional coordinate set data of the target bar bundle.
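For a rectified binocular pair of the kind used here, the multi-view geometric calculation of this final step reduces to disparity-based triangulation. The following minimal sketch assumes row-aligned images; the focal length, baseline and principal point are illustrative values, not parameters from the patent.

```python
import numpy as np

def triangulate_rectified(u_left, v, u_right, f, baseline, cx, cy):
    """Recover a 3-D end-face centre from one matched detection pair in a
    row-aligned (rectified) stereo setup. f is the focal length in pixels,
    baseline the camera separation in metres; all values are illustrative."""
    disparity = u_left - u_right                   # horizontal offset in pixels
    Z = f * baseline / disparity                   # depth from disparity
    X = (u_left - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

# A bar end face detected at column 700 (left) and 650 (right), row 400:
p = triangulate_rectified(700.0, 400.0, 650.0,
                          f=1000.0, baseline=0.12, cx=640.0, cy=360.0)
```

Collecting one such point per matched single bar yields the bar bundle end face three-dimensional coordinate set data.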
When the bar three-dimensional data model is reconstructed through a binocular camera system in this way, the technical problems that the traditional multi-view matching method for recovering the end face information of a whole bundle of bars is easily interfered with by the environment and cannot accurately determine the bar point cloud data of the whole-bundle end face are avoided.
Further, if the current bar bundle scene picture is greater than two frames, step S300 specifically includes: S311, sorting the multi-frame calibration bar bundle scene pictures by importance, from high to low, according to a preset rule; S312, according to the current mutual calibration parameters, acquiring the target search range, in the calibration bar bundle scene picture with the highest current importance, onto which the single bar I to be positioned in the reference bar bundle scene picture is mapped; S313, according to a preset similarity detection algorithm, acquiring from the target search range the searched bar end face image information correspondingly matched with the unit bar end face image information, one piece of searched bar end face image information calibrating one mapping unit bar; S314, fitting and matching the unit bar end face image information and the searched bar end face image information to sub-pixel level; S315, acquiring the current end face three-dimensional coordinate data Ixyz of the single bar I to be positioned based on the preset image coordinate system, the current mutual calibration parameters, and the fitted unit bar end face image information and searched bar end face image information; S316, marking each single bar I to be positioned that has current end face three-dimensional coordinate data Ixyz as a positioned single bar, and judging whether all the single bars I to be positioned in the reference bar bundle scene picture are positioned single bars; S317, if all the single bars I to be positioned in the reference bar bundle scene picture are positioned single bars, acquiring the bar bundle end face three-dimensional coordinate set data of the target bar bundle, the set data being formed by combining all the current end face three-dimensional coordinate data Ixyz; S318, if not all the single bars I to be positioned in the reference bar bundle scene picture are positioned single bars, eliminating the calibration bar bundle scene picture with the highest current importance, updating, and sorting the updated calibration bar bundle scene pictures according to the preset rule; steps S312 to S318 are repeated until the bar bundle end face three-dimensional coordinate set data of the target bar bundle is acquired.
Referring to fig. 3, if the current bar bundle scene picture is greater than two frames, another embodiment of the present invention provides a specific bar three-dimensional data model reconstruction method based on multi-view vision, which includes the following steps:
S100, acquiring multi-frame current bar bundle scene pictures of a target bar bundle at the same moment, taking one frame of panoramic current bar bundle scene picture as the reference bar bundle scene picture and the remaining frames of current bar bundle scene pictures as the calibration bar bundle scene pictures, wherein an overlapping view angle and known current mutual calibration parameters exist between the reference bar bundle scene picture and each calibration bar bundle scene picture;
S200, based on a preset image coordinate system and a preset deep learning target detection network model, acquiring the unit bar end face image information corresponding to each single bar to be positioned in the reference bar bundle scene picture, the unit bar end face image information comprising a pixel starting point parameter Xi, a pixel end point parameter Yi, a pixel width parameter Wi and a pixel height parameter Hi, thereby acquiring all the unit bar end face image information corresponding to all the single bars to be positioned in the reference bar bundle scene picture;
S311, sorting the multi-frame calibration bar bundle scene pictures by importance, from high to low, according to a preset rule;
S312, according to the current mutual calibration parameters, acquiring the target search range, in the calibration bar bundle scene picture with the highest current importance, onto which the single bar I to be positioned in the reference bar bundle scene picture is mapped;
S313, according to a preset similarity detection algorithm, acquiring from the target search range the searched bar end face image information correspondingly matched with the unit bar end face image information, one piece of searched bar end face image information calibrating one mapping unit bar;
S314, fitting and matching the unit bar end face image information and the searched bar end face image information to sub-pixel level;
S315, acquiring the current end face three-dimensional coordinate data Ixyz of the single bar I to be positioned based on the preset image coordinate system, the current mutual calibration parameters, and the fitted unit bar end face image information and searched bar end face image information;
S316, marking each single bar I to be positioned that has current end face three-dimensional coordinate data Ixyz as a positioned single bar, and judging whether all the single bars I to be positioned in the reference bar bundle scene picture are positioned single bars;
S317, if all the single bars I to be positioned in the reference bar bundle scene picture are positioned single bars, acquiring the bar bundle end face three-dimensional coordinate set data of the target bar bundle, the set data being formed by combining all the current end face three-dimensional coordinate data Ixyz;
S318, if not all the single bars I to be positioned in the reference bar bundle scene picture are positioned single bars, eliminating the calibration bar bundle scene picture with the highest current importance, updating, and sorting the updated calibration bar bundle scene pictures according to the preset rule;
steps S312 to S318 are repeated until the bar bundle end face three-dimensional coordinate set data of the target bar bundle is acquired.
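The loop of steps S311 to S318 can be sketched as follows. Here `match_in_frame` is a hypothetical helper standing in for steps S313 to S315 (similarity search, sub-pixel fitting and triangulation); it returns a 3-D point when the bar can be matched in the given calibration frame, and `None` otherwise.

```python
def reconstruct_bundle(unit_faces, calib_frames, match_in_frame):
    """Sketch of steps S311-S318: walk the calibration frames from most to
    least important, locating every bar that can be matched in the current
    frame and deferring the rest to the next frame."""
    frames = sorted(calib_frames, key=lambda f: f["importance"], reverse=True)  # S311
    located, pending = {}, list(unit_faces)
    for frame in frames:                           # S312: highest importance first
        still_pending = []
        for face in pending:
            point = match_in_frame(face, frame)    # S313-S315 (search, fit, solve)
            if point is not None:
                located[face["id"]] = point        # S316: mark as positioned
            else:
                still_pending.append(face)
        pending = still_pending
        if not pending:                            # S317: every bar located
            break                                  # S318 otherwise: next frame
    return located, pending

faces = [{"id": 1}, {"id": 2}]
frames = [{"importance": 1, "hits": {2: (0, 0, 1)}},
          {"importance": 2, "hits": {1: (0, 0, 2)}}]
located, leftovers = reconstruct_bundle(
    faces, frames, lambda face, frame: frame["hits"].get(face["id"]))
```

In the toy data above, the more important frame resolves bar 1 and the next frame resolves bar 2, mirroring the fallback behaviour of steps S317/S318.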
When the bar three-dimensional data model is reconstructed through a multi-view camera system in this way, the technical problems that the traditional multi-view matching method for recovering the end face information of a whole bundle of bars is easily interfered with by the environment and cannot accurately determine the bar point cloud data of the whole-bundle end face are avoided. Meanwhile, priorities are set in turn through the importance sorting of the calibration bar bundle scene pictures: when the calibration bar bundle scene picture with the highest priority cannot recover all the single bars to be positioned in the reference bar bundle scene picture, the remaining single bars to be positioned are further recovered from the calibration bar bundle scene picture of the next priority, until the bar bundle end face three-dimensional coordinate set data of the target bar bundle is acquired.
Further, according to the current mutual calibration parameters, the target search range, under the calibration bar bundle scene picture, onto which the unit bar end face image information obj_i of the i-th single bar I to be positioned in the reference bar bundle scene picture is mapped is calculated, together with the set {obj_X} of all targets within that range; the unit bar end face image information obj_i and the target set {obj_X} are input together into the preset deep learning target detection network model, and a comparison cosine value between obj_i and each element of the target set {obj_X} is obtained; the element with the smallest comparison cosine value is determined to be the searched bar end face image information obj_j correspondingly matched with the unit bar end face image information obj_i.
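A minimal version of this cosine comparison over feature vectors can be sketched as follows. Since the text only states that the smallest comparison cosine value wins, the sketch assumes that value is a cosine distance 1 − cos(θ) between embedding vectors; the vectors themselves are illustrative stand-ins for the network's features.

```python
import numpy as np

def best_match(query, candidates):
    """Return the index and distance of the candidate feature vector closest
    to the query. 'Closest' is taken here as smallest cosine distance
    1 - cos(theta) -- an assumption, since the text only says the smallest
    comparison cosine value wins."""
    def cos_dist(a, b):
        return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    dists = [cos_dist(query, c) for c in candidates]
    j = int(np.argmin(dists))
    return j, dists[j]

query = np.array([1.0, 0.0])                          # embedding of obj_i
cands = [np.array([0.0, 1.0]), np.array([0.9, 0.1])]  # embeddings of {obj_X}
j, d = best_match(query, cands)                       # j indexes the match obj_j
```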
Further, in order to improve the processing speed and processing accuracy, the preset rule is determined based on the overlapping view angle and the image definition. Specifically, the larger the range of the overlapping view angle with the reference bar bundle scene picture, the higher the importance of the calibration bar bundle scene picture; among multiple calibration bar bundle scene pictures with the same overlapping view angle with the reference bar bundle scene picture, the higher the definition, the higher the importance.
Referring to fig. 4, further, the preset deep learning target detection network model is established based on a bar three-dimensional modeling system. The bar three-dimensional modeling system comprises a double-light-source flash lamp system, a multi-view industrial camera system, auxiliary 3D measuring equipment and an edge computing unit: the multi-view industrial camera system is used for acquiring multi-frame modeling bar bundle scene pictures at corresponding positions at the same moment; the double-light-source flash lamp system is used for providing flash support when the multi-view industrial camera system shoots; the auxiliary 3D measuring equipment is used for acquiring actual point cloud data information; the multi-view industrial camera system has the current mutual calibration parameters, and the multi-view industrial camera system and the auxiliary 3D measuring equipment have associated external parameters. Based on the current mutual calibration parameters, the edge computing unit establishes a preliminary deep learning target detection network model by machine learning, taking one frame of the modeling bar bundle scene pictures as the input data set of the model training set and the remaining modeling bar bundle scene pictures as the output data set; the edge computing unit then acquires the actual point cloud data information of the modeling bar bundle scene pictures in the output data set, performs iterative correction on the preliminary deep learning target detection network model based on that information, and thereby establishes the preset deep learning target detection network model.
According to the invention, the preset deep learning target detection network model is established in advance based on the multi-view industrial camera system, the double-light-source flash lamp system, the auxiliary 3D measuring equipment and the edge computing unit; the current end face three-dimensional coordinate data Ixyz of each single bar I to be positioned is then acquired based on the multi-view industrial camera system, the double-light-source flash lamp system and the edge computing unit, avoiding the technical problems that the traditional multi-view matching method for recovering the end face information of a whole bundle of bars is easily influenced by the environment and cannot accurately determine the bar point cloud data of the whole-bundle end face.
The invention provides a bar three-dimensional data model reconstruction method based on multi-view vision whose specific principle comprises three stages: data acquisition, data training, and deployment reconstruction.
Data acquisition (production site acquisition): a data acquisition system is constructed at the bar bundling assembly line site, and a bar data set is acquired and established.
1) A data acquisition system is established at the bar production site. The calibration distortion parameters, calibration internal parameters and calibration external parameters of the binocular cameras in the binocular camera system are acquired first, so that the epipolar lines are aligned after the binocular cameras are calibrated; the associated external parameters between the binocular camera system and the auxiliary 3D measuring equipment are then calibrated. The binocular camera system and the auxiliary 3D measuring equipment are connected in a hardware wire-control mode, so that they can be triggered simultaneously and synchronously to acquire data;
2) The acquired data are cleaned and screened: the depth data acquired by the auxiliary 3D measuring equipment are filtered, and data pairs with errors and noise are removed.
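The property the calibration in step 1) is meant to guarantee — that after rectification a correct left/right match lies on the same image row — can be checked directly during data cleaning. The pixel tolerance below is an assumption for the sketch.

```python
import numpy as np

def rows_aligned(matched_rows, tol=1.0):
    """Check that matched left/right detections lie on (nearly) the same
    image row, as rectified epipolar geometry requires. tol is a pixel
    tolerance chosen for this sketch."""
    m = np.asarray(matched_rows, dtype=float)      # rows of (v_left, v_right)
    return bool(np.all(np.abs(m[:, 0] - m[:, 1]) <= tol))

good = rows_aligned([(100.2, 100.6), (250.0, 249.5)])
bad = rows_aligned([(100.0, 140.0)])               # grossly misaligned pair
```

Match pairs that fail this check would be discarded along with the noisy depth data.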
Data training (production site modeling): for the three-dimensional reconstruction of bars, attention needs to be paid only to each single bar, not to objects such as the non-bar background; a target-level preset deep learning target detection network model is therefore provided, so that three-dimensional reconstruction is performed only on bar regions and other non-bar regions are not computed.
1) Data enhancement is performed on the two frames of modeling image data acquired by the binocular camera system. In practical application, the diameter of a single bar is generally between 8 mm and 40 mm, and when photographing, the optical system often suffers from motion blur caused by bar vibration, image overexposure caused by overlong camera exposure, underexposure caused by backlight interference, and the like; accordingly, enhancements such as image size scaling, local-region blurring, mosaic processing, random salt-and-pepper noise, and random brightness increase and decrease bring a relative improvement.
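Two of the enhancements named above — a brightness change and random salt-and-pepper noise — can be sketched on an 8-bit grayscale image as follows; the noise probability and brightness factor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, p_noise=0.02, brightness=1.5):
    """Apply a brightness change plus random salt-and-pepper noise to an
    8-bit grayscale image. Parameter values are illustrative assumptions."""
    out = img.astype(np.float32) * brightness      # brightness jitter
    out = np.clip(out, 0, 255)
    mask = rng.random(img.shape)
    out[mask < p_noise / 2] = 0                    # pepper pixels
    out[mask > 1 - p_noise / 2] = 255              # salt pixels
    return out.astype(np.uint8)

img = np.full((32, 32), 100, dtype=np.uint8)       # uniform gray test image
aug = augment(img)
```

The remaining enhancements (scaling, blurring, mosaic) follow the same pattern of perturbing the training images without changing the annotated bar positions beyond the applied transform.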
2) The binocular images (the two frames of modeling image data) are row-aligned; the point cloud of the auxiliary 3D measuring equipment is projected into the left-eye coordinate system according to the calibration distortion parameters, calibration internal parameters and calibration external parameters; after the bar target region annotated in the left eye is subjected to depth constraint and row constraint, the corresponding right-eye target region is calculated, and the right-eye target region is annotated, adjusted and corrected (in the invention, through manual processing). Finally the preset deep learning target detection network model is established, so that all the unit bar end face image information in the reference bar bundle scene picture can be conveniently obtained, one single bar to be positioned being calibrated by one piece of unit bar end face image information.
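The projection of the auxiliary measurement's point cloud into the left-eye view follows the standard pinhole model. The sketch below assumes calibrated extrinsics (R, t) and intrinsics K, with illustrative values, and ignores lens distortion, which the full pipeline would first undo with the calibration distortion parameters.

```python
import numpy as np

def project_to_left(points_world, R, t, K):
    """Project 3-D measurement points into the left camera image using the
    calibrated extrinsics (R, t) and intrinsics K (pinhole model; lens
    distortion is ignored in this sketch)."""
    pts_cam = R @ points_world.T + t.reshape(3, 1)   # world -> left camera
    uv = K @ pts_cam
    uv = uv[:2] / uv[2]                              # perspective divide
    return uv.T, pts_cam[2]                          # pixel coords and depths

K = np.array([[1000.0, 0.0, 640.0],                  # illustrative intrinsics
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)                        # identity extrinsics
pts = np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 2.0]])
uv, depths = project_to_left(pts, R, t, K)
```

The returned depths are what the depth constraint mentioned above filters on when transferring left-eye annotations to the right eye.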
Specifically, a two-stage network structure is designed, consisting of a target detection network and a target matching network. First, bar target inference is performed on the binocular images through the target detection network to obtain the bar regions; the search region is then limited through a binocular epipolar constraint method, and the target frames within the search region obtain the optimal matching result through the target matching network. Because the network is ultimately deployed on an edge computing platform, a structure selection suited to that platform is made; the structure design is shown in fig. 2.
Deployment reconstruction (production site use): multi-frame current bar bundle scene pictures are obtained by on-site shooting, and after processing by the network structure and sub-pixel position optimization, the three-dimensional reconstruction point cloud of the target bar bundle is finally output based on the two frames of pictures.
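The text does not specify how the sub-pixel position optimization is carried out; a common stand-in is a three-point parabola fit over the matching cost around the best integer position, sketched here with an illustrative cost profile.

```python
import numpy as np

def subpixel_minimum(costs, i):
    """Refine an integer matching position i to sub-pixel precision by
    fitting a parabola through the costs at i-1, i and i+1."""
    c_l, c_0, c_r = costs[i - 1], costs[i], costs[i + 1]
    denom = c_l - 2.0 * c_0 + c_r
    if denom == 0.0:
        return float(i)                            # flat neighbourhood: keep i
    return i + 0.5 * (c_l - c_r) / denom           # vertex of the parabola

costs = np.array([4.0, 1.0, 2.0])                  # minimum biased to the right
x = subpixel_minimum(costs, 1)
```

Because the left neighbour's cost (4.0) exceeds the right neighbour's (2.0), the refined position lands slightly to the right of the integer minimum.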
The principle of the bar three-dimensional data model reconstruction system based on multi-view vision provided by the invention is as follows:
the bar three-dimensional data model reconstruction system based on multi-view vision is formed by removing the auxiliary 3D measuring equipment from the bar three-dimensional modeling system. The bar three-dimensional modeling system comprises a double-light-source flash lamp system, a multi-view industrial camera, a high-precision depth camera system and an edge computing unit; the multi-view industrial camera and the high-precision depth camera system are rigidly connected, the light source of the double-light-source flash lamp system consists of a white-light and an infrared light source, and calibration distortion parameters, calibration internal parameters, calibration external parameters and associated external parameters exist between the devices. On the bar production line, the double-light-source flash lamp system, the multi-view industrial camera and the high-precision depth camera system are triggered synchronously: the double-light-source flash lamp system provides flash-mode support for capturing the whole-bundle bar data, the multi-view industrial camera collects the multi-view data, the high-precision depth camera system collects the depth data, and the noise data collected by the high-precision depth camera system are finally screened out and removed. Specifically, in order to reduce both the dependence on field data and the data quality problems caused by sensor noise, the data acquisition system of this embodiment can also collect data on a data simulation platform, reducing the dependence on external equipment: a camera model is constructed in a Unity3D digital twin system, bar models are randomly combined and rendered in appearance, and high-precision data are constructed artificially.
The acquired data are then annotated by a semi-automatic method combined with machine learning: manual annotation is performed on a small data set, a large model network is trained on it as a teaching annotation network, the remaining data are annotated automatically, the automatically annotated data are manually screened and cleaned, and the preset deep learning target detection network model is trained.
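The semi-automatic flow above amounts to routing the teaching network's predictions by confidence before training the final model; the threshold values in this sketch are illustrative assumptions.

```python
def split_pseudo_labels(predictions, conf_keep=0.9, conf_review=0.5):
    """Route a teaching network's predictions by confidence: keep confident
    labels automatically, queue uncertain ones for manual screening, and
    drop the rest. Threshold values are illustrative assumptions."""
    auto, review = [], []
    for p in predictions:
        if p["conf"] >= conf_keep:
            auto.append(p)
        elif p["conf"] >= conf_review:
            review.append(p)                       # goes to manual cleaning
    return auto, review

preds = [{"id": "a", "conf": 0.95},
         {"id": "b", "conf": 0.7},
         {"id": "c", "conf": 0.2}]
auto, review = split_pseudo_labels(preds)
```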
When used in a working scene, the bar three-dimensional data model reconstruction system based on multi-view vision comprises the double-light-source flash lamp system, the multi-view industrial camera and the edge computing unit: the industrial camera and the flash lamp work together to collect image data, the neural network computation process is completed in the edge computing unit, and the bar point cloud data of the target bar bundle are output.
The invention also provides a bar three-dimensional data model reconstruction system based on multi-view vision, which comprises a double-light-source flash lamp system, a current multi-view camera system and an edge computing unit; the current multi-view camera system is used for acquiring multi-frame current bar bundle scene pictures of a target bar bundle at the same moment, the edge computing unit is electrically connected with the current multi-view camera system, and the edge computing unit is used for executing the steps of the above bar three-dimensional data model reconstruction method based on multi-view vision.
The foregoing description covers only the preferred embodiments of the present invention and is not intended to limit the scope of the invention; any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (6)

1. A bar three-dimensional data model reconstruction method based on multi-view vision is characterized by comprising the following steps:
s100, acquiring multi-frame current bar bundle scene pictures of a target bar bundle at the same moment, taking one frame of the current bar bundle scene picture as a reference bar bundle scene picture, taking the rest frames of the current bar bundle scene pictures as calibration bar bundle scene pictures, wherein an overlapping visual angle and current mutual calibration parameters are arranged between the reference bar bundle scene picture and the calibration bar bundle scene picture;
s200, acquiring all unit bar end face image information in the reference bar bundle scene picture based on a preset image coordinate system and a preset deep learning target detection network model, wherein the unit bar end face image information comprises a pixel starting point parameter Xi, a pixel end point parameter Yi, a pixel width parameter Wi and a pixel height parameter Hi, and one single bar I to be positioned is corresponding to one unit bar end face image information in calibration;
S300, acquiring corresponding matched image information from the calibrated bar bundle scene picture based on the unit bar end face image information to be the searched bar end face image information according to the preset image coordinate system, the current mutual calibration parameters and a preset similarity detection algorithm, and acquiring corresponding current end face three-dimensional coordinate data Ixyz of the single bar I to be positioned through multi-view geometric calculation according to the unit bar end face image information and the searched bar end face image information;
if the current bar bundle scene picture is two frames, step S300 specifically includes: s301, according to the current mutual calibration parameters, acquiring a target search range of the single bar I to be positioned in the reference bar bundle scene picture mapped in the calibration bar bundle scene picture; s302, acquiring searched bar end face image information correspondingly matched with the unit bar end face image information from the target searching range according to a preset similarity detection algorithm, wherein one piece of the searched bar end face image information is used for calibrating a mapping unit bar; s303, fitting and matching the unit bar end face image information and the search bar end face image information to a sub-pixel level; s304, acquiring current end face three-dimensional coordinate data Ixyz of the single rod I to be positioned through multi-view geometric calculation based on the preset image coordinate system, the current mutual calibration parameters, the fitted unit rod end face image information and the search rod end face image information, and combining a plurality of the current end face three-dimensional coordinate data Ixyz to form rod bundle end face three-dimensional coordinate set data of the target rod bundle;
If the current bar bundle scene picture is greater than two frames, the step S300 specifically includes: s311, sorting importance of the multi-frame calibration bar bundle scene pictures according to a preset rule from high importance to low importance; s312, according to the current mutual calibration parameters, acquiring a target search range of the single bar I to be positioned in the reference bar bundle scene picture, which is mapped in the calibration bar bundle scene picture with highest current importance; s313, acquiring searched bar end face image information correspondingly matched with the unit bar end face image information from the target searching range according to a preset similarity detection algorithm, wherein one piece of the searched bar end face image information is used for calibrating a mapping unit bar; s314, fitting and matching the unit bar end face image information and the search bar end face image information to a sub-pixel level; s315, acquiring current end face three-dimensional coordinate data Ixyz of the single bar I to be positioned based on the preset image coordinate system, the current mutual calibration parameters, the fitted unit bar end face image information and the search bar end face image information; s316, marking the single bar I to be positioned with the current end face three-dimensional coordinate data Ixyz as a single positioned bar, and judging whether all the single bars I to be positioned in the reference bar bundle scene picture are the single positioned bars; s317, if all the single bars I to be positioned in the reference bar bundle scene picture are the acquired positioned single bars, acquiring three-dimensional coordinate set data of the end face of the bar bundle of the target bar bundle, wherein the three-dimensional coordinate set data of the end face of the bar bundle of the target bar bundle is formed by combining all the three-dimensional coordinate data Ixyz of the current end face; s318, if the 
single rod I to be positioned in the reference rod bundle scene picture is not all the single rod I to be positioned, eliminating the calibration rod bundle scene picture with the highest current importance and updating, and sorting the updated calibration rod bundle scene picture according to a preset rule; repeating steps S312 to S318 until three-dimensional coordinate set data of the end face of the bar bundle of the target bar bundle is obtained.
2. The method for reconstructing the three-dimensional data model of the rod based on the multi-view according to claim 1, wherein,
in step S100, setting the priority of the current bar bundle scene pictures of a plurality of frames according to preset conditions, and determining the current bar bundle scene picture with the highest priority as the reference bar bundle scene picture;
the preset conditions comprise one or more of the quantity information of the shot bars, the definition information of the shot bars and the position information of the shot bars of the current bar bundle scene picture.
3. The method for reconstructing a three-dimensional data model of a rod based on multi-view according to claim 1 or 2, wherein,
calculating, according to the current mutual calibration parameters, the target search range, under the calibration bar bundle scene picture, onto which the unit bar end face image information obj_i of the ith single bar I to be positioned in the reference bar bundle scene picture is mapped, together with the set { obj_X } of all targets within that range;
inputting the unit bar end face image information obj_i and the target set { obj_x } together into a preset deep learning target detection network model, and obtaining a comparison cosine value between each numerical value in the unit bar end face image information obj_i and the target set { obj_x };
and determining the searched bar end face image information with the smallest comparison cosine value as the information correspondingly matched with the unit bar end face image information obj_i.
4. The method for reconstructing the three-dimensional data model of the rod based on the multi-view according to claim 3,
wherein the larger the range of the overlapping view angle with the reference bar bundle scene picture, the higher the importance of the calibration bar bundle scene picture.
5. The method for reconstructing the three-dimensional data model of the rod based on the multi-view according to claim 1, wherein,
establishing a preset deep learning target detection network model based on a bar three-dimensional modeling system;
the bar three-dimensional modeling system comprises a double-light-source flash lamp system, a multi-eye industrial camera system, auxiliary 3D measuring equipment and an edge calculating unit, wherein the multi-eye industrial camera system is used for acquiring multi-frame modeling bar bundle scene pictures at corresponding positions at the same moment, the double-light-source flash lamp system is used for providing flash lamp support when the multi-eye industrial camera system shoots, the auxiliary 3D measuring equipment is used for acquiring actual point cloud data information data, the multi-eye industrial camera system has current mutual calibration parameters, and the multi-eye industrial camera system and the auxiliary 3D measuring equipment have associated external parameters;
The edge calculation unit is used for establishing a preliminary deep learning target detection network model by adopting machine learning based on the current mutual calibration parameters, taking one frame of modeling bar bundle scene picture as an input data set in a model training set, and taking the rest frames of modeling bar bundle scene pictures as output data sets in the model training set;
the edge calculation unit is used for acquiring the actual point cloud data information data of the modeling bar bundle scene pictures of the output data set, carrying out iterative correction on the preliminary deep learning target detection network model based on the actual point cloud data information data, and establishing the preset deep learning target detection network model.
6. A bar three-dimensional data model reconstruction system based on multi-view vision is characterized in that,
the method comprises a double-light-source flash lamp system, a current multi-view camera system and an edge computing unit, wherein the current multi-view camera system is used for acquiring multi-frame current bar bundle scene pictures of a target bar bundle at the same moment, the edge computing unit is electrically connected with the current multi-view camera system, and the edge computing unit is used for executing the steps of the multi-view vision-based bar three-dimensional data model reconstruction method according to any one of claims 1-5.
CN202310851546.7A 2023-07-12 2023-07-12 Rod three-dimensional data model reconstruction method and system based on multi-view vision Active CN116645476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310851546.7A CN116645476B (en) 2023-07-12 2023-07-12 Rod three-dimensional data model reconstruction method and system based on multi-view vision

Publications (2)

Publication Number Publication Date
CN116645476A CN116645476A (en) 2023-08-25
CN116645476B true CN116645476B (en) 2023-10-24

Family

ID=87623226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310851546.7A Active CN116645476B (en) 2023-07-12 2023-07-12 Rod three-dimensional data model reconstruction method and system based on multi-view vision

Country Status (1)

Country Link
CN (1) CN116645476B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237546B (en) * 2023-11-14 2024-01-30 武汉大学 Three-dimensional profile reconstruction method and system for material-adding component based on light field imaging

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140031138A (en) * 2012-09-03 2014-03-12 가부시키가이샤 고베 세이코쇼 Shape inspection apparatus and method of bar steel
CN103658197A (en) * 2012-09-03 2014-03-26 株式会社神户制钢所 Shape checking device of steel bar and shape checking method of steel bar
CN205034418U (en) * 2015-10-19 2016-02-17 安阳工学院 Automatic branch steel device calmly of rod based on machine vision
CN113139900A (en) * 2021-04-01 2021-07-20 北京科技大学设计研究院有限公司 Method for acquiring complete surface image of bar
CN113379894A (en) * 2021-06-10 2021-09-10 西安亚思工业自动化控制有限公司 Three-dimensional data model reconstruction method for bar
CN114581368A (en) * 2022-01-18 2022-06-03 无锡瑞进智能工程有限公司 Bar welding method and device based on binocular vision
CN115319338A (en) * 2022-05-24 2022-11-11 柳州职业技术学院 Deformed steel bar welding robot 3D visual counting and positioning method based on deep learning
CN116258745A (en) * 2023-01-04 2023-06-13 北京科技大学 Rod end target tracking method based on self-adaptive difference

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Bar recognition and counting method based on multi-view information fusion; Luo Sanding; Huang Jiangfeng; Li Yong; Computer Engineering (Issue 03); full text *
Research on real-time bar image recognition and tracking methods; Zhang Yusheng; Fu Yongling; Journal of Beijing University of Aeronautics and Astronautics (Issue 05); full text *

Similar Documents

Publication Publication Date Title
CN110349251B (en) Three-dimensional reconstruction method and device based on binocular camera
TWI555379B (en) An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
KR100681320B1 (en) Method for modelling three dimensional shape of objects using level set solutions on partial difference equation derived from helmholtz reciprocity condition
CN111060023A (en) High-precision 3D information acquisition equipment and method
CN112509125A (en) Three-dimensional reconstruction method based on artificial markers and stereoscopic vision
CN116645476B (en) Rod three-dimensional data model reconstruction method and system based on multi-view vision
CN108648264B (en) Underwater scene reconstruction method based on motion recovery and storage medium
CN110782521A (en) Mobile terminal three-dimensional reconstruction and model restoration method and system
CN111028295A (en) 3D imaging method based on coded structured light and dual purposes
CN113129430B (en) Underwater three-dimensional reconstruction method based on binocular structured light
CN109242898B (en) Three-dimensional modeling method and system based on image sequence
CN105029691B (en) A kind of cigarette void-end detection method based on three-dimensional reconstruction
CN115937288A (en) Three-dimensional scene model construction method for transformer substation
CN111009030A (en) Multi-view high-resolution texture image and binocular three-dimensional point cloud mapping method
CN110619660A (en) Object positioning method and device, computer readable storage medium and robot
CA3233222A1 (en) Method, apparatus and device for photogrammetry, and storage medium
CN113065502A (en) 3D information acquisition system based on standardized setting
CN112184793B (en) Depth data processing method and device and readable storage medium
CN116222425A (en) Three-dimensional reconstruction method and system based on multi-view three-dimensional scanning device
CN113971691A (en) Underwater three-dimensional reconstruction method based on multi-view binocular structured light
CN1544883A (en) Three-dimensional foot type measuring and modeling method based on specific grid pattern
JP2019091122A (en) Depth map filter processing device, depth map filter processing method and program
D'Apuzzo Automated photogrammetric measurement of human faces
JP2996067B2 (en) 3D measuring device
CN110310371B (en) Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant