CN214410073U - Three-dimensional detection positioning system combining industrial camera and depth camera


Info

Publication number
CN214410073U
Authority
CN
China
Prior art date
Legal status
Active
Application number
CN202023320197.7U
Other languages
Chinese (zh)
Inventor
林斌
邓启寰
Current Assignee
Zhejiang Jiangao Photoelectric Technology Co ltd
Original Assignee
Suzhou Jiang'ao Optoelectronics Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Jiang'ao Optoelectronics Technology Co ltd
Priority to CN202023320197.7U
Application granted
Publication of CN214410073U


Abstract

The utility model discloses a three-dimensional detection and positioning system, which comprises an acquisition device and an upper computer. The acquisition device comprises an industrial camera and a depth camera whose optical axes are arranged in parallel. The industrial camera and the depth camera each collect information about the scene in which the same object to be detected is located and transmit the information to the upper computer, and the upper computer detects and positions the object to be detected according to the received information. By combining an industrial camera with a depth camera, the utility model can simultaneously acquire the two-dimensional information and the depth information of the object to be detected and the scene it is located in, providing the basis for three-dimensional positioning and detection, supporting three-dimensional detection of the object to be detected, and improving the real-time performance and accuracy of detection.

Description

Three-dimensional detection positioning system combining industrial camera and depth camera
Technical Field
The utility model relates to the technical field of computer vision detection, and in particular to a three-dimensional detection positioning system combining an industrial camera and a depth camera.
Background
In fields such as production and transportation, there are certain requirements for the three-dimensional positioning of objects. Among the existing methods, manual detection is time-consuming and labour-intensive and its results are not ideal, while positioning based on two-dimensional feature matching lacks the support of three-dimensional information and must be supplemented by personnel or by three-dimensional detection equipment. However, the line laser scanning method used for three-dimensional information acquisition has poor real-time performance; visible structured light is easily disturbed by external illumination and is not suitable for complex environments; and point laser ranging requires two-dimensional matching and depth detection to be carried out as two separate steps, which makes the process cumbersome and the detection efficiency low.
Summary of the Utility Model
The utility model aims to provide a three-dimensional detection and positioning system that combines an industrial camera and a depth camera to support three-dimensional detection of an object to be detected and to improve detection real-time performance and accuracy. The utility model adopts the following technical scheme.
A three-dimensional detection positioning system combining an industrial camera and a depth camera comprises an acquisition device and an upper computer; the acquisition device comprises an industrial camera and a depth camera, and the optical axes of the industrial camera and the depth camera are arranged in parallel;
the industrial camera and the depth camera each collect information about the scene in which the same object to be detected is located and transmit the information to the upper computer;
and the upper computer detects and positions the object to be detected according to the received information.
Optionally, the industrial camera is connected to and communicates with the upper computer through a GigE interface, and the depth camera is connected to and communicates with the upper computer through a USB 3.0 interface; the upper computer is an industrial personal computer provided with a Gigabit Ethernet port and a USB 3.0 interface.
Optionally, the acquisition device further comprises a housing; the industrial camera and the depth camera are fixedly mounted in the housing, and through holes are formed in the housing corresponding to the view windows of the industrial camera and the depth camera. This makes the acquisition device compact and portable.
Optionally, communication terminals corresponding to the industrial camera and the depth camera are further provided on the housing of the acquisition device; one end of each communication terminal is connected to the corresponding camera, and the other end is connected to the upper computer.
Optionally, the industrial camera and the depth camera are fixedly mounted in the housing through a mounting bracket, and the two side portions of the mounting bracket are fixedly connected with the two inner side portions of the housing;
the mounting bracket comprises recesses arranged to correspond to the depth camera and the industrial camera respectively, and the upper surfaces of the depth camera and the industrial camera fit against the lower surfaces of the corresponding recesses.
Optionally, in order to keep the positions of the industrial camera and the depth camera more stable after registration and calibration, the depth camera and the industrial camera are fixedly connected to the corresponding recesses of the mounting bracket with screws.
Optionally, in order to simplify processing on the upper computer, the industrial camera in the utility model is a visible light camera capable of collecting the two-dimensional image and relative angle of the object to be detected in the scene. The industrial camera can adopt the prior art; in particular, it has an independent data processing unit and can determine the plane coordinates and relative angle of the object to be detected in the scene from the acquired color image.
Advantageous effects
The utility model combines an industrial camera and a depth camera to collect information about the object to be detected as the basis for detection and positioning. The two-dimensional information and the depth information of the object to be detected and the scene it is located in can be collected simultaneously, so that the upper computer can detect and position the object on the basis of three-dimensional information. This not only solves the problem in the prior art that recognition and positioning based only on two-dimensional images are affected by ambient light and by the surface material of the object, but also solves the problem that existing three-dimensional detection processes cannot detect densely arranged samples because the depth information is insensitive to the dividing lines between them; the utility model is therefore suitable for improving detection real-time performance and accuracy.
Drawings
FIG. 1 is a schematic diagram of a three-dimensional positioning system;
FIG. 2 is a schematic view of the structure of the collecting device;
FIG. 3 is a schematic view illustrating the view-finding and processing principle of the industrial camera and the depth camera according to an embodiment.
Detailed Description
The following further description is made in conjunction with the accompanying drawings and the specific embodiments.
The technical concept of the utility model is as follows: an industrial camera and a depth camera are combined to obtain the two-dimensional image information and the depth information of an object to be detected simultaneously, and the object is then positioned and detected on that basis. This solves the problem in the prior art that recognition and positioning based only on two-dimensional images are affected by ambient light and by the surface material of the object, and also solves the problem that existing three-dimensional detection processes cannot detect densely arranged samples because the depth information is insensitive to the boundaries between them.
Example 1
Referring to FIG. 1, the three-dimensional positioning system of the utility model comprises an acquisition device and an upper computer; the acquisition device comprises an industrial camera and a depth camera, and the optical axes of the industrial camera and the depth camera are arranged in parallel; the industrial camera and the depth camera each collect information about the scene in which the same object to be detected is located and transmit the information to the upper computer; and the upper computer detects and positions the object to be detected according to the received information.
When the system is applied, with their optical axes parallel to each other, the industrial camera and the depth camera respectively acquire the plane coordinates, relative angle information and relative height information of the scene in which the object to be detected is located, and transmit them to the upper computer; the upper computer then positions the object to be detected according to the received two-dimensional and depth information.
As shown in FIG. 2, the acquisition device in this embodiment combines an industrial camera and a depth camera. It comprises a housing 1, with an industrial camera 3 and a depth camera 5 installed inside; the industrial camera and the depth camera are fixedly mounted in the housing 1 through the mounting bracket 2, and through holes are formed in the housing corresponding to the view windows of the industrial camera and the depth camera.
The industrial camera is connected to and communicates with the upper computer through a GigE interface 4, and the depth camera is connected to and communicates with the upper computer through a USB 3.0 interface; the upper computer is an industrial personal computer provided with a Gigabit Ethernet port and a USB 3.0 interface. Communication terminals corresponding to the industrial camera and the depth camera are also provided on the housing of the acquisition device; one end of each communication terminal is connected to the corresponding camera and the other end is connected to the upper computer.
The industrial camera and the depth camera are fixedly mounted in the housing through the mounting bracket, and the two side portions of the mounting bracket are fixedly connected with the two inner side portions of the housing. The mounting bracket comprises recesses arranged to correspond to the depth camera and the industrial camera respectively, and the upper surfaces of the depth camera and the industrial camera fit against the lower surfaces of the corresponding recesses. To keep the positions of the industrial camera and the depth camera more stable after registration and calibration, the depth camera and the industrial camera are fixedly connected to the corresponding recesses of the mounting bracket with screws.
Specifically, the housing 1 and the mounting bracket 2 are fixed together with M2.5 bolts and nuts on the front and back faces of the housing 1; the mounting bracket 2 and the industrial camera 3 are fixed together with M2 screws (the industrial camera body provides the threaded holes), so that the lower surface of the mounting bracket is fixed relative to the upper portion of the industrial camera; the depth camera 5 and the mounting bracket 2 are likewise fixed together with M2 screws.
When the three-dimensional detection positioning system of the utility model is applied in practice, the image data acquired by the industrial camera mainly consist of color information and ambient light information, while the depth camera detects the relative height information of the object to be detected in the scene; the specific acquisition functions of both cameras can adopt the prior art.
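For illustration only, the following is a minimal acquisition sketch. The patent does not specify any camera model or SDK; OpenCV is used here merely as a stand-in for the GigE industrial camera and the pyrealsense2 library as a stand-in for the depth camera, both of which are assumptions.

```python
# Hypothetical acquisition sketch; the actual camera SDKs are not specified
# by the patent. OpenCV stands in for the GigE industrial camera and
# pyrealsense2 for the depth camera.
import cv2
import numpy as np
import pyrealsense2 as rs

def grab_frames():
    # Industrial (color) camera, assumed to be reachable as a video device
    cap = cv2.VideoCapture(0)
    ok, color = cap.read()
    cap.release()

    # Depth camera accessed through a RealSense-style pipeline
    pipe = rs.pipeline()
    pipe.start()
    frames = pipe.wait_for_frames()
    depth = np.asanyarray(frames.get_depth_frame().get_data())
    pipe.stop()

    return color, depth  # both frames are forwarded to the upper computer
```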
In order to simplify processing on the upper computer, the industrial camera is a visible light camera capable of collecting the two-dimensional image and relative angle of the object to be detected in the scene. Specifically, the industrial camera is provided with an independent data processing unit and can determine the plane coordinates and relative angle of the object to be detected in the scene from the acquired color image. The industrial camera's detection of the plane coordinates and relative angle of the object to be detected in the scene can be realized by the following process:
the method comprises the steps that an industrial camera collects image data information of a scene where an object to be detected is located;
the industrial camera performs template matching by using a pre-constructed object template based on the image data information to obtain plane coordinates and angle information of the successfully matched object to be detected in the scene;
the construction method of the object template comprises the following steps: and collecting image samples of the object to be detected under various illumination and backgrounds, and training the convolutional neural network to obtain an object template.
The function of the depth camera for detecting the relative height information of the object to be detected in the scene can be realized by adopting the following processes:
the depth camera acquires three-dimensional appearance information of a scene where an object to be detected is located;
the depth camera obtains a height change differential image through differential calculation based on the field depth image of the object to be detected and the pre-collected background field depth image, and then relative height information is obtained.
The method for detecting and positioning by the upper computer according to the received information can be implemented as follows:
acquiring plane coordinates and relative angle information of an object to be detected in a scene detected by an industrial camera;
acquiring relative height information of an object to be detected in a scene detected by a depth camera;
according to the plane coordinates, the relative angle information and the relative height information, input data of a pre-trained three-dimensional information classification model are determined, and classification result information of the object to be detected output by the three-dimensional information classification model is obtained, wherein the classification result information comprises the material type of the object to be detected and position area information in the scene;
the training samples of the pre-trained three-dimensional information classification model are three-dimensional coordinate information of objects corresponding to a plurality of material types at a plurality of different positions and rotation angles.
Considering the information transmission and flow processing of an external control link, the positioning method of the upper computer can further comprise: matching the position area information obtained from classification against a material template corresponding to the pre-constructed material type, so as to obtain the relative angle and scale factor of the object to be detected in the scene.
According to the plane coordinates, the relative angle information and the relative height information, the upper computer determines the input data of the pre-trained three-dimensional information classification model as follows: the plane coordinates, relative angle information and relative height information are converted into RGB-normal-vector data, which serve as the input of the three-dimensional information classification model, according to the intrinsic and extrinsic parameters of the relative pose obtained by calibrating the industrial camera and the depth camera in advance.
Specifically: the information obtained from the industrial camera and the depth camera is converted into registered point cloud data with color information, i.e. RGB-D data (RGB plus depth), and the depth data are further converted into normal vector data, giving a four-component representation. The normal vectors are obtained by computing the gradient direction of the depth map, which yields the normal vectors of the object surface.
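The normal-vector conversion can be sketched as follows; this is only an illustration of the gradient-direction approach described above, computed in image coordinates without applying camera intrinsics.

```python
import numpy as np

def depth_to_normals(depth):
    """Approximate per-pixel surface normals from the gradients of a depth map."""
    depth = depth.astype(np.float32)
    dz_dy, dz_dx = np.gradient(depth)                  # depth gradients
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))
    norm = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / np.clip(norm, 1e-8, None)         # unit surface normals
```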
Then, the upper computer inputs the converted color information and normal vector information into a trained convolutional neural network to perform instance segmentation of the material in the field of view, obtaining the classification and coordinate information of the material to be detected. The region information of the obtained segmentation instances is matched against the template to be detected to obtain optimized positioning data such as relative angle, coordinates and scale factor, which facilitates information transmission and flow processing for an external control link.
Before the three-dimensional information of the material to be measured is acquired, the depth camera and the industrial camera need to be registered because of the pose difference between them. The registration can use the Zhang Zhengyou calibration method or the Tsai two-step method to calculate the intrinsic and extrinsic parameters of the cameras, so as to obtain the accurate relative position relationship between the two cameras.
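For reference, a minimal sketch of Zhang-style intrinsic calibration with OpenCV follows; the checkerboard geometry and square size are assumptions, and the relative pose between the two cameras would then be obtained from corresponding views, e.g. with cv2.stereoCalibrate.

```python
import cv2
import numpy as np

def calibrate_zhang(images, board=(9, 6), square=0.025):
    """Intrinsic calibration from grayscale checkerboard views (Zhang's method).

    `board` is the count of inner corners and `square` the square size in
    metres; both values are illustrative assumptions. Returns (K, dist).
    """
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for img in images:
        found, corners = cv2.findChessboardCorners(img, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    _, K, dist, _, _ = cv2.calibrateCamera(
        obj_pts, img_pts, images[0].shape[::-1], None, None)
    return K, dist
```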
The three-dimensional information classification model can be trained with T-Net or with the PPF (point pair feature) method, yielding a three-dimensional information classification model with instance segmentation capability. The input parameters of the classification model are the surface normal vector information (RGBD-theta components), the training samples include manually annotated instance segmentation regions, and the output labels comprise all three-dimensional ROI regions in the field of view to be detected that contain instances of the template target object.
In practical application, the actual data are obtained in the same way and fed into the convolutional neural network for instance segmentation, producing the matched regions and the classification information of the materials. The PnP relation between the point cloud in each obtained region and the template is then computed to obtain the rotation, translation and scaling of that region relative to the template, completing the three-dimensional positioning estimation even for regions not directly visible in the camera field of view. The result is finally converted into the three-dimensional coordinates, rotation angle and scale factor relative to the template.
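A minimal sketch of the PnP step is given below using cv2.solvePnP; note that PnP itself yields rotation and translation, while the scale factor mentioned above would be estimated separately. The point correspondences between the template model and the segmented region are assumed to be established elsewhere.

```python
import cv2
import numpy as np

def estimate_pose(template_pts3d, region_pts2d, K, dist=None):
    """Rotation and translation of the segmented region relative to the template.

    `template_pts3d` (Nx3) are template model points, `region_pts2d` (Nx2) their
    image correspondences in the segmented region, and `K` the industrial-camera
    intrinsic matrix obtained from calibration.
    """
    ok, rvec, tvec = cv2.solvePnP(
        template_pts3d.astype(np.float32),
        region_pts2d.astype(np.float32),
        K, dist)
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix of the region vs. the template
    return ok, R, tvec
```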
To sum up, the utility model uses the combination of an industrial camera and a depth camera to support three-dimensional information detection and positioning of different objects to be detected. Compared with the low efficiency, poor precision and weak adaptability of traditional positioning methods, the utility model has the advantages of real-time performance and accuracy.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The embodiments of the utility model have been described with reference to the accompanying drawings, but the utility model is not limited to the above embodiments, which are illustrative rather than restrictive; those skilled in the art can make many variations without departing from the spirit and scope of the utility model, and all such variations fall within its protection scope.

Claims (7)

1. A three-dimensional detection positioning system combining an industrial camera and a depth camera, characterized by comprising an acquisition device and an upper computer; the acquisition device comprises an industrial camera and a depth camera, and the optical axes of the industrial camera and the depth camera are arranged in parallel;
the industrial camera and the depth camera each collect information about the scene in which the same object to be detected is located and transmit the information to the upper computer;
and the upper computer detects and positions the object to be detected according to the received information.
2. The three-dimensional detection and positioning system combining the industrial camera and the depth camera as claimed in claim 1, wherein the industrial camera is connected to and communicates with the upper computer through a GigE interface, and the depth camera is connected to and communicates with the upper computer through a USB 3.0 interface; the upper computer is an industrial personal computer provided with a Gigabit Ethernet port and a USB 3.0 interface.
3. The three-dimensional detection and positioning system combining the industrial camera and the depth camera as claimed in claim 2, wherein the shell of the acquisition device is further provided with communication terminals corresponding to the industrial camera and the depth camera, one end of each communication terminal is connected with the industrial camera/the depth camera, and the other end of each communication terminal is connected with the upper computer.
4. The three-dimensional detection and positioning system combining the industrial camera and the depth camera as claimed in claim 1, wherein the acquisition device further comprises a housing, the industrial camera and the depth camera are fixedly mounted in the housing, and through holes are respectively formed in the housing corresponding to the view windows of the industrial camera and the depth camera.
5. The three-dimensional detection and positioning system combining the industrial camera and the depth camera as claimed in claim 4, wherein the industrial camera and the depth camera are fixedly installed in the housing through a mounting bracket, and two sides of the mounting bracket are fixedly connected with two sides in the housing;
the installing support is including corresponding the depressed part that degree of depth camera and industry camera set up respectively, and the upper surface of degree of depth camera and industry camera are laminated respectively with the depressed part lower surface that corresponds between.
6. The three-dimensional detection and positioning system combining the industrial camera and the depth camera as claimed in claim 5, wherein the depth camera and the industrial camera are fixedly connected to the corresponding recesses of the mounting bracket with screws.
7. The system as claimed in claim 1, wherein the industrial camera is a visible light camera capable of capturing two-dimensional images and relative angles of objects to be measured in a scene.
CN202023320197.7U 2020-12-31 2020-12-31 Three-dimensional detection positioning system combining industrial camera and depth camera Active CN214410073U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202023320197.7U CN214410073U (en) 2020-12-31 2020-12-31 Three-dimensional detection positioning system combining industrial camera and depth camera

Publications (1)

Publication Number Publication Date
CN214410073U true CN214410073U (en) 2021-10-15

Family

ID=78044857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202023320197.7U Active CN214410073U (en) 2020-12-31 2020-12-31 Three-dimensional detection positioning system combining industrial camera and depth camera

Country Status (1)

Country Link
CN (1) CN214410073U (en)


Legal Events

Date Code Title Description
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 101, Floor 1, Building 1, No. 15, Jiuzhou Road, Shangcheng District, Hangzhou City, Zhejiang Province, 310000

Patentee after: Zhejiang Jiangao Photoelectric Technology Co.,Ltd.

Address before: 221004 No. 232, Yuanfeng Road, Yushan Town, Kunshan City, Suzhou City, Jiangsu Province

Patentee before: SUZHOU JIANG'AO OPTOELECTRONICS TECHNOLOGY Co.,Ltd.