CN115760907A - Large-space material tracking system, method, equipment and medium - Google Patents

Large-space material tracking system, method, equipment and medium

Info

Publication number
CN115760907A
CN115760907A (application CN202211195828.8A)
Authority
CN
China
Prior art keywords
image
scene
target object
dimensional
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211195828.8A
Other languages
Chinese (zh)
Inventor
张文卿
付傲然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Platform For Smart Manufacturing Co Ltd
Original Assignee
Shanghai Platform For Smart Manufacturing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Platform For Smart Manufacturing Co Ltd
Priority to CN202211195828.8A
Publication of CN115760907A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a large-space material tracking system, method, equipment and medium, comprising: an image acquisition module that acquires two-dimensional image information of a large space from a plurality of positions; a back-end vision processing module that identifies state information of a target object from the two-dimensional image information acquired by the image acquisition module using machine vision and deep learning methods; and a digital twin module that visually displays the state information identified by the back-end vision processing module. The invention can realize material tracking with only cameras and a computer installed, making deployment convenient and cost low; it can improve an enterprise's degree of informatization in material statistics, tracking, monitoring and the like, helping the enterprise develop in a more intelligent direction; and functions such as data set generation and training and material tracking are all executed by a computer, greatly reducing dependence on manpower and effectively reducing human resource costs.

Description

Large-space material tracking system, method, equipment and medium
Technical Field
The invention relates to industrial production and logistics transportation technologies in the field of intelligent production, in particular to a large-space material tracking system, method, equipment and medium.
Background
Identification, tracking and pose estimation of objects in space are widely needed in industrial production and logistics. The traditional way to track materials during production is to manually record and check their running state, which is inefficient and cannot give timely feedback when problems occur; whether a material in process has deviated from its preset motion trajectory is mainly judged by human visual inspection, which lacks theoretical guidance, involves simple, repetitive work, and is extremely inefficient when done manually.
In the field of automated material tracking, indoor tracking approaches based on the proximity method, triangulation and fingerprinting have emerged. Most of these methods are expensive and complex to deploy, and require electronic tags for assistance (such as RFID tags, which other facilities read to obtain information about the material); the number of electronic tags is proportional to the number of materials to be located, and efficiency drops sharply when that number is large. Therefore, researching an indoor multi-target real-time positioning and tracking method that is efficient, convenient to deploy and low in cost is of great significance for promoting the informatization upgrade of the manufacturing industry, reducing workers' workload and improving enterprises' production efficiency.
A search of the prior art finds that the invention with application number CN202111437767.7 discloses a target object reminding method, system and electronic device for a monitoring scene. The system captures a picture of the target scene with a monitoring camera, divides the picture into a plurality of detection areas according to preset object contours in the scene, performs target object recognition on each area based on image enhancement techniques, and returns a successful-recognition prompt instruction when the target object is recognized in a detection area.
The search also finds that the invention with application number CN202011543812.2 discloses an object detection method and device, the method comprising: acquiring an initial image containing at least a standard object and an object to be detected; acquiring a first mark representing the position area of the object to be detected on a preset target object and a second mark representing at least part of the outline of the standard object; and judging whether the position of the object to be detected is correct according to the first mark, the second mark and the object to be detected. The first and second marks on the standard object thus serve as reference marks for judging, in combination with the object to be detected, whether its position is correct.
Neither invention solves the technical problem of accurately detecting objects in a large space under occlusion.
Disclosure of Invention
In view of the defects in the prior art, the present invention provides a large space material tracking system, method, device and medium.
According to one aspect of the invention, there is provided a large space material tracking system comprising:
the image acquisition module acquires two-dimensional image information of a large space from a plurality of positions;
the back-end vision processing module identifies the state information of the target object from the two-dimensional image information acquired by the image acquisition module by using machine vision and deep learning methods;
the digital twin module is used for visually displaying the state information identified by the back-end vision processing module.
Preferably, the image acquisition module comprises:
the two-dimensional cameras are distributed at multiple corners of a large space and used for shooting a multi-angle scene;
and the light sources are uniformly distributed in a large space and provide shooting light sources for the two-dimensional camera.
Preferably, the image acquisition module further comprises a preprocessing sub-module for performing visual calibration and visual enhancement on the image; wherein:
the visual calibration comprises calibrating the extrinsic parameter matrix of each two-dimensional camera relative to the large-space reference frame by using a camera calibration board, and calculating the intrinsic parameter matrix accounting for the camera's own distortion;
the visual enhancement comprises image filtering and image enhancement of the images taken by the two-dimensional cameras.
Preferably, the back-end vision processing module includes:
an offline data set, consisting of training samples collected by virtual generation or real shooting;
a coordinate regression network, trained by a deep learning method on the offline data set; the trained coordinate regression network can identify the state information of the target object from virtual or real scene images;
wherein the state information comprises material type, position and attitude.
Preferably, the back-end vision processing module further comprises a virtual generation module configured to:
rendering the whole scene through an engine;
adding random illumination intensity and a random background to the scene;
adding a three-dimensional model of the target object to the scene;
performing random translation and rotation on the three-dimensional model;
rendering the translated and rotated scene through an engine;
generating an object bounding box according to the pose of the target object;
capturing three-dimensional images of the object bounding box from different angles;
projecting the bounding-box corner points in the three-dimensional images into a two-dimensional image;
annotating the label information of the target object in the two-dimensional image to serve as a training sample.
Preferably, the back-end vision processing module further includes a real shooting module, which includes:
building a real scene containing a target object;
building a lighting environment in the real scene and arranging a depth camera;
adding ArUco markers around the target object;
acquiring images of the target object with the depth camera to obtain an image set, wherein each image contains at least 3 ArUco markers;
detecting the ArUco marker corner points, calculating the rigid rotation-translation transformation of each subsequent frame's point cloud relative to the first frame in the image set, and reconstructing a point cloud model;
calculating the coordinates of the 3D bounding box of the target object in the point cloud model under the camera coordinate system;
calculating a projection matrix from the camera intrinsics, and obtaining the projected two-dimensional image of the 3D bounding box and the target center point in the pixel coordinate system from the coordinates in the camera coordinate system;
annotating the label information of the target object in the two-dimensional image to serve as a training sample.
Preferably, the digital twin module comprises:
the scene building unit, which completes modeling according to the real spatial scene and builds the scene in the engine;
the data communication unit, which receives and parses the object state messages of the back-end vision processing module and performs scene rendering and driving within the scene;
and the material control unit, which performs visual control and display based on the object state information from the back-end vision module.
According to a second aspect of the present invention, there is provided a large-space material tracking method implemented using any of the systems described above, comprising:
acquiring two-dimensional image information of a large space from a plurality of positions;
identifying, by the back-end vision processing module, state information of a target object from the two-dimensional image information acquired by the image acquisition module, using machine vision and deep learning methods;
and visually displaying the state information of the target object in real time.
According to a third aspect of the present invention, there is provided an electronic device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the system or the method described above.
According to a fourth aspect of the present invention, there is provided a computer-readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded by a processor to implement the system or perform the method described above.
Compared with the prior art, the invention has the following beneficial effects:
according to the large-space material tracking system provided by the embodiment of the invention, the material tracking can be realized only by installing the camera and the computer, the deployment is convenient, and the cost is low.
The large-space material tracking system and method provided by the embodiment of the invention can improve the informatization degree of the enterprise in the aspects of material statistics, tracking, monitoring and the like, and are beneficial to the development of the enterprise to a more intelligent direction.
According to the large-space material tracking system and method provided by the embodiment of the invention, functions of data set generation and training, material tracking and the like are executed by the computer, so that dependence on manpower is greatly reduced, and the cost of manpower resources is effectively reduced.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic diagram of a large-space moving material tracking system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the hardware arrangement of a large-space moving material tracking system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of virtual data set acquisition in an embodiment of the invention;
FIG. 4 is a schematic diagram of real data set acquisition in an embodiment of the invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will help those skilled in the art further understand the present invention, but do not limit it in any way. It should be noted that persons skilled in the art can make variations and modifications without departing from the spirit of the invention, all of which fall within its scope.
As shown in FIG. 1, a large-space moving material tracking system according to an embodiment of the present invention includes an image acquisition module, a back-end vision processing module and a digital twin module. The image acquisition module acquires two-dimensional image information of a large space from a plurality of positions and preprocesses it; the back-end vision processing module detects the state information of the target object in the two-dimensional image information acquired by the image acquisition module using machine vision and deep learning methods; and the digital twin module visually displays the state information identified by the back-end vision processing module.
Further, the image acquisition module acquires high-quality multi-angle scene information, corrects for factors such as noise and illumination, and transmits the acquired image data to the subsequent module. The back-end vision processing module receives the data from the image acquisition module, identifies the bounding-box corner points of the target object in real time with a pre-trained object recognition network, computes the object's pose from them, and transmits the pose result to the digital twin module for display. The digital twin module builds a virtual scene for the specific space, reflects the pose and motion state of the object in the virtual space in real time according to the visual recognition result, and outputs the result to the user side.
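To make the corner-to-pose step concrete: if the recognition network outputs the eight projected corners of the target object's 3D bounding box and the cameras have been calibrated as described, the object pose can be recovered with a standard PnP solve. The following is a minimal sketch of that idea, not the patented implementation; the function name, corner ordering convention and box parameterization are illustrative assumptions.

```python
import cv2
import numpy as np

def pose_from_corners(corners_2d, box_dims, K, dist_coeffs):
    """Estimate the object pose from 8 detected bounding-box corners via PnP.

    corners_2d  : (8, 2) pixel coordinates predicted by the network
    box_dims    : (w, h, d) of the object's 3D bounding box in metres
    K           : (3, 3) camera intrinsic matrix from calibration
    dist_coeffs : lens distortion coefficients from calibration
    """
    w, h, d = box_dims
    # The 8 corners of the box in the object's own frame; the ordering here
    # must match the ordering convention the network was trained with.
    corners_3d = np.array([[x, y, z]
                           for x in (-w / 2, w / 2)
                           for y in (-h / 2, h / 2)
                           for z in (-d / 2, d / 2)], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(corners_3d, corners_2d.astype(np.float64),
                                  K, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP failed on the detected corners")
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the object in camera frame
    return R, tvec              # object pose relative to the camera
```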
The large space in the embodiments of the present invention is a space larger than 3 m × 3 m. The large-space material tracking system can realize material tracking with only cameras and a computer installed, and is convenient to deploy and low in cost.
In a preferred embodiment of the present invention, the image acquisition module acquires two-dimensional image information of the large space from multiple positions, reduces the interference of factors such as ambient illumination and shooting noise on subsequent recognition through visual calibration, improves the feature quality of the target object in the image through visual enhancement, and transmits the image data to the back-end vision processing module for subsequent target detection and pose recognition. Specifically, referring to FIG. 2, the image acquisition module includes two-dimensional cameras, light sources, and non-standard camera mounts. The cameras are distributed at multiple corners of the large space to capture scene images from multiple viewpoints; the light sources are uniformly distributed in the large space, providing illumination for areas where target objects may appear; an industrial camera is mounted at the center of each non-standard camera mount. The number of two-dimensional cameras is determined by the extent of the large space to be observed, so that the cameras cover the specified space.
In a preferred embodiment, the visual calibration calibrates the extrinsic parameter matrix of each camera relative to the large-space reference frame using a camera calibration board, and calculates the intrinsic parameter matrix accounting for the camera's own distortion.
In a preferred embodiment, the visual enhancement improves image quality using image filtering and image enhancement methods.
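As an illustration of these two preprocessing steps, the following minimal OpenCV sketch calibrates the intrinsic matrix and distortion coefficients from checkerboard images, then undistorts, filters and enhances a frame. The board dimensions, square size and file paths are assumptions for illustration, not values from the patent.

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)    # inner-corner grid of the assumed checkerboard
SQUARE = 0.025    # assumed square edge length, metres

# Object points of one board view, in the board's own coordinate frame.
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.png"):     # assumed calibration image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Intrinsic parameter matrix K and distortion coefficients.
_, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts,
                                       gray.shape[::-1], None, None)

# Visual enhancement: undistort, then filter noise and equalise contrast.
frame = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.undistort(frame, K, dist)
frame = cv2.GaussianBlur(frame, (3, 3), 0)            # image filtering
frame = cv2.createCLAHE(clipLimit=2.0).apply(frame)   # image enhancement
```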
In another preferred embodiment of the present invention, the back-end vision processing module detects the pose information of the target object from the image information acquired by the image acquisition module using machine vision combined with deep learning, uses information from multiple viewing angles to estimate the pose more accurately for objects that are occluded or incompletely captured, and transmits the recognition result to the digital twin module for display.
Specifically, the back-end vision processing module comprises an offline data set and a coordinate regression network. Before the system is deployed, virtual generation or real shooting is used to provide sufficient training samples for target object detection; the offline data set includes scene pictures with and without the target object, the specific pose data of the target object in the scene, and cases of occlusion or incomplete capture. The coordinate regression network is trained on the offline data set by a deep learning method until, after sufficient iterations, the training error converges to an acceptable range; the trained network can then identify the specific type, position and pose of the target object from virtual or real scene images. In this embodiment, functions such as data set generation and training and material tracking are all executed by a computer, greatly reducing the dependence on manpower.
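The patent does not specify the network architecture, so the following PyTorch sketch shows only one plausible shape for such a coordinate regression network: a small convolutional backbone that regresses the eight projected bounding-box corners and classifies the material type, trained with a combined regression and classification loss. All layer sizes, the loss weighting and the batch data are assumptions.

```python
import torch
import torch.nn as nn

class CoordRegressionNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(              # tiny conv feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.corners = nn.Linear(64, 8 * 2)         # 8 corners, (u, v) each
        self.cls = nn.Linear(64, num_classes)       # material type

    def forward(self, x):
        f = self.backbone(x)
        return self.corners(f).view(-1, 8, 2), self.cls(f)

# One training step on a stand-in batch from the offline data set.
net = CoordRegressionNet(num_classes=5)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
images = torch.randn(4, 3, 256, 256)
gt_corners = torch.randn(4, 8, 2)
gt_labels = torch.randint(0, 5, (4,))
pred_corners, pred_logits = net(images)
loss = (nn.functional.mse_loss(pred_corners, gt_corners)
        + nn.functional.cross_entropy(pred_logits, gt_labels))
opt.zero_grad()
loss.backward()
opt.step()
```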
Further, in an embodiment, the flow of the virtual data set generation method is shown in FIG. 3 and includes:
S11, rendering the whole scene with Unreal Engine 4;
S12, adding random illumination intensity and a random background to the scene rendered in S11;
S13, adding a three-dimensional model of the target object to the scene obtained in S12;
S14, performing random translation and rotation on the three-dimensional model from S13;
S15, rendering the translated and rotated scene from S14 with Unreal Engine 4;
S16, generating an object bounding box according to the pose of the target object;
S17, capturing three-dimensional images of the target object from different angles;
S18, projecting the bounding-box corner points in the three-dimensional images from S17 into a two-dimensional image;
S19, annotating the label information of the target object in the two-dimensional image obtained in S18 to serve as a training sample.
In the above embodiment, Unreal Engine 4 is a mature prior-art technology and is deployed in the digital twin module. In S14, since the pose of the object in real space is unknown, the data set must cover arbitrary object poses as far as possible, which is why the model is randomly translated and rotated. The label information in S19 includes the type, position and pose of the material in the image.
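Step S18 is a standard pinhole projection of the 3D bounding-box corners into the image plane. A minimal sketch, assuming the virtual camera's intrinsic matrix K and extrinsic pose (R, t) are available from the renderer:

```python
import numpy as np

def project_corners(corners_3d, K, R, t):
    """Project (8, 3) world-frame corners to (8, 2) pixel coordinates."""
    cam = corners_3d @ R.T + t       # world frame -> camera frame
    uvw = cam @ K.T                  # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide
```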
Further, the flow of the real data set generation method is shown in FIG. 4 and includes:
S21, building a real scene containing the target object;
S22, setting up an illumination environment in the real scene built in S21 and arranging a depth camera;
S23, adding ArUco markers around the target object from S21;
S24, acquiring images of the target object with the depth camera from S22 to obtain an image set, wherein each image contains at least 3 ArUco markers;
S25, detecting the ArUco marker corner points, calculating the rigid rotation-translation transformation of each subsequent frame's point cloud relative to the first frame in the image set from S24, and reconstructing a point cloud model;
S26, calculating the coordinates of the 3D bounding box of the target object in the point cloud model from S25 under the depth camera coordinate system;
S27, calculating a projection matrix from the depth camera's intrinsics to obtain the projected two-dimensional image of the 3D bounding box and the target center point from S26 in the pixel coordinate system;
S28, annotating the label information of the target object in the two-dimensional image.
Steps S25 to S28 are repeated, iteratively completing the annotation of all frame images in the image set, thereby obtaining the remaining poses and label information of the target object.
In the above embodiment, the depth camera in S21 provides depth information in addition to what a two-dimensional camera captures. In S24, as many poses of the target object as possible are captured, ensuring that each view contains at least 3 ArUco markers; the pose relation between the camera and the plane of the ArUco markers can be calculated from the marker information. As before, the label information annotated in S28 includes the type, position and pose of the material in the image.
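Steps S23 to S25 can be illustrated with OpenCV's ArUco support. The sketch below (assuming opencv-contrib-python with the pre-4.7 aruco API) detects marker corners in a frame, estimates the camera-to-marker pose, and chains per-frame poses into the rigid transform of frame k relative to frame 0 used for point cloud registration. The marker dictionary and edge length are assumptions.

```python
import cv2
import numpy as np

DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
MARKER_LEN = 0.05   # assumed marker edge length, metres

def camera_pose(gray, K, dist):
    """Pose (R, t) of one reference marker in this frame's camera frame."""
    corners, ids, _ = cv2.aruco.detectMarkers(gray, DICT)
    if ids is None or len(ids) < 3:   # each view needs at least 3 markers
        return None
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_LEN, K, dist)
    # Use the first marker; in practice, match the same marker id per frame.
    R, _ = cv2.Rodrigues(rvecs[0])
    return R, tvecs[0].reshape(3)

def relative_transform(pose_0, pose_k):
    """Rigid transform taking frame k's point cloud into frame 0's frame."""
    R0, t0 = pose_0
    Rk, tk = pose_k
    R = R0 @ Rk.T       # rotation from frame k to frame 0
    t = t0 - R @ tk     # translation from frame k to frame 0
    return R, t
```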
In both the virtual and the real data set generation methods, poses correspond to labels. In subsequent use of the system, therefore, no electronic tags need to be attached to the target objects: the corresponding label information is obtained by recognizing the pose of the target object, which yields its real-time state information.
In a preferred embodiment of the present invention, the digital twin module renders and displays the object recognition result of the back-end vision processing module in the digital mirror space, visually presenting the recognition effect and improving the overall visualization of the system.
Further, the digital twin module comprises a scene building unit, a data communication unit and a material control unit. The scene building unit completes modeling in 3ds Max according to the real spatial scene and builds the scene in Unreal Engine 4. The data communication unit, developed with Unreal Engine 4's Blueprint visual programming tools, establishes the connection, receives and parses messages from the back-end vision processing module, and performs functions such as scene rendering and driving within the scene. The material control unit can, in real time and based on the object state information (specific type, position, pose and the like) from the back-end vision processing module, perform visual control and display operations such as generating, moving and destroying materials: a material is generated when a new object is detected; a material is moved when an existing object is detected to have moved; and a material is destroyed when an object is detected to have left the field of view.
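A minimal sketch of the material control logic just described: the controller keeps one record per tracked object and spawns, moves or destroys its visual counterpart as object state messages arrive. The message fields follow the state information named in the text (type, position, pose); the scene handle and its spawn/move/destroy methods are hypothetical stand-ins for the engine-side calls.

```python
from dataclasses import dataclass

@dataclass
class ObjectState:
    obj_id: str
    material_type: str
    position: tuple   # (x, y, z) in the twin scene's frame
    pose: tuple       # orientation, e.g. a quaternion

class MaterialController:
    def __init__(self, scene):
        self.scene = scene    # hypothetical handle into the rendered scene
        self.tracked = {}     # obj_id -> last known ObjectState

    def on_message(self, states):
        seen = {s.obj_id for s in states}
        for s in states:
            if s.obj_id not in self.tracked:
                # New object detected: generate its material in the scene.
                self.scene.spawn(s.obj_id, s.material_type, s.position, s.pose)
            else:
                # Existing object moved: move its material.
                self.scene.move(s.obj_id, s.position, s.pose)
            self.tracked[s.obj_id] = s
        # Objects that left the field of view: destroy their materials.
        for gone in set(self.tracked) - seen:
            self.scene.destroy(gone)
            del self.tracked[gone]
```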
Based on the same inventive concept, in another embodiment of the present invention, a method for tracking large-space materials is provided, which is implemented by using the space material tracking system in the above embodiments, and includes:
s1, acquiring two-dimensional image information of a large space from multiple positions;
s2, detecting the pose and the corresponding label of the target object by using a machine vision and deep learning method for the two-dimensional image information of the S1;
and S3, performing visual real-time display on the pose of the target object and the corresponding label in the S2.
Based on the same inventive concept, in other embodiments, an electronic device is provided, which comprises a processor and a memory, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded by the processor and executes or implements the above-mentioned system or method.
Based on the same inventive concept, a computer-readable storage medium is provided in other embodiments, having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded by a processor and executes the system, or the method.
It should be noted that the steps in the method provided by the present invention can be implemented using the corresponding modules, devices and units of the system; those skilled in the art can refer to the technical solution of the system to implement the step flow of the method, i.e., the embodiments of the system can be understood as preferred examples for implementing the method, which are not repeated here.
Those skilled in the art will appreciate that, besides being implemented as pure computer-readable program code, the system and its various devices provided by the present invention can achieve the same functions entirely through logic programming of the method steps, in forms such as logic gates, switches, application-specific integrated circuits, programmable logic controllers and embedded microcontrollers. Therefore, the system and its various devices can be regarded as a hardware component, and the devices included in it for realizing various functions can be regarded as structures within the hardware component; the devices for realizing various functions can also be regarded both as software modules implementing the method and as structures within the hardware component.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The above-described preferred features may be used in any combination without conflict with each other.

Claims (10)

1. A large space material tracking system, comprising:
the system comprises an image acquisition module, a back-end vision processing module and a digital twin module, wherein the image acquisition module acquires two-dimensional image information of a large space from a plurality of positions;
the back-end vision processing module identifies the state information of the target object from the two-dimensional image information acquired by the image acquisition module by using machine vision and deep learning methods;
the digital twin module is used for visually displaying the state information identified by the back-end vision processing module.
2. The large-space material tracking system according to claim 1, wherein the image acquisition module comprises:
the two-dimensional cameras are distributed at multiple corners of a large space and used for shooting a multi-angle scene;
and the light sources are uniformly distributed in a large space and provide a shooting light source for the two-dimensional camera.
3. The large-space material tracking system according to claim 2, wherein the image acquisition module further comprises a preprocessing sub-module, the preprocessing sub-module performing visual calibration and visual enhancement on the image; wherein:
the visual calibration comprises calibrating the extrinsic parameter matrix of each two-dimensional camera relative to the large-space reference frame by using a camera calibration board, and calculating the intrinsic parameter matrix accounting for the camera's own distortion;
the visual enhancement comprises image filtering and image enhancement of the images taken by the two-dimensional cameras.
4. The large space material tracking system of claim 1, wherein the back-end vision processing module comprises:
an offline data set, consisting of training samples collected by virtual generation or real shooting;
a coordinate regression network, trained by a deep learning method on the offline data set; the trained coordinate regression network can identify the state information of the target object from virtual or real scene images;
wherein the state information comprises material type, position and attitude.
5. The large-space material tracking system according to claim 4, wherein the back-end vision processing module further comprises a virtual generation module configured to:
rendering the whole scene through an engine;
adding random illumination intensity and a random background to the scene;
adding a three-dimensional model of the target object to the scene;
performing random translation and rotation on the three-dimensional model;
rendering the translated and rotated scene through an engine;
generating an object bounding box according to the pose of the target object;
capturing three-dimensional images of the object bounding box from different angles;
projecting the bounding-box corner points in the three-dimensional images into a two-dimensional image;
annotating the label information of the target object in the two-dimensional image to serve as a training sample.
6. The large-space material tracking system according to claim 4, wherein the back-end vision processing module further comprises a real shooting module, which includes:
building a real scene containing a target object;
building an illumination environment in the real scene and arranging a depth camera;
adding ArUco markers around the target object;
acquiring images of the target object with the depth camera to obtain an image set, wherein each image contains at least 3 ArUco markers;
detecting the ArUco marker corner points, calculating the rigid rotation-translation transformation of each subsequent frame's point cloud relative to the first frame in the image set, and reconstructing a point cloud model;
calculating the coordinates of the 3D bounding box of the target object in the point cloud model under the camera coordinate system;
calculating a projection matrix from the camera intrinsics, and obtaining the projected two-dimensional image of the 3D bounding box and the target center point in the pixel coordinate system from the coordinates in the camera coordinate system;
annotating the label information of the target object in the two-dimensional image to serve as a training sample.
7. The large-space material tracking system of claim 1, wherein the digital twin module comprises:
the scene building unit, which completes modeling according to the real spatial scene and builds the scene in the engine;
the data communication unit, which receives and parses the object state messages of the back-end vision processing module and performs scene rendering and driving within the scene;
and the material control unit, which performs visual control and display on the scene built by the scene building unit based on the object state information from the back-end vision module.
8. A large-space material tracking method, implemented using the system of any one of claims 1-7, comprising:
acquiring two-dimensional image information of a large space from a plurality of positions;
identifying state information of a target object from the two-dimensional image information acquired by the image acquisition module, using machine vision and deep learning methods;
and visually displaying the state information of the target object in real time.
9. An electronic device, comprising a processor and a memory, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement any one of the following:
-the system of any one of claims 1-7, or,
-the method of claim 8.
10. A computer-readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded by a processor to implement or perform any one of the following:
-the system of any one of claims 1-7, or,
-the method of claim 8.
CN202211195828.8A 2022-09-29 2022-09-29 Large-space material tracking system, method, equipment and medium Pending CN115760907A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211195828.8A CN115760907A (en) 2022-09-29 2022-09-29 Large-space material tracking system, method, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211195828.8A CN115760907A (en) 2022-09-29 2022-09-29 Large-space material tracking system, method, equipment and medium

Publications (1)

Publication Number Publication Date
CN115760907A 2023-03-07

Family

ID=85350545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211195828.8A Pending CN115760907A (en) 2022-09-29 2022-09-29 Large-space material tracking system, method, equipment and medium

Country Status (1)

Country Link
CN (1) CN115760907A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination