WO2023172231A1 - An object control system - Google Patents

An object control system

Info

Publication number
WO2023172231A1
Authority
WO
WIPO (PCT)
Prior art keywords
processing unit
image
camera
electronic device
model
Prior art date
Application number
PCT/TR2023/050229
Other languages
French (fr)
Inventor
Gul Meltem KULALI
Original Assignee
Simtek Simulasyon Ve Bilisim Teknolojileri Muhendislik Danismanlik Ticaret Limited Sirketi
Priority date
Filing date
Publication date
Priority claimed from TR2022/003687 (TR2022003687A2)
Application filed by Simtek Simulasyon Ve Bilisim Teknolojileri Muhendislik Danismanlik Ticaret Limited Sirketi
Publication of WO2023172231A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00 Programme-control systems
    • G05B 19/02 Programme-control systems electric
    • G05B 19/418 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G05B 19/41875 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM] characterised by quality surveillance of production
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/2433 Single-class perspective, e.g. one-against-all classification; Novelty detection; Outlier detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/64 Three-dimensional objects
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 2219/00 Program-control systems
    • G05B 2219/30 Nc systems
    • G05B 2219/32 Operator till task planning
    • G05B 2219/32193 Ann, neural base quality management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2004 Aligning objects, relative positioning of parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/06 Recognition of objects for industrial automation

Abstract

The present invention relates to a system (1) which enables serially produced objects, particularly in factories, to be automatically controlled in terms of the technical characteristics they need to have, such as size and integrity, and to be subjected to a quality control process.

Description

AN OBJECT CONTROL SYSTEM
Technical Field
The present invention relates to a system which enables serially produced objects, particularly in factories, to be automatically controlled in terms of the technical characteristics they need to have, such as size and integrity, and to be subjected to a quality control process.
Background of the Invention
Objects which are serially produced in factories must be subjected to quality control processes. The said controls are performed visually by quality control personnel. However, the fact that the controls are performed by humans, together with the increasing variety and number of parts of the objects produced, causes control processes to take a long time and defective objects to go unnoticed. In order to overcome such problems, high-cost industrial cameras and laser scanners are used. It is possible to control the sizes of objects, and the locations and sizes of the surface shapes located on an object or the holes bored thereon, by using the said industrial cameras and laser scanners. These systems are usually used in production lines wherein serial production is performed, and it is ensured that controls of a product reaching the cameras are performed by locating the said cameras in a fixed position. When laser scanners are used, a three-dimensional model of an object to be controlled is extracted by taking its images from different angles, and then this model and the model sent to production are compared upon being viewed in a computer environment. This method may confront problems: scanning a product having a great deal of detail takes a long time, the costs of precision scanners are high, and some details cannot be modelled properly by low-cost scanners. The fact that the methods/systems included in the state of the art are high-cost causes them to be limited in terms of use and time-consuming.
Therefore, there is a need today for a solution which enables quality controls of produced objects to be performed automatically by using the cameras, sensors and depth cameras of cost-effective devices such as tablets and mobile phones, or the cameras and sensors included in augmented reality glasses.
The International patent document no. WO2020235194, an application in the state of the art, discloses a manufacture condition output device, a quality management system and a program. In the said invention, it is possible to have information about the manufacture condition of a product and about the degree of change in a product. Defects in manufactured products are thereby determined by using model information of a manufactured product together with machine learning. The International patent document no. WO2020235194 discloses the presence of a model learning device and the fact that the said device comprises a processor and a storage device. The model learning device generates a difference model that enables the condition of a manufactured product to be shown as good or bad by using machine learning. The said International patent document no. WO2020235194 also discloses that the manufacture condition output device, which takes images of a product in order to have information about the product condition, can be a camera.
Summary of the Invention
An objective of the present invention is to realize a system which enables serially produced objects, particularly in factories, to be automatically controlled in terms of the technical characteristics they need to have, such as size and integrity, and to be subjected to a quality control process.
Another objective of the present invention is to realize a system which enables production defects in objects to be shown to users by marking the differences that are included in the model but not in the product, or included in the product but not in the model, upon ensuring that a three-dimensional model of the product is superimposed on the product by using the cameras, sensors and depth cameras of cost-effective devices such as tablets and mobile phones, or the cameras and sensors included in augmented reality glasses.
Another objective of the present invention is to realize a system which enables quality control processes to be performed on a product being produced, by means of image processing methods and without using deep learning-based algorithms.
Detailed Description of the Invention
“An Object Control System” realized to fulfil the objectives of the present invention is shown in the figure attached, in which:
Figure 1 is a schematic view of the inventive system.
The components illustrated in the figure are individually numbered, where the numbers refer to the following:
1. System
2. Electronic device
3. Camera
4. Sensor
5. Database
6. Processing unit
The inventive system (1) for detecting the differences included in the model but not included in the product, or included in the product but not included in the model, upon ensuring that a three-dimensional model of the product is superimposed on the product by using cameras and sensors, comprises at least one electronic device (2) which is configured to have at least one camera (3) present thereon, to establish communication with remote servers by using any remote communication protocol, and/or to upload or keep record of three-dimensional models; at least one database (5) which is configured to keep record of three-dimensional models of products to be subjected to a control process, together with vertex coordinate data; and at least one processing unit (6) which is configured to establish communication with the electronic device (2) by using any remote communication protocol or to be run on the electronic device (2), to realize data exchange with the electronic device (2) over this communication or connection established, to receive the image data of the object -that is obtained by the camera (3) and will be controlled- from the electronic device (2), to access the database (5), to superimpose the three-dimensional model of the object -images of which are taken by the camera (3)- from the database (5) and the image of the product received from the camera (3) by means of a pose estimation, to find the differences between the model and the image by using an artificial intelligence-assisted decision-making mechanism on the superimposed images, and to detect the said differences as defective/deformed parts in the product.
The electronic device (2) included in the inventive system (1) is a device such as a smartphone, tablet computer, augmented reality glasses or another augmented reality device that has at least one camera (3) with the capability to take images and at least one sensor (4) used for obtaining the position information of the camera (3). The camera (3) located on the electronic device (2) is an RGB camera. A depth camera or a lidar is used together with the RGB camera. The sensor (4) can be all or several of an accelerometer, a gyroscope and a magnetometer. The electronic device (2) is configured to establish a connection with the processing unit (6) by using any remote communication protocol included in the state of the art and to transmit the data it receives from the camera (3) and the sensor (4) to the processing unit (6) over this connection established.
The database (5) included in the inventive system (1) is in communication with the processing unit (6) and configured to ensure that information is read and written by the processing unit (6). The said database (5) is configured to keep record of the product models to be used in the quality control process performed by the processing unit (6), and of the margins of error valid for each product model therein.
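By way of illustration only, a minimal Python sketch of such a database (5) record is given below. The field names are hypothetical assumptions; the description only requires that each three-dimensional model be stored together with its vertex coordinates and the margins of error valid for that model.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class ProductModelRecord:
    """Hypothetical record for one product model in the database (5).

    The field names are illustrative; the description only requires that
    the three-dimensional model, its vertex coordinates and the margins
    of error valid for the model are kept on record.
    """
    product_id: str         # identifier of the product type
    vertices: np.ndarray    # (N, 3) vertex coordinates of the 3D model
    faces: np.ndarray       # (M, 3) vertex indices forming triangles
    margin_of_error: float  # acceptable deviation for this model
```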
The processing unit (6) included in the inventive system (1) is configured to establish communication with the electronic device (2) by using any remote communication protocol included in the state of the art or to be run on the electronic device (2), and to realize data exchange with the camera (3) and the sensor (4) of the electronic device (2) over this communication or connection established. The processing unit (6) is configured to create new data records in the database (5) and to access the data recorded in the database (5).
In one preferred embodiment of the invention, the processing unit (6) is configured to extract the information of the three-dimensional model from the database (5) for pose estimation and to transfer the vertex coordinates of the said model from the three-dimensional plane to the two-dimensional plane by using them together with the calibration parameters. The processing unit (6) is configured to overlap the three-dimensional model with the image, obtained from the camera (3), of the product that is manufactured from the three-dimensional model, by processing the characteristics of the three-dimensional model. The processing unit (6) is configured to access the position and rotation information of the model from the overlapped image following the overlapping process. The processing unit (6) is configured to perform an overlapping process for each image frame and to ensure that the three-dimensional model is displayed on the product in real time. In another preferred embodiment of the invention, the processing unit (6) is configured to track the motions of the camera (3) by placing and fixing the three-dimensional model into its position on the real-world plane.
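The description does not fix a particular projection routine. The following sketch, assuming OpenCV-style calibration parameters (an intrinsic matrix and distortion coefficients, both illustrative values), shows one conventional way to transfer model vertex coordinates from the three-dimensional plane to the two-dimensional image plane.

```python
import cv2
import numpy as np


def project_model_vertices(vertices, rvec, tvec, camera_matrix, dist_coeffs):
    """Transfer (N, 3) model vertex coordinates to the 2D image plane.

    rvec/tvec are the model pose from pose estimation; camera_matrix and
    dist_coeffs are the calibration parameters of the camera (3).
    """
    image_points, _ = cv2.projectPoints(
        vertices.astype(np.float64), rvec, tvec, camera_matrix, dist_coeffs)
    return image_points.reshape(-1, 2)


# Assumed intrinsics for a 1920x1080 mobile camera (fx, fy, cx, cy):
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assuming negligible lens distortion
```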
The processing unit (6) included in the inventive system (1) is configured to use the model position and rotation matrices calculated by means of a pose estimation algorithm in order to superimpose and compare the model and the image, and to render the image of the model from the current camera (3) viewpoint. The processing unit (6) is configured to render an RGB image and, if there is a depth camera or a lidar in the system, a depth image by processing the three-dimensional model.
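Rendering details are likewise left open. As a crude sketch of producing a depth image from the posed model, assuming the vertices and pose from the previous step, one can transform the vertices into camera space and splat their depths; a production renderer would rasterize the model's triangles with a z-buffer instead.

```python
import cv2
import numpy as np


def splat_depth_image(vertices, rvec, tvec, camera_matrix, shape=(1080, 1920)):
    """Crude depth image of the model at the estimated pose.

    Transforms vertices into camera space, projects them, and keeps the
    nearest depth per pixel; vertex splatting stands in here for proper
    triangle rasterization with a z-buffer.
    """
    rot, _ = cv2.Rodrigues(rvec)                   # rotation vector -> 3x3 matrix
    cam_pts = vertices @ rot.T + tvec.reshape(3)   # model space -> camera space
    z = cam_pts[:, 2]
    proj = cam_pts @ camera_matrix.T               # pinhole projection
    uv = (proj[:, :2] / proj[:, 2:3]).astype(int)  # perspective divide
    depth = np.full(shape, np.inf)
    height, width = shape
    for (u, v), d in zip(uv, z):
        if d > 0 and 0 <= v < height and 0 <= u < width:
            depth[v, u] = min(depth[v, u], d)
    return depth
```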
The processing unit (6) is configured to extract features by using the RGB image and, if available, a depth camera view or a lidar image. The processing unit (6) is configured to compare the three-dimensional model and the camera (3) image by means of the features it obtains, using an artificial intelligence-assisted decision-making mechanism, and to detect the defective parts on the product. The processing unit (6) is configured to share the defective parts it detects with the users over an image by means of the electronic device (2). Thereby, the users can take notes on the image and keep records of the said images.
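Neither the features nor the artificial intelligence-assisted decision-making mechanism is specified in the document. The sketch below substitutes a simple edge-difference comparison between the rendered model image and the camera (3) image and returns the regions where they disagree, purely as an illustrative stand-in for the claimed mechanism.

```python
import cv2
import numpy as np


def find_defect_regions(rendered_gray, camera_gray, min_area=50):
    """Flag regions where the rendered model and the camera image disagree."""
    edges_model = cv2.Canny(rendered_gray, 50, 150)
    edges_photo = cv2.Canny(camera_gray, 50, 150)
    diff = cv2.absdiff(edges_model, edges_photo)        # mismatching edges
    diff = cv2.dilate(diff, np.ones((5, 5), np.uint8))  # merge nearby pixels
    contours, _ = cv2.findContours(diff, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only regions large enough to matter; return their bounding boxes
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```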
The processing unit (6) included in the inventive system (1) is configured to create a weight value for each feature it extracts by using the three-dimensional model and the camera (3) image in order to set the precision of object control, and to perform object control on the basis of the acceptable margins of error determined, according to the said weight values, for objects that are generated by means of different methods. The processing unit (6) is configured to divide the three-dimensional model into grids at the control stage and to ensure that the control is performed in parallel on a processor, either on a server or on the electronic device (2). The processing unit (6) included in the inventive system (1) is configured to be run on desktop workstations and to process the images obtained from a fixed RGB camera (3) and, if available, a depth camera or a lidar. In the said usage, objects to be controlled move on a conveyor belt or by means of any other method, and it is detected whether the objects passing by the camera (3) are defective or faulty.
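As a sketch of the grid-based, parallel control stage, assuming grid-shaped weight values and a single scalar margin of error (both illustrative), the overlapped images can be split into cells that are scored concurrently:

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np


def cell_score(args):
    """Weighted difference score for one grid cell."""
    rendered_cell, camera_cell, weight = args
    return weight * float(np.mean(np.abs(
        rendered_cell.astype(float) - camera_cell.astype(float))))


def grid_control(rendered, camera, weights, grid=(8, 8), margin=4.0):
    """Split the overlapped images into a grid, score cells in parallel,
    and report the cells whose weighted score exceeds the margin of error."""
    h, w = rendered.shape[:2]
    gh, gw = h // grid[0], w // grid[1]
    cells = [(rendered[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw],
              camera[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw],
              weights[r, c])
             for r in range(grid[0]) for c in range(grid[1])]
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(cell_score, cells))
    return [i for i, s in enumerate(scores) if s > margin]
```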
In another preferred embodiment of the invention, the processing unit (6) included in the inventive system (1) is configured to be run on a laptop, tablet, phone or augmented reality glasses, to superimpose the produced object and the three-dimensional model, and then to perform motion tracking by using the image obtained from the electronic device (2) and the depth, accelerometer, gyroscope and magnetometer sensor information. The processing unit (6) ensures, by means of motion tracking, that the model remains stable on an object to be controlled even if the camera (3) is on the move, and that a control process can be performed on the parts determined by the user or over the entire object in general.
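The motion tracking itself is not detailed. The sketch below only illustrates the anchoring idea, assuming the tracker reports the camera's world pose each frame (as ARKit/ARCore-style sensor fusion does): the model's pose in camera coordinates is recomputed so that it stays fixed on the real-world plane.

```python
import numpy as np


def model_pose_in_camera(camera_to_world, anchor_to_world):
    """Keep the model fixed at its world anchor while the camera (3) moves.

    camera_to_world: 4x4 camera pose reported by the motion tracker
    anchor_to_world: 4x4 pose at which the model was fixed on the
                     real-world plane
    Returns the 4x4 model pose in camera coordinates, which is what the
    renderer needs each frame.
    """
    world_to_camera = np.linalg.inv(camera_to_world)
    return world_to_camera @ anchor_to_world
```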
In a further preferred embodiment of the invention, the processing unit (6) included in the inventive system (1) is configured to operate on a remote server and to realize data exchange with the electronic device (2) by using any remote communication protocol. Thereby, the processing unit (6) is configured to process the sensor (4) and camera (3) data of the electronic device (2) on a remote server and to share the overlapping images with the electronic device (2). Thus, the users can freely view the controlled object on the image shared on the electronic device (2) from any angle at which they can see it, which ensures that the requested parts of the object, or the object as a whole, can be controlled.
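As an illustration of this remote-server embodiment, assuming a hypothetical HTTP endpoint (the description requires only "any remote communication protocol"), the electronic device (2) could upload each JPEG-encoded frame with its sensor (4) readings and receive the overlapped image back:

```python
import cv2
import numpy as np
import requests  # HTTP is an assumed transport; any remote protocol would do


def send_frame(frame_bgr, sensor_data, url="http://server.example/overlay"):
    """Upload one camera (3) frame plus sensor (4) readings; return overlay.

    The endpoint URL and payload layout are assumptions made for this
    illustration only.
    """
    ok, jpeg = cv2.imencode(".jpg", frame_bgr)
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    resp = requests.post(
        url,
        files={"frame": ("frame.jpg", jpeg.tobytes(), "image/jpeg")},
        data=sensor_data,  # e.g. {"gyro_x": "0.01", "accel_z": "9.81"}
        timeout=5,
    )
    resp.raise_for_status()
    # The server is assumed to return the overlapped image as JPEG bytes
    return cv2.imdecode(np.frombuffer(resp.content, np.uint8), cv2.IMREAD_COLOR)
```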
With the inventive system (1), it is possible to show errors and deformations on an object or product on an image by superimposing the three-dimensional model information and the camera (3) images of serially produced objects/products in factories. Thereby, control processes of objects or products available in different types and having a plurality of parts can be performed quickly and reliably. The camera (3) and the sensors (4) included in a tablet, mobile phone or augmented reality device are used for the said control processes instead of high-cost industrial cameras and laser scanners. Thus, quality control processes are performed at low cost.
Within these basic concepts, it is possible to develop various embodiments of the inventive “Object Control System (1)”; the invention cannot be limited to the examples disclosed herein and is essentially as defined in the claims.

Claims

1. A system (1) for detecting the differences included in the model but not included in the product or included in the product but not included in the model upon ensuring that a three-dimensional model of the product is superimposed on the product by using cameras and sensors; comprising at least one electronic device (2) which is configured to have at least one camera (3) present thereon, to establish communication with remote servers by using any remote communication protocol, and/or to upload or keep record of three-dimensional models; at least one database (5) which is configured to keep record of three-dimensional models of products to be subjected to a control process, together with vertex coordinate data; and characterized by at least one processing unit (6) which is configured to establish communication with the electronic device (2) by using any remote communication protocol or to be run on the electronic device (2), to realize data exchange with the electronic device (2) over this communication or connection established, to receive the image data of the object -that is obtained by the camera (3) and will be controlled- from the electronic device (2), to access the database (5), to superimpose the three-dimensional model of the object -images of which are taken by the camera (3)- from the database (5) and the image of the product received from the camera (3) by means of a pose estimation, to find the differences between the model and the image by using an artificial intelligence-assisted decision-making mechanism on the superimposed images, and to detect the said differences as defective/deformed parts in the product.
2. A system (1) according to Claim 1; characterized by the electronic device (2) which is a device such as a smartphone, tablet computer, augmented reality glasses or another augmented reality device that has at least one camera (3) with the capability to take images and at least one sensor (4) used for obtaining the position information of the camera (3).
3. A system (1) according to Claim 1 or 2; characterized by the sensor (4) which can be all or several of an accelerometer, a gyroscope and a magnetometer.
4. A system (1) according to any of the preceding claims; characterized by the electronic device (2) which is configured to establish connection with the processing unit (6) by using any remote communication protocol and to transmit the data it receives from the camera (3) and the sensor (4) to the processing unit (6) over this connection established.
5. A system (1) according to any of the preceding claims; characterized by the database (5) which is configured to keep record of the product models to be used in a quality control process performed by the processing unit (6), and of the margins of error valid for each product model therein.
6. A system (1) according to any of the preceding claims; characterized by the processing unit (6) which is configured to extract the information of the three-dimensional model from the database (5) for pose estimation and to transfer the vertex coordinates of the said model from three-dimensional plane to two-dimensional plane by using them together with the calibration parameters.
7. A system (1) according to any of the preceding claims; characterized by the processing unit (6) which is configured to overlap the three-dimensional model with the image, obtained from the camera (3), of the product that is manufactured from the three-dimensional model, by processing the characteristics of the three-dimensional model.
8. A system (1) according to any of the preceding claims; characterized by the processing unit (6) which is configured to access the position and rotation information of the model from the overlapped image following the overlapping process.
9. A system (1) according to any of the preceding claims; characterized by the processing unit (6) which is configured to perform an overlapping process for each image frame and to ensure that the three-dimensional model is displayed on the product in real time.
10. A system (1) according to any of the preceding claims; characterized by the processing unit (6) which is configured to use the model position and rotation matrices calculated by means of a pose estimation algorithm in order to superimpose and compare the model and the image, and to render the image of the model from the current camera (3) viewpoint.
11. A system (1) according to any of the preceding claims; characterized by the processing unit (6) which is configured to render an RGB image and, if there is a depth camera or a lidar in the system, a depth image by processing the three-dimensional model.
12. A system (1) according to Claim 11; characterized by the processing unit (6) which is configured to extract features by using an RGB image and if available, a depth camera view or a lidar image.
13. A system (1) according to any of the preceding claims; characterized by the processing unit (6) which is configured to compare the three-dimensional model and the camera (3) image by means of the features it obtains, using an artificial intelligence-assisted decision-making mechanism, and to detect the defective parts on the product.
14. A system (1) according to any of the preceding claims; characterized by the processing unit (6) which is configured to share the defective parts it detects, with the users over an image by means of the electronic device (2).
15. A system (1) according to any of the preceding claims; characterized by the processing unit (6) which is configured to create a weight value for each feature it extracts by using the three-dimensional model and the camera (3) image in order to set the precision of object control, and to perform object control on the basis of the acceptable margins of error determined for the objects that are generated by means of different methods according to the said reference values.
16. A system (1) according to any of the preceding claims; characterized by the processing unit (6) which is configured to be run on desktop workstations and to process the images obtained from a fixed RGB camera (3) and if available, a depth camera or a lidar.
17. A system (1) according to any of the preceding claims; characterized by the processing unit (6) which is configured to be run on a laptop, tablet, phone or augmented reality glasses and to superimpose the produced object and the three-dimensional model and then to perform motion tracking by using the image obtained from the electronic device (2), the sensor information of depth, accelerometer, gyroscope and magnetometer.
18. A system (1) according to any of the preceding claims; characterized by the processing unit (6) which is configured to operate on a remote server and to realize data exchange with the electronic device (2) by using any remote communication protocol.
19. A system (1) according to Claim 18; characterized by the processing unit (6) which is configured to process the sensor (4) and camera (3) data of the electronic device (2) on a remote server and to share the overlapping images with the electronic device (2).
PCT/TR2023/050229 2022-03-11 2023-03-08 An object control system WO2023172231A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TR2022/003687 TR2022003687A2 (en) 2022-03-11 AN OBJECT CONTROL SYSTEM
TR2022003687 2022-03-11

Publications (1)

Publication Number Publication Date
WO2023172231A1 WO2023172231A1 (en)

Family

ID=87935684

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/TR2023/050229 WO2023172231A1 (en) 2022-03-11 2023-03-08 An object control system

Country Status (1)

Country Link
WO (1) WO2023172231A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019067641A1 (en) * 2017-09-26 2019-04-04 Aquifi, Inc. Systems and methods for visual inspection based on augmented reality
WO2020223594A2 (en) * 2019-05-02 2020-11-05 Kodak Alaris, Inc Automated 360-degree dense point object inspection
US20210149359A1 (en) * 2019-11-18 2021-05-20 Rockwell Automation Technologies, Inc. Remote support via visualizations of instructional procedures

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23767268

Country of ref document: EP

Kind code of ref document: A1