CN117549314B - Industrial robot intelligent control system based on image recognition - Google Patents
- Publication number
- CN117549314B (application CN202410027856.1A)
- Authority
- CN
- China
- Prior art keywords
- frame
- route
- data
- module
- image data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J9/161—Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The invention discloses an intelligent control system for an industrial robot based on image recognition, and relates to the field of intelligent control. The system comprises a control terminal connected by wireless communication to a data acquisition module, a data processing module, a target recognition module, a route virtual verification module and a path trimming module. The data acquisition module acquires image data of the industrial robot's working scene and captures environment data of that scene; the data processing module processes the image data to obtain a frame object database; the target recognition module recognizes object frames according to the frame object database; the route virtual verification module runs virtual simulation experiments on the work planning route to obtain a trimming data set; the path trimming module obtains the abnormal route from the trimming data set and trims it to obtain the working route, thereby realizing route control of the industrial robot.
Description
Technical Field
The invention relates to the field of intelligent control, in particular to an intelligent control system of an industrial robot based on image recognition.
Background
In the prior art, when an industrial robot acquires a target object, it cannot perform path planning and motion control according to the position and posture information of that object; an intelligent control system for industrial robots based on image recognition is therefore provided.
Disclosure of Invention
To solve the above technical problem, the invention provides an intelligent control system for an industrial robot based on image recognition.
The aim of the invention is achieved by the following technical scheme: an intelligent control system for an industrial robot based on image recognition comprises a control terminal, the control terminal being connected by wireless communication to a data acquisition module, a data processing module, a target recognition module, a route virtual verification module and a path trimming module;
the data acquisition module comprises a camera acquisition unit and a sensor acquisition unit, and the camera acquisition unit is used for setting a camera to acquire image data of an industrial robot working scene; the sensor acquisition unit is used for capturing environmental data of an industrial robot working scene;
the data processing module comprises an image data processing unit and an environment data processing unit; the image data processing unit is used for processing the image data to obtain image frame data; the environment data processing unit is used for processing the environment data to obtain a frame object database;
the target recognition module is used for recognizing image frame data according to the frame object database;
the route virtual verification module is used for carrying out virtual simulation experiments on a preset work planning route, setting key frames and inserting the key frames into the abnormal route;
the path trimming module is used for acquiring an abnormal route according to the trimming data set, trimming the abnormal route and acquiring a working route.
Further, the environmental data includes location, distance, temperature, and attitude information.
Further, the image data processing unit is configured to process the image data, and the process of obtaining the image frame data includes:
preprocessing the image data to obtain high-quality image data; establishing an object extraction network and acquiring the objects in the high-quality image data based on it; framing the objects to obtain image frame data;
the preprocessing comprises denoising, filtering and size adjustment; the size adjustment scales all images to a uniform size, yielding the high-quality image data;
the object extraction network is used for acquiring image frame data in the image data.
Further, the process of establishing the object extraction network includes:
setting an object identification unit, an acquisition frame unit and a frame object acquisition unit, and sequentially connecting the object identification unit, the acquisition frame unit and the frame object acquisition unit with each other to establish an object extraction network;
the object recognition unit is used for recognizing all objects in the high-quality image data, the object recognition unit receives the high-quality image data, marks the recognized objects in the high-quality image data, and generates recognition object image data.
Further, the process of obtaining the frame object database includes:
the acquisition frame unit performs frame processing on the marked objects in the received recognized-object image data: it frames each marked object with a frame of a different color and generates an object frame from the framed object;
the object frame is sent to a frame object acquisition unit for processing, and according to the processing result, a frame object is acquired;
the frame object acquisition unit is used for acquiring objects in the frame according to the object frame and generating frame objects;
and a trunk feature extraction network is arranged in the frame object acquisition unit and is used for extracting trunk features of objects in the object frame and acquiring object names according to the trunk features.
Further, the frame position and frame posture information of each frame object in the recognized-object image data are obtained and marked, and the object name, frame object, frame position and frame posture information are associated to generate the frame object database.
Further, the process of the object recognition module recognizing the object frame according to the frame object database includes:
a target object name is set in the target recognition module and sent to the frame object database; the object name corresponding to the target object name is identified, the corresponding frame object and its frame position and frame posture information are acquired, and the frame position and frame posture information are marked as the target object position and target posture information; the work planning route is then obtained from the target object position and target posture information.
Further, the process of performing the virtual simulation experiment on the work planning route by the route virtual verification module includes:
the work planning route is sent to the route virtual verification module, real-time environment data are obtained along the work planning route, and the work planning route is verified against the real-time environment data;
an alarm device connected to the key frames is set; if the alarm device raises an alarm signal during verification of the work planning route, the route segment corresponding to the alarm signal is marked as an abnormal route, a key frame is started and inserted into the abnormal route, and the distance between the position corresponding to the key frame and the position of the target object is obtained as the residual distance; the key frame and the residual distance are associated to generate the trimming data set, which is sent to the path trimming module.
Further, the path trimming module receives the trimming data set, extracts the key frames, acquires the positions corresponding to the key frames and the distances corresponding to those positions, and re-trims the abnormal route to obtain the working route.
Compared with the prior art, the beneficial effects of the invention are as follows: the data acquisition module comprises a camera acquisition unit and a sensor acquisition unit; the camera acquisition unit sets a camera to acquire image data of the industrial robot's working scene, and the sensor acquisition unit captures environment data of that scene. The image data and environment data are sent to the data processing module, where the image data processing unit obtains image frame data and the environment data processing unit processes the environment data to obtain the frame object database. The target recognition module recognizes object frames according to the frame object database; the route virtual verification module runs virtual simulation experiments on the work planning route and sets key frames for insertion into the abnormal route; finally, the path trimming module obtains the abnormal route from the trimming data set and trims it to obtain the working route, realizing path planning and motion control of the industrial robot.
Drawings
To describe the embodiments of the present application or the prior-art solutions more clearly, the drawings needed in the embodiments are briefly introduced below; the drawings described below show only some embodiments of the invention, and a person skilled in the art may derive other drawings from them.
Fig. 1 is a schematic diagram of the present invention.
Detailed Description
Embodiment 1: as shown in Fig. 1, an intelligent control system for an industrial robot based on image recognition comprises a control terminal, the control terminal being connected by wireless communication to a data acquisition module, a data processing module, a target recognition module, a route virtual verification module and a path trimming module;
the data acquisition module comprises a camera acquisition unit and a sensor acquisition unit, and the camera acquisition unit is used for setting a camera to acquire image data of an industrial robot working scene; the sensor acquisition unit is used for capturing environmental data of an industrial robot working scene;
the image data include objects and non-objects; the environment data comprise position, distance, temperature and posture information;
it should further be noted that the sensors include a laser scanner, a distance sensor and an attitude sensor; in a specific implementation, the laser scanner acquires the position, the distance sensor acquires the distance, and the attitude sensor acquires the posture information.
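The four environment channels named above could be bundled into a single record for the data processing module. The sketch below is illustrative only: the `EnvironmentData` fields and the `fuse_readings` helper are assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentData:
    """One fused reading of the working scene (field names are illustrative)."""
    position: tuple      # (x, y, z) from the laser scanner
    distance: float      # range reading from the distance sensor
    temperature: float   # ambient temperature
    posture: tuple       # (roll, pitch, yaw) from the attitude sensor

def fuse_readings(scanner_xyz, range_reading, temp, rpy):
    # Bundle the four sensor channels into one record for the data processing module.
    return EnvironmentData(position=scanner_xyz, distance=range_reading,
                           temperature=temp, posture=rpy)

env = fuse_readings((1.2, 0.4, 0.0), 0.85, 23.5, (0.0, 0.0, 1.57))
```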
The data processing module comprises an image data processing unit and an environment data processing unit; the image data processing unit is used for processing the image data to obtain image frame data; the environment data processing unit is used for processing the environment data to obtain real-time environment data;
Embodiment 2: as shown in Fig. 1, the process by which the image data processing unit processes the image data to obtain image frame data comprises:
preprocessing the image data to obtain high-quality image data; establishing an object extraction network, and acquiring objects in high-quality image data based on the object extraction network; framing the object to obtain image frame data;
the preprocessing comprises denoising, filtering and size adjustment; the size adjustment scales all images to a uniform size, yielding the high-quality image data;
it should further be noted that in a specific implementation the target size is chosen according to the complexity of the image and the extent of the image data, and the denoised and filtered image is resized to a size favourable for further processing;
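As a minimal illustration of the denoise/filter/resize step, the sketch below applies a 3×3 mean filter and nearest-neighbour resampling to a uniform size; a real system would likely use dedicated image-processing routines, and the function name, filter, and target size are assumptions.

```python
import numpy as np

def preprocess(img, size=(64, 64)):
    """Denoise with a 3x3 mean filter, then resize (nearest-neighbour)
    to a uniform size; the filter and target size are illustrative."""
    img = img.astype(np.float32)
    h, w = img.shape
    # 3x3 mean filter via an edge-padded neighbourhood average (denoising).
    p = np.pad(img, 1, mode="edge")
    den = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    # Nearest-neighbour resampling to the uniform processing size.
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return den[rows][:, cols]

out = preprocess(np.random.rand(48, 80))  # any input size maps to 64x64
```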
the object extraction network is used for acquiring image frame data in the image data, and the process for establishing the object extraction network comprises the following steps:
setting an object identification unit, an acquisition frame unit and a frame object acquisition unit, and sequentially connecting the object identification unit, the acquisition frame unit and the frame object acquisition unit with each other to establish an object extraction network;
the object recognition unit is used for recognizing all objects in the high-quality image data, the object recognition unit receives the high-quality image data, marks the recognized objects in the high-quality image data, and generates recognition object image data;
the image data of the identification object is sent to an acquisition frame unit for acquiring the frame;
the acquisition frame unit performs frame processing on the marked objects in the received recognized-object image data, the frame processing being: framing each marked object in the recognized-object image data with a frame of a different color and generating the corresponding object frame;
the object frame is sent to the frame object acquisition unit for processing and the frame object is obtained from the processing result; the frame object is then sent to a screening unit for screening, and a complete frame object is obtained from the screening result;
the frame object acquisition unit is used for acquiring objects in the frame according to the object frame and generating frame objects;
a trunk feature extraction network is set on the frame object acquisition unit and extracts the trunk features of the objects within the object frame; the trunk feature extraction network is specifically a trained deep convolutional neural network model;
the frame object acquisition unit receives an object frame, obtains the trunk features of the objects in it via the trunk feature extraction network, and recognizes the objects from those features; the recognition result is the object name;
it should further be noted that the trunk features represent the dominant characteristics of the objects in the image data; for example, the trunk features of a table and a chair in the high-quality image data yield the object names "table" and "chair";
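The backbone-to-name step can be caricatured as nearest-prototype matching: a trained trunk network would emit a feature vector, which is compared against per-class prototypes to yield an object name. The prototype values and class names below are invented for this sketch and stand in for the trained deep CNN the patent describes.

```python
import numpy as np

# Illustrative "trunk features": per-class prototype vectors such as a trained
# backbone might emit (values and class names are invented for this sketch).
PROTOTYPES = {
    "table": np.array([0.9, 0.1, 0.2]),
    "chair": np.array([0.2, 0.8, 0.3]),
}

def name_from_features(feat):
    """Return the object name whose prototype lies nearest to the extracted
    trunk feature vector -- a stand-in for the backbone + classifier."""
    return min(PROTOTYPES, key=lambda name: np.linalg.norm(PROTOTYPES[name] - feat))

name = name_from_features(np.array([0.85, 0.15, 0.25]))  # nearest to "table"
```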
the position and posture information of each frame object in the recognized-object image data are acquired and marked as frame position and frame posture information, and the object name, frame object, frame position and frame posture information are associated to generate the frame object database;
it should further be noted that the frame objects obtained, and their object names, are not necessarily the required target, so the industrial robot must identify the target accurately during operation in order to execute its work precisely;
and sending the frame object database to a target recognition module for target recognition.
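The data association that produces the frame object database can be sketched as a name-keyed mapping; the schema (`frame_object`, `position`, `posture`) is illustrative, not the patent's storage format.

```python
def build_frame_object_database(entries):
    """Associate each object name with its frame object, frame position and
    frame posture, keyed by name for the target recognition module."""
    db = {}
    for name, frame_obj, position, posture in entries:
        db.setdefault(name, []).append(
            {"frame_object": frame_obj, "position": position, "posture": posture})
    return db

db = build_frame_object_database([
    ("table", "crop_01", (2.0, 1.0), (0.0, 0.0, 0.3)),
    ("chair", "crop_02", (2.5, 0.4), (0.0, 0.0, 1.1)),
])
```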
The target recognition module is used for recognizing the object frame according to the frame object database, and the specific process comprises the following steps:
a target object name is set in the target recognition module and sent to the frame object database; the object name corresponding to the target object name is identified, the corresponding frame object and its frame position and frame posture information are acquired, and these are marked as the target object position and target posture information; the work planning route is then obtained from the target object position and target posture information;
it should further be explained that if the target object name is not found in the frame object database, i.e. no frame object is obtained, the image frame data must be re-acquired; the re-acquisition is not random, but captures different image frame data offset from the position of the previous capture, so that the target is identified accurately and acquisition time is saved;
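The lookup-with-reacquisition logic described above might look like the following sketch, where `locate_target` and its return convention are assumptions.

```python
def locate_target(db, target_name):
    """Look the target name up in the frame object database; if it is absent,
    signal that new image frame data must be captured (offset from the last
    capture position rather than at random, as the description notes)."""
    hits = db.get(target_name)
    if not hits:
        return None, "reacquire"
    rec = hits[0]
    return (rec["position"], rec["posture"]), "found"

sample_db = {"table": [{"frame_object": "crop_01",
                        "position": (2.0, 1.0),
                        "posture": (0.0, 0.0, 0.3)}]}
target, status = locate_target(sample_db, "table")
```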
the route virtual verification module is used for carrying out virtual simulation experiments on the work planning route, setting key frames and inserting the key frames into the abnormal route, and the specific process comprises the following steps:
the work planning route is sent to the route virtual verification module, which receives it, acquires real-time environment data, and verifies the work planning route against the real-time environment data;
an alarm device connected to the key frames is set; if the alarm device raises an alarm signal during verification of the work planning route, the route segment corresponding to the alarm signal is marked as an abnormal route, a key frame is started and inserted into the abnormal route, and the distance between the position corresponding to the key frame and the position of the target object is obtained as the residual distance; the key frame and the residual distance are associated to generate the trimming data set, which is sent to the path trimming module;
it should further be noted that the work planning route has no accuracy guarantee or experimental history, and cannot be assumed accurate when the industrial robot is given a target, so it must be verified in advance.
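The verification step can be sketched as walking the planned waypoints in simulation and, wherever a clearance violation triggers the "alarm", recording a key frame together with the residual distance to the target. The clearance test below stands in for the patent's alarm device; all names and thresholds are assumptions.

```python
import math

def verify_route(route, target, obstacles, clearance=0.5):
    """Walk the planned waypoints in simulation; wherever a waypoint comes
    within `clearance` of an obstacle the alarm fires, a key frame is recorded,
    and the residual straight-line distance to the target is stored.
    Returns the trimming data set as (key_frame_index, residual_distance)."""
    trimming = []
    for i, wp in enumerate(route):
        if any(math.dist(wp, ob) < clearance for ob in obstacles):
            residual = math.dist(wp, target)
            trimming.append((i, round(residual, 3)))
    return trimming

trimming_dataset = verify_route(route=[(0, 0), (1, 0), (2, 0)],
                                target=(3, 0), obstacles=[(1.2, 0.1)])
```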
The path trimming module is used for acquiring an abnormal route according to the trimming data set, trimming the abnormal route and acquiring a working route, and the specific process comprises the following steps:
the trimming data set is received, the key frames are extracted, the positions corresponding to the key frames and the distances corresponding to those positions are acquired, and the abnormal route of the work planning route is re-trimmed according to these positions to obtain the working route.
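The re-trimming step might be sketched as displacing each key-frame waypoint by a fixed lateral offset, a placeholder for a genuine local re-planning routine; the offset and function name are assumptions.

```python
def trim_route(route, trimming_dataset, offset=(0.0, 0.6)):
    """Re-trim the abnormal route: displace each key-frame waypoint by a fixed
    lateral offset (a placeholder for a real local re-planning step) and
    return the corrected working route."""
    working = list(route)
    for key_frame, _residual in trimming_dataset:
        x, y = working[key_frame]
        working[key_frame] = (x + offset[0], y + offset[1])
    return working

working_route = trim_route([(0, 0), (1, 0), (2, 0)], [(1, 2.0)])
```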
Working principle: the data acquisition module comprises a camera acquisition unit and a sensor acquisition unit; the camera acquisition unit sets a camera to acquire image data of the industrial robot's working scene, and the sensor acquisition unit captures environment data of that scene. The image data and environment data are sent to the data processing module, where the image data processing unit obtains image frame data and the environment data processing unit processes the environment data to obtain the frame object database. The target recognition module recognizes object frames according to the frame object database; the route virtual verification module runs virtual simulation experiments on the work planning route and sets key frames for insertion into the abnormal route; finally, the path trimming module obtains the abnormal route from the trimming data set and trims it to obtain the working route.
Features and exemplary embodiments of various aspects of the present application are described in detail above; to make the objects, technical solutions and advantages of the application clearer, the application has been described in further detail with reference to the drawings and specific embodiments. It should be understood that the particular embodiments described here are intended to illustrate the application, not to limit it; it will be apparent to one skilled in the art that the application may be practised without some of these specific details; the above description of embodiments is intended only to provide a better understanding of the application through its examples.
The above embodiments only illustrate the technical method of the invention and do not limit it; it should be understood by those skilled in the art that the technical method of the invention may be modified or equivalently substituted without departing from its spirit and scope.
Claims (1)
1. An intelligent control system for an industrial robot based on image recognition, comprising a control terminal, characterized in that the control terminal is connected by wireless communication to a data acquisition module, a data processing module, a target recognition module, a route virtual verification module and a path trimming module;
the data acquisition module comprises a camera acquisition unit and a sensor acquisition unit, and the camera acquisition unit is used for setting a camera to acquire image data of an industrial robot working scene; the sensor acquisition unit is used for capturing environmental data of an industrial robot working scene;
the data processing module comprises an image data processing unit and an environment data processing unit; the image data processing unit is used for processing the image data to obtain image frame data; the environment data processing unit is used for processing the environment data to obtain a frame object database;
the target recognition module is used for recognizing image frame data according to the frame object database;
the route virtual verification module is used for carrying out virtual simulation experiments on a preset work planning route, setting key frames and inserting the key frames into the abnormal route;
the path trimming module is used for acquiring an abnormal route according to the trimming data set, trimming the abnormal route and acquiring a working route;
the environment data comprise position, distance, temperature and posture information;
the image data processing unit is used for processing the image data, and the process of obtaining the image frame data comprises the following steps:
preprocessing the image data to obtain high-quality image data; establishing an object extraction network and acquiring the objects in the high-quality image data based on it; framing the objects to obtain image frame data;
the preprocessing comprises denoising, filtering and size adjustment; the size adjustment scales all images to a uniform size, yielding the high-quality image data;
the object extraction network is used for acquiring image frame data in the image data;
the process for establishing the object extraction network comprises the following steps:
setting an object identification unit, an acquisition frame unit and a frame object acquisition unit, and sequentially connecting the object identification unit, the acquisition frame unit and the frame object acquisition unit with each other to establish an object extraction network;
the object recognition unit is used for recognizing all objects in the high-quality image data, the object recognition unit receives the high-quality image data, marks the recognized objects in the high-quality image data, and generates recognition object image data;
the process for acquiring the frame object database comprises the following steps:
the acquisition frame unit performs frame processing on the marked objects in the received recognized-object image data: it frames each marked object with a frame of a different color and generates an object frame from the framed object;
the object frame is sent to a frame object acquisition unit for processing, and according to the processing result, a frame object is acquired;
the frame object acquisition unit is used for acquiring objects in the frame according to the object frame and generating frame objects;
setting a trunk feature extraction network in the frame object acquisition unit, and identifying an object through the trunk feature extraction network to obtain a corresponding object name;
acquiring frame position and frame posture information of a frame object in the image data of the identified object, marking the frame position and frame posture information, and carrying out data association on the object name, the frame object, the frame position and the frame posture information to generate a frame object database;
the process of the target recognition module for recognizing the object frame according to the frame object database comprises the following steps:
a target object name is set in the target recognition module and sent to the frame object database; the object name corresponding to the target object name is identified, the corresponding frame object and its frame position and frame posture information are acquired, and these are marked as the target object position and target posture information; the work planning route is then obtained from the target object position and target posture information;
the process of the virtual simulation experiment of the route virtual verification module for the work planning route comprises the following steps:
the working planning route is sent to a route virtual verification module, real-time environment data are obtained according to the working planning route, and verification is carried out on the working planning route according to the real-time environment data;
an alarm device connected to the key frames is set; if the alarm device raises an alarm signal during verification of the work planning route, the route segment corresponding to the alarm signal is marked as an abnormal route, a key frame is started and inserted into the abnormal route, and the distance between the position corresponding to the key frame and the position of the target object is obtained as the residual distance; the key frame and the residual distance are associated to generate the trimming data set, which is sent to the path trimming module;
the path trimming module receives the trimming dataset, extracts the key frame, acquires the position corresponding to the key frame and the remaining distance associated with it, and re-trims the abnormal route to obtain the final working route.
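A sketch of the re-trimming step, continuing the hypothetical trimming-dataset layout above; straight-line replanning from the key frame to the target is an illustrative stand-in for the patent's unspecified trimming method:

```python
def retrim_route(trimming_dataset, target_position, steps=2):
    """Replan from the key frame position to the target object position,
    producing the working route that replaces the abnormal segment."""
    start = trimming_dataset["key_frame"]
    return [tuple(s + i / steps * (g - s) for s, g in zip(start, target_position))
            for i in range(steps + 1)]

# Hypothetical trimming dataset: only the key frame is needed here
working_route = retrim_route({"key_frame": (1.0, 0.0, 0.0)},
                             target_position=(2.0, 0.0, 0.0))
```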
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410027856.1A CN117549314B (en) | 2024-01-09 | 2024-01-09 | Industrial robot intelligent control system based on image recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117549314A (en) | 2024-02-13
CN117549314B (en) | 2024-03-19
Family
ID=89818839
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410027856.1A Active CN117549314B (en) | 2024-01-09 | 2024-01-09 | Industrial robot intelligent control system based on image recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117549314B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112308688A (en) * | 2020-12-02 | 2021-02-02 | 杭州微洱网络科技有限公司 | Size meter detection method suitable for e-commerce platform |
CN112847321A (en) * | 2021-01-04 | 2021-05-28 | 扬州市职业大学(扬州市广播电视大学) | Industrial robot visual image recognition system based on artificial intelligence |
KR20210063701A (en) * | 2019-11-25 | 2021-06-02 | (주)이노시뮬레이션 | the method for configuring and controling real-time distributed autonomous driving simulation framework |
CN114092886A (en) * | 2021-11-29 | 2022-02-25 | 昆明理工大学 | Early warning method and system for potential safety hazards in residential corridor |
CN114940188A (en) * | 2022-06-14 | 2022-08-26 | 安徽工程大学 | Automatic driving auxiliary system of intelligent fire engine |
CN116224998A (en) * | 2023-01-05 | 2023-06-06 | 扬州工业职业技术学院 | Remote control industrial robot and control system thereof |
CN116824120A (en) * | 2023-06-14 | 2023-09-29 | 深圳大学 | Multiphase flow image processing method, device, equipment and medium based on microfluidics |
Non-Patent Citations (3)
Title |
---|
Research on target tracking technology balancing real-time performance and tracking accuracy against complex backgrounds; Li Qingsheng; Zhao Lijun; Zhang Zhifeng; Journal of Optoelectronics · Laser; 2020-02-15 (02); pp. 117-124 *
Chang Ming et al., Computer Graphics: Algorithms and Applications, Huazhong University of Science and Technology Press, 2009, p. 298 *
Design of a path planning system for palletizing robots; Duan Hailong; Automation Application; 2017-09-25 (09); pp. 72-75 *
Also Published As
Publication number | Publication date |
---|---|
CN117549314A (en) | 2024-02-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200234036A1 (en) | Multi-Camera Multi-Face Video Splicing Acquisition Device and Method Thereof | |
Kowsalya et al. | Attendance monitoring system using face detection & face recognition | |
CN105022999A (en) | Man code company real-time acquisition system | |
CN110969045B (en) | Behavior detection method and device, electronic equipment and storage medium | |
CN110941992B (en) | Smile expression detection method and device, computer equipment and storage medium | |
CN113936340B (en) | AI model training method and device based on training data acquisition | |
CN111145257A (en) | Article grabbing method and system and article grabbing robot | |
Mekala et al. | Face recognition based attendance system | |
CN110807391A (en) | Human body posture instruction identification method for human-unmanned aerial vehicle interaction based on vision | |
CN110378289B (en) | Reading and identifying system and method for vehicle identification code | |
CN109214278B (en) | User instruction matching method and device, computer equipment and storage medium | |
CN117549314B (en) | Industrial robot intelligent control system based on image recognition | |
CN113033297B (en) | Method, device, equipment and storage medium for programming real object | |
CN111241505A (en) | Terminal device, login verification method thereof and computer storage medium | |
KR102366396B1 (en) | RGB-D Data and Deep Learning Based 3D Instance Segmentation Method and System | |
CN110889847A (en) | Nuclear radiation damage assessment system and method based on infrared imaging | |
CN112347824A (en) | Wearing object identification method, device, equipment and storage medium | |
KR102664123B1 (en) | Apparatus and method for generating vehicle data, and vehicle system | |
WO2017219562A1 (en) | Method and apparatus for generating two-dimensional code | |
CN114299481A (en) | Vehicle identification code identification method and device and computer equipment | |
CN114241556A (en) | Non-perception face recognition attendance checking method and device | |
Muqeet, M.: Face recognition based attendance management system using Raspberry Pi | |
CN112597854A (en) | Non-matching type face recognition system and method | |
CN106022246A (en) | Difference-based patterned-background print character extraction system and method | |
CN113128545B (en) | Method and device for collecting sample by robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||