CN109840508A - Robot vision control method based on automatic deep network architecture search, device and storage medium - Google Patents
Robot vision control method based on automatic deep network architecture search, device and storage medium Download PDF Info
- Publication number
- CN109840508A CN109840508A CN201910118700.3A CN201910118700A CN109840508A CN 109840508 A CN109840508 A CN 109840508A CN 201910118700 A CN201910118700 A CN 201910118700A CN 109840508 A CN109840508 A CN 109840508A
- Authority
- CN
- China
- Prior art keywords
- searched
- network architecture
- robot vision
- deep neural
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Abstract
The present invention provides a robot vision control method based on automatic deep network architecture search: obtain the image data needed to train a deep neural network; obtain the user's annotations of the image data; establish a search space and search it for a deep neural network architecture using neural architecture search; during the search, test each candidate deep neural network architecture on the validation set and record the test results; compare the test results to obtain the model; acquire data, feed it to the deep neural network model, and compute a recognition result; convert the recognition result into the target position and pose of the robot arm end effector; compute a motion trajectory and send execution commands; and correct the trajectory according to the robot arm's current feedback information. The system can automatically configure a deep neural network according to the user's actual needs and deploy it in a robot vision control system, so users can realize individual requirements with the system.
Description
Technical field
The present invention relates to the technical field of image processing, and more particularly to a robot vision control method, device and storage medium based on automatic deep network architecture search.
Background technique
In modern intelligent robot control systems, vision-based measurement and control are essential. Early visual control mainly used hand-eye systems (hand-eye coordination) to compute the position and pose of a target object in the robot coordinate system. However, because early machine vision technology was immature, measured objects could only be identified with simple image-processing tools, and the robot arm was then controlled along a pre-specified trajectory. Such systems could generally handle only a single task; when the user had a new requirement, an expert typically had to redesign the system, making the development cost of each new application very high.
Deep learning is an important branch of machine learning. Because deep neural networks possess powerful representation-learning ability, the technology is widely used to solve vision-related tasks such as object detection, classification and image generation. Compared with traditional image processing, deep learning also offers end-to-end learning that traditional image-processing systems cannot match: the user only needs to collect and annotate enough data, and a pre-built deep neural network can directly output results after training. Since the hand-designed-feature step is no longer needed, even non-experts in image processing can build a deep learning system according to their own needs.
Although deep neural networks have the advantages above, several problems still prevent ordinary users from building their own applications with them. Current deep neural networks come as fixed models: when using them, users are limited to existing, frozen models and can hardly configure a network for their own parameter requirements or for the robot vision system already deployed. This makes deep neural networks hard to use, hard to adapt to a deployed robot vision system, and unable to meet the needs of real robot application scenarios.
Summary of the invention
To overcome the above deficiencies of the prior art, the present invention provides a robot vision control method based on automatic deep network architecture search. The method includes:
Step 1: obtain the image data needed to train a deep neural network;
Step 2: configure a graphical user interface and obtain the user's annotations of the image data;
Step 3: divide the image data into a training set, a validation set and a test set;
Step 4: establish a search space and search it for a deep neural network architecture using neural architecture search;
Step 5: during the search, test each candidate deep neural network architecture on the validation set and record the test results;
Step 6: compare the test results and take the architecture found in the search as the deep neural network model applied to the robot vision control system;
Step 7: deploy the trained deep neural network model in a preset application, acquire data through an RGB camera or depth camera, feed it to the model, and compute a recognition result;
Step 8: convert the recognition result into the target position and pose of the robot arm end effector;
Step 9: compute a motion trajectory from the robot arm end effector's current position and the target position, and send execution commands;
Step 10: correct the running trajectory according to the robot arm's current feedback information.
Preferably, step 2 further includes: for position-detection applications on two-dimensional image data, the annotation form is (x, y), where x and y are the horizontal and vertical coordinates of the annotated point on the image; for position-plus-orientation detection applications on two-dimensional image data, the annotation form is (x, y, θ), where θ indicates the direction.
Preferably, step 2 further includes: for position-detection applications on three-dimensional point-cloud data, the annotation form is (x, y, z); for position-plus-orientation detection applications on three-dimensional point-cloud data, the Euler-angle annotation form is (x, y, z, θ_x, θ_y, θ_z) and the quaternion annotation form is (x, y, z, q_x, q_y, q_z, w), where (x, y, z) is the coordinate of the annotated point in three-dimensional space, (θ_x, θ_y, θ_z) are the rotations about the x, y and z axes, and (q_x, q_y, q_z, w) are the quaternion components.
Preferably, step 3 further includes: for users without a GPU, the system uploads the data to a cloud server for model training.
Preferably, step 3 further includes: the data collected by the user are divided into training, validation and test sets in an 8:1:1 ratio.
Preferably, step 4 further includes: the parameters of the search space are: the number, stride and size of convolution kernels, the number of convolutional layers, the number of hidden-layer neurons, whether skip connections are used, and the activation-function type.
Preferably, the loss function used in step 5 is as follows:
where n is the number of data samples and k is the data dimension (k = 2 for image data, k = 3 for point-cloud data); p_l is the position annotated by the user and s_l its confidence, which defaults to 1;
p_θ and s_θ are results computed by the deep neural network, and θ_l is the direction; for position-only annotations, the direction term is absent.
Preferably, step 10 further includes: correcting the running trajectory according to the force-sensor data of the robot arm.
A device for implementing the robot vision control method based on automatic deep network architecture search, comprising:
a memory for storing a computer program and the robot vision control method based on automatic deep network architecture search; and
a processor for executing the computer program and the robot vision control method based on automatic deep network architecture search, so as to realize the steps of the method.
A computer-readable storage medium carrying the robot vision control method based on automatic deep network architecture search: a computer program is stored on the computer-readable storage medium, and the computer program is executed by a processor to realize the steps of the robot vision control method based on automatic deep network architecture search.
As can be seen from the above technical solutions, the invention has the following advantages:
The invention proposes a rapidly deployable robot vision control method based on automatic deep network architecture search. With this method, after collecting and annotating data according to their own needs, users can efficiently build robot applications such as grasping and assembly without expert involvement. The system satisfies users' individual requirements, so that small and medium-sized manufacturers and home users who cannot afford to set up and maintain a mechanized production line can still introduce mechanization into some of their processes.
Using neural architecture search, the invention constructs a rapidly deployable robot vision control system tailored to user demand. When requirements change, the user only needs to collect new data for training the neural network. The deep neural network can be configured according to the user's actual parameter requirements or the deployed robot vision system, so that the searched architecture adapts to the deployed system and meets the robot's practical use.
Detailed description of the invention
To explain the technical solution of the present invention more clearly, the drawings needed in the description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of the robot vision control method based on automatic deep network architecture search;
Fig. 2 is a schematic diagram of an embodiment of the present invention;
Fig. 3 is a schematic diagram of an embodiment of the present invention;
Fig. 4 is a schematic diagram of an embodiment of the present invention.
Specific embodiment
The present invention provides a robot vision control method based on automatic deep network architecture search. As shown in Fig. 1, the method includes:
Step 1: obtain the image data needed to train a deep neural network;
The image data include the position and pose information of the object to be detected.
Step 2: configure a graphical user interface and obtain the user's annotations of the image data;
It should further be noted that, for position-detection applications on two-dimensional image data, the annotation form is (x, y), where x and y are the horizontal and vertical coordinates of the annotated point on the image; for position-plus-orientation detection applications on two-dimensional image data, the annotation form is (x, y, θ), where θ indicates the direction.
For position-detection applications on three-dimensional point-cloud data, the annotation form is (x, y, z); for position-plus-orientation detection applications on three-dimensional point-cloud data, the Euler-angle annotation form is (x, y, z, θ_x, θ_y, θ_z) and the quaternion annotation form is (x, y, z, q_x, q_y, q_z, w), where (x, y, z) is the coordinate of the annotated point in three-dimensional space, (θ_x, θ_y, θ_z) are the rotations about the x, y and z axes, and (q_x, q_y, q_z, w) are the quaternion components.
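These annotation formats can be captured with simple record types. The following sketch is illustrative only; the class and function names (`Annotation2D`, `Annotation3D`, `to_vector`) are not from the patent:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple, Union


@dataclass
class Annotation2D:
    """2D image annotation: pixel position, optional direction theta."""
    x: float
    y: float
    theta: Optional[float] = None  # present only for pose-type labels


@dataclass
class Annotation3D:
    """3D point-cloud annotation: position plus optional orientation."""
    x: float
    y: float
    z: float
    # Euler-angle form (theta_x, theta_y, theta_z) ...
    euler: Optional[Tuple[float, float, float]] = None
    # ... or quaternion form (qx, qy, qz, w)
    quaternion: Optional[Tuple[float, float, float, float]] = None


def to_vector(a: Union[Annotation2D, Annotation3D]) -> List[float]:
    """Flatten an annotation into the label vector used for training."""
    if isinstance(a, Annotation2D):
        return [a.x, a.y] + ([a.theta] if a.theta is not None else [])
    v = [a.x, a.y, a.z]
    if a.euler is not None:
        v += list(a.euler)
    elif a.quaternion is not None:
        v += list(a.quaternion)
    return v
```

A position-only 2D label then flattens to `[x, y]`, while a quaternion 3D label flattens to the seven-element vector `(x, y, z, q_x, q_y, q_z, w)`.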
Step 3: divide the image data into a training set, a validation set and a test set;
For users without a GPU, the system uploads the data to a cloud server for model training.
Here the data collected by the user are divided into training, validation and test sets in an 8:1:1 ratio.
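The 8:1:1 split can be implemented in a few lines. This sketch (function name illustrative, not from the patent) shuffles before splitting so the three sets are drawn uniformly from the collected data:

```python
import random


def split_dataset(samples, ratios=(8, 1, 1), seed=0):
    """Shuffle and split samples into train/validation/test sets by ratio."""
    rng = random.Random(seed)
    shuffled = samples[:]       # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    total = sum(ratios)
    n = len(shuffled)
    n_train = n * ratios[0] // total
    n_val = n * ratios[1] // total
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]   # remainder goes to the test set
    return train, val, test
```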
Step 4: establish a search space and search it for a deep neural network architecture using neural architecture search;
The parameters of the search space are: the number, stride and size of convolution kernels, the number of convolutional layers, the number of hidden-layer neurons, whether skip connections are used, and the activation-function type.
Step 5: during the search, test each candidate deep neural network architecture on the validation set and record the test results;
It should further be noted that the loss function used is as follows:
where n is the number of data samples and k is the data dimension (k = 2 for image data, k = 3 for point-cloud data); p_l is the position annotated by the user and s_l its confidence, which defaults to 1;
p_θ and s_θ are results computed by the deep neural network, and θ_l is the direction; for position-only annotations, the direction term is absent.
Step 6: compare the test results and take the architecture found in the search as the deep neural network model applied to the robot vision control system;
Step 7: deploy the trained deep neural network model in a preset application, acquire data through an RGB camera or depth camera, feed it to the model, and compute a recognition result;
Step 8: convert the recognition result into the target position and pose of the robot arm end effector;
Step 9: compute a motion trajectory from the robot arm end effector's current position and the target position, and send execution commands;
Step 10: correct the running trajectory according to the robot arm's current feedback information, specifically according to the arm's force-sensor data.
The method and apparatus of the present invention may be implemented in many ways: by software, hardware, firmware, or any combination of software, hardware and firmware. The order of the method steps described above is for illustration only; the steps of the method of the invention are not limited to that order unless otherwise specified. In addition, in some embodiments, the invention may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the method according to the invention. Thus, the invention also covers recording media storing programs for executing the method according to the invention.
The technology described herein may be implemented in hardware, software, firmware, or any combination thereof. The various features described as modules, units or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices or other hardware devices. In some cases, various features of electronic circuits may be implemented as one or more integrated circuit devices, such as IC chips or chipsets.
The technical solution of the present invention is further illustrated with a specific embodiment. As shown in Figs. 2 to 4, the system is divided into an offline learning stage and an online application stage. The offline learning stage comprises the following steps:
The user collects the raw data required by the application; the data may be two-dimensional RGB images or three-dimensional point clouds. To improve recognition accuracy, the collected data should, as far as possible, contain the object to be detected at different positions and orientations in the image or in space. For example, when detecting a single object in images, pictures of the object at different positions and orientations should be collected, with no fewer than 100 images.
The user annotates the raw data through the system's graphical user interface. For point (position) annotations on two-dimensional images, the annotation form is (x, y), where x and y are the horizontal and vertical coordinates of the annotated point on the image; for point-plus-orientation data, the form is (x, y, θ), where θ indicates the direction. Similarly, for three-dimensional point-cloud data, the annotation form is (x, y, z, θ_x, θ_y, θ_z).
For a two-dimensional image, the annotation may be certain pixels on the image or a dotted line that indicates pose. For a three-dimensional point cloud, the annotation may be a point, or coordinates that indicate position and pose. The user selects a corresponding loss function according to the annotation type to train the deep neural model. For example, for point-type data the loss function can take the following form:
where n is the number of data samples and k is the data dimension (k = 2 for two-dimensional image data, k = 3 for three-dimensional point-cloud data); p_l is the position annotated by the user and s_l its confidence, which defaults to 1; p_θ and s_θ are results computed by the deep neural network and θ_l is the direction; for position-only annotations this term is absent.
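The patent presents the loss formula only as a figure that is not reproduced in this text. One plausible reading of the surrounding description (a confidence-weighted squared error over the k position dimensions, plus a direction term that is dropped for position-only annotations) can be sketched as follows; this is an assumption for illustration, not the patent's exact formula:

```python
def pose_loss(pred_pos, pred_theta, label_pos, label_theta,
              s_l=1.0, s_theta=1.0):
    """Confidence-weighted squared error over position and direction.

    pred_pos / label_pos: length-k sequences (k=2 for images, k=3 for
    point clouds); *_theta: direction values, or None for position-only
    annotations; s_l, s_theta: confidence weights (default 1).
    """
    loss = s_l * sum((p - l) ** 2 for p, l in zip(pred_pos, label_pos))
    if label_theta is not None:  # direction term absent for position-only labels
        loss += s_theta * (pred_theta - label_theta) ** 2
    return loss


def batch_loss(batch):
    """Average the per-sample loss over n annotated samples."""
    return sum(pose_loss(*sample) for sample in batch) / len(batch)
```

In practice this would be expressed with a tensor library so it is differentiable, but the scalar version shows the structure the description implies.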
The user uploads the annotated data through the system's graphical interface.
From the uploaded data, the system automatically divides the data into a training set, a validation set and a test set at a certain ratio (e.g. 8:1:1). The training set is used to train the deep neural network models; the validation set is used to filter out the best among the many searched architectures, i.e. the network architecture with the highest accuracy; and the test set is used to evaluate the best architecture and serves as the final scoring standard.
Implementation of the neural architecture search module:
Using the data sets above, the system establishes a search space, which may include the following parameters: the number, stride and size of convolution kernels, the number of convolutional layers, the number of hidden-layer parameters, whether skip connections are used, and so on.
The neural architecture search algorithm (popular choices include random search, Monte Carlo tree search, and reinforcement-learning-based search algorithms) samples different parameters from the search space to build candidate deep neural network architectures.
For each sampled network architecture, the system evaluates its performance on the validation set and records the test result.
After the search, the system takes the best architecture obtained in the search as the deep neural network model finally applied to the robot vision control system.
The performance of the finally obtained deep neural network is measured on the test set, and the result serves as its reference score.
The online application stage comprises the following steps:
The system acquires two-dimensional or three-dimensional vision data through peripherals such as an ordinary camera or an RGB-D camera.
Each frame of vision data is fed to the deep neural network model obtained in the previous steps to produce a result.
The system converts the result computed by the deep neural network into the target position and pose of the robot arm end effector.
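Converting a camera-frame detection into a base-frame target typically applies a hand-eye calibration transform. A minimal sketch, where the 4x4 calibration matrix `T_BASE_CAM` is an illustrative placeholder, not a value from the patent:

```python
# Convert a detection in the camera frame into a target position for the
# arm's end effector using a hand-eye calibration matrix T_base_cam
# (camera frame -> robot base frame).

def transform_point(T, p):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    x, y, z = p
    return tuple(
        T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3]
        for i in range(3)
    )

# Example calibration: camera mounted 0.5 m above the base, axes aligned.
T_BASE_CAM = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.5],
    [0.0, 0.0, 0.0, 1.0],
]


def detection_to_target(detection):
    """Map the network's (x, y, z) output to a base-frame target position."""
    return transform_point(T_BASE_CAM, detection)
```

Orientation outputs (Euler angles or quaternions) would be composed with the rotation part of the same calibration matrix in the same way.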
The system computes a control trajectory from the current position of the robot arm end effector and the target position.
The system sends execution commands and the robot arm starts working. For grasping or assembly tasks, the arm usually first moves near the target position and then executes subsequent actions along a certain trajectory.
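The approach-then-act pattern described above can be sketched as a two-phase waypoint list. This is a simplification: real controllers plan in joint space with velocity and acceleration limits, but the structure is the same:

```python
def linear_trajectory(current, target, approach_offset=0.05, steps=10):
    """Plan a simple two-phase trajectory: move to a point slightly above
    the target (the approach point), then descend linearly onto it."""
    approach = (target[0], target[1], target[2] + approach_offset)
    waypoints = []
    # Phase 1: interpolate from the current position to the approach point.
    for i in range(1, steps + 1):
        t = i / steps
        waypoints.append(tuple(c + t * (a - c)
                               for c, a in zip(current, approach)))
    # Phase 2: descend from the approach point to the target.
    for i in range(1, steps + 1):
        t = i / steps
        waypoints.append(tuple(a + t * (g - a)
                               for a, g in zip(approach, target)))
    return waypoints
```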
To compensate for errors introduced by vision calibration, hardware-precision limits and recognition results, the system can correct subsequent operations by reading the robot arm's force-sensor data and applying a force-feedback algorithm. The user can upload the data to their own computer or to a cloud server.
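A minimal proportional force-feedback correction consistent with this description might look like the following; the threshold and gain values are illustrative assumptions, not values from the patent:

```python
def correct_waypoint(waypoint, force_reading, max_force=5.0, gain=0.001):
    """Proportional force-feedback correction: if the measured contact
    force exceeds a threshold, back the next waypoint off along the
    direction of the force, proportionally to the excess."""
    fx, fy, fz = force_reading
    magnitude = (fx * fx + fy * fy + fz * fz) ** 0.5
    if magnitude <= max_force:
        return waypoint  # within tolerance, no correction needed
    scale = gain * (magnitude - max_force) / magnitude
    return tuple(w - scale * f for w, f in zip(waypoint, (fx, fy, fz)))
```

Each control cycle would read the sensor, correct the next waypoint, and send the corrected command, so unexpected contact forces are relieved instead of amplified.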
The present invention also provides a device for implementing the robot vision control method based on automatic deep network architecture search, comprising:
a memory for storing a computer program and the robot vision control method based on automatic deep network architecture search; and
a processor for executing the computer program and the robot vision control method based on automatic deep network architecture search, so as to realize the steps of the method.
The code or instructions may be software and/or firmware executed by processing circuitry including one or more processors, such as digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor" as used herein may refer to any of the foregoing structures or any other structure suitable for implementing the technology described herein. In addition, in some aspects, the functions described in this disclosure may be provided in software modules and hardware modules.
The present invention also provides a computer-readable storage medium carrying the robot vision control method based on automatic deep network architecture search; a computer program is stored on the computer-readable storage medium, and the computer program is executed by a processor to realize the steps of the robot vision control method based on automatic deep network architecture search.
The computer program product may form part of a computer-readable medium, which may include packaging materials. The computer-readable medium may comprise computer storage media such as random-access memory (RAM), read-only memory (ROM), non-volatile random-access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), flash memory, magnetic or optical data storage media, and the like. In some embodiments, an article of manufacture may comprise one or more computer-readable storage media.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be realized in other embodiments without departing from the spirit or scope of the invention. Therefore, the invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A robot vision control method based on automatic deep network architecture search, characterized in that the method comprises:
Step 1: obtain the image data needed to train a deep neural network;
Step 2: configure a graphical user interface and obtain the user's annotations of the image data;
Step 3: divide the image data into a training set, a validation set and a test set;
Step 4: establish a search space and search it for a deep neural network architecture using neural architecture search;
Step 5: during the search, test each candidate deep neural network architecture on the validation set and record the test results;
Step 6: compare the test results and take the architecture found in the search as the deep neural network model applied to the robot vision control system;
Step 7: deploy the trained deep neural network model in a preset application, acquire data through an RGB camera or depth camera, feed it to the model, and compute a recognition result;
Step 8: convert the recognition result into the target position and pose of the robot arm end effector;
Step 9: compute a motion trajectory from the robot arm end effector's current position and the target position, and send execution commands;
Step 10: correct the running trajectory according to the robot arm's current feedback information.
2. The robot vision control method based on automatic deep network architecture search according to claim 1, characterized in that
step 2 further comprises: for position-detection applications on two-dimensional image data, the annotation form is (x, y), where x and y are the horizontal and vertical coordinates of the annotated point on the image; for position-plus-orientation detection applications on two-dimensional image data, the annotation form is (x, y, θ), where θ indicates the direction.
3. The robot vision control method based on automatic deep network architecture search according to claim 1, characterized in that
step 2 further comprises: for position-detection applications on three-dimensional point-cloud data, the annotation form is (x, y, z); for position-plus-orientation detection applications on three-dimensional point-cloud data, the Euler-angle annotation form is (x, y, z, θ_x, θ_y, θ_z) and the quaternion annotation form is (x, y, z, q_x, q_y, q_z, w), where (x, y, z) is the coordinate of the annotated point in three-dimensional space;
(θ_x, θ_y, θ_z) are the rotations about the x, y and z axes, and (q_x, q_y, q_z, w) are the quaternion components.
4. The robot vision control method based on automatic deep network architecture search according to claim 1, characterized in that
step 3 further comprises: for users without a GPU, the system uploads the data to a cloud server for model training.
5. The robot vision control method based on automatic deep network architecture search according to claim 1, characterized in that
step 3 further comprises: the data collected by the user are divided into training, validation and test sets in an 8:1:1 ratio.
6. The robot vision control method based on automatic deep network architecture search according to claim 1, characterized in that
step 4 further comprises: the parameters of the search space are: the number, stride and size of convolution kernels, the number of convolutional layers, the number of hidden-layer neurons, whether skip connections are used, and the activation-function type.
7. The robot vision control method based on automatic deep network architecture search according to claim 1, characterized in that
the loss function used in step 5 is as follows:
where n is the number of data samples and k is the data dimension (k = 2 for image data, k = 3 for point-cloud data); p_l is the position annotated by the user and s_l its confidence, which defaults to 1;
p_θ and s_θ are results computed by the deep neural network, and θ_l is the direction; for position-only annotations, the direction term is absent.
8. The robot vision control method based on automatic deep network architecture search according to claim 1, characterized in that
step 10 further comprises: correcting the running trajectory according to the force-sensor data of the robot arm.
9. A device for implementing the robot vision control method based on automatic deep network architecture search, characterized by comprising:
a memory for storing a computer program and the robot vision control method based on automatic deep network architecture search; and
a processor for executing the computer program and the robot vision control method based on automatic deep network architecture search, so as to realize the steps of the robot vision control method based on automatic deep network architecture search according to any one of claims 1 to 8.
10. A computer-readable storage medium carrying the robot vision control method based on automatic deep network architecture search, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program is executed by a processor to realize the steps of the robot vision control method based on automatic deep network architecture search according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910118700.3A CN109840508A (en) | 2019-02-17 | 2019-02-17 | Robot vision control method based on automatic deep network architecture search, device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910118700.3A CN109840508A (en) | 2019-02-17 | 2019-02-17 | Robot vision control method based on automatic deep network architecture search, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109840508A true CN109840508A (en) | 2019-06-04 |
Family
ID=66884705
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910118700.3A Pending CN109840508A (en) | 2019-02-17 | 2019-02-17 | One robot vision control method searched for automatically based on the depth network architecture, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109840508A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110400315A (en) * | 2019-08-01 | 2019-11-01 | 北京迈格威科技有限公司 | A kind of defect inspection method, apparatus and system |
CN110428464A (en) * | 2019-06-24 | 2019-11-08 | 浙江大学 | Multi-class out-of-order workpiece robot based on deep learning grabs position and orientation estimation method |
CN110705695A (en) * | 2019-10-10 | 2020-01-17 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for searching model structure |
CN111126550A (en) * | 2019-12-25 | 2020-05-08 | 武汉科技大学 | Neural network molten steel temperature forecasting method based on Monte Carlo method |
CN111310704A (en) * | 2020-02-28 | 2020-06-19 | 联博智能科技有限公司 | Luggage van posture estimation method, luggage van posture estimation device and robot |
WO2021007743A1 (en) * | 2019-07-15 | 2021-01-21 | 富士通株式会社 | Method and apparatus for searching neural network architecture |
WO2021164276A1 (en) * | 2020-07-31 | 2021-08-26 | 平安科技(深圳)有限公司 | Target tracking method and apparatus, computer device, and storage medium |
WO2022022757A1 (en) | 2020-07-27 | 2022-02-03 | Y Soft Corporation, A.S. | A method for testing an embedded system of a device, a method for identifying a state of the device and a system for these methods |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107139179A (en) * | 2017-05-26 | 2017-09-08 | 西安电子科技大学 | A kind of intellect service robot and method of work |
CN108818537A (en) * | 2018-07-13 | 2018-11-16 | 南京工程学院 | A kind of robot industry method for sorting based on cloud deep learning |
CN109165249A (en) * | 2018-08-07 | 2019-01-08 | 阿里巴巴集团控股有限公司 | Data processing model construction method, device, server and user terminal |
- 2019-02-17: CN application CN201910118700.3A filed; published as CN109840508A (status: Pending)
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110428464A (en) * | 2019-06-24 | 2019-11-08 | 浙江大学 | Pose estimation method for robotic grasping of multi-class disordered workpieces based on deep learning |
WO2021007743A1 (en) * | 2019-07-15 | 2021-01-21 | 富士通株式会社 | Method and apparatus for searching neural network architecture |
CN110400315A (en) * | 2019-08-01 | 2019-11-01 | 北京迈格威科技有限公司 | Defect detection method, device and system |
CN110400315B (en) * | 2019-08-01 | 2020-05-05 | 北京迈格威科技有限公司 | Defect detection method, device and system |
CN110705695A (en) * | 2019-10-10 | 2020-01-17 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for searching model structure |
CN110705695B (en) * | 2019-10-10 | 2022-11-18 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for searching model structure |
CN111126550A (en) * | 2019-12-25 | 2020-05-08 | 武汉科技大学 | Neural network molten steel temperature forecasting method based on Monte Carlo method |
CN111310704A (en) * | 2020-02-28 | 2020-06-19 | 联博智能科技有限公司 | Luggage cart pose estimation method and device, and robot |
CN111310704B (en) * | 2020-02-28 | 2020-11-20 | 联博智能科技有限公司 | Luggage cart pose estimation method and device, and robot |
WO2022022757A1 (en) | 2020-07-27 | 2022-02-03 | Y Soft Corporation, A.S. | A method for testing an embedded system of a device, a method for identifying a state of the device and a system for these methods |
WO2021164276A1 (en) * | 2020-07-31 | 2021-08-26 | 平安科技(深圳)有限公司 | Target tracking method and apparatus, computer device, and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109840508A (en) | Robot vision control method based on automatic deep network architecture search, device and storage medium | |
US10740694B2 (en) | System and method for capture and adaptive data generation for training for machine vision | |
CN111208783B (en) | Action simulation method, device, terminal and computer storage medium | |
Wang et al. | DV-LOAM: Direct visual lidar odometry and mapping | |
CN109816773A (en) | Driving method, plug-in and terminal device for a skeleton model of a virtual character | |
CN113043267A (en) | Robot control method, device, robot and computer readable storage medium | |
CN104574357B (en) | System and method for locating datum marks with a known pattern | |
CN109949900B (en) | Three-dimensional pulse wave display method and device, computer equipment and storage medium | |
Xu et al. | RGB-D-based pose estimation of workpieces with semantic segmentation and point cloud registration | |
Cheng et al. | A vision-based robot grasping system | |
CN113221726A (en) | Hand posture estimation method and system based on visual and inertial information fusion | |
Cao et al. | Real-time gesture recognition based on feature recalibration network with multi-scale information | |
Navarro et al. | Integrating 3D reconstruction and virtual reality: A new approach for immersive teleoperation | |
Lee et al. | Control framework for collaborative robot using imitation learning-based teleoperation from human digital twin to robot digital twin | |
Deng et al. | A human–robot collaboration method using a pose estimation network for robot learning of assembly manipulation trajectories from demonstration videos | |
Mateo et al. | 3D visual data-driven spatiotemporal deformations for non-rigid object grasping using robot hands | |
Scheuermann et al. | Mobile augmented reality based annotation system: A cyber-physical human system | |
Kiyokawa et al. | Efficient collection and automatic annotation of real-world object images by taking advantage of post-diminished multiple visual markers | |
Chen et al. | A quick development toolkit for augmented reality visualization (QDARV) of a Factory | |
Grigorescu | Robust machine vision for service robotics | |
Jiang et al. | 6D pose annotation and pose estimation method for weak-corner objects under low-light conditions | |
Ji et al. | A 3D Hand Attitude Estimation Method for Fixed Hand Posture Based on Dual-View RGB Images | |
CN116580084B (en) | Industrial part rapid pose estimation method based on deep learning and point cloud | |
Ramasubramanian et al. | On the Evaluation of Diverse Vision Systems towards Detecting Human Pose in Collaborative Robot Applications | |
Liu et al. | RealDex: Towards Human-like Grasping for Robotic Dexterous Hand |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190604 |