CN110519582A - Inspection robot data acquisition system and data acquisition method - Google Patents
- Publication number
- CN110519582A CN201910755993.6A
- Authority
- CN
- China
- Prior art keywords
- image
- data
- identification
- module
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/246—Calibration of cameras
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an inspection robot data acquisition system and data acquisition method. After the robot reaches a specified inspection point, embedded deep-learning image processing is used to quickly identify the surrounding scene, recognize equipment, and register it; the gimbal angle is then adjusted by command so that equipment images can be acquired autonomously. The specific steps of the invention are as follows: image information is acquired by a binocular camera; the image is preprocessed in the processor (format conversion, resolution adjustment, smoothing filtering); target recognition is performed on the preprocessed image; the recognition data are uploaded to the cloud, where they are processed and analyzed; and the analysis results are fed back to the gimbal to correct the camera angle. By repeating this cycle, the system achieves equipment recognition in complex environments, equipment-component recognition, image acquisition, and maintenance monitoring.
Description
Technical field
The present invention relates to an inspection robot data acquisition system and data acquisition method, and belongs to the field of image recognition.
Background technique
An inspection robot is a new technology for intelligent substation inspection. It has the flexibility and intelligence of manual inspection while compensating for manual inspection's defects, such as poor timeliness and high error rates. Inspection tasks include infrared temperature measurement of transformer equipment, meter reading recognition, and equipment defect recognition; these require the integration of multiple fields to achieve diversified, intelligent detection.
During inspection, the robot must acquire target image information, recognize target objects, and upload the resulting data. Convolutional neural networks (Convolutional Neural Network, CNN) are currently one of the core algorithms in image recognition, and perform stably when large amounts of training data are available. Accelerating a convolutional neural network in hardware enables recognition of target objects, and can be used to identify substation equipment and collect data.
The development of Internet of Things technology enables real-time data transmission between the robot and the cloud. In the cloud, intelligent techniques such as cloud computing, fuzzy recognition, and big data are used to analyze and process massive data, exercising intelligent control over the inspection robot, e.g. correcting its route and viewing angle.
Summary of the invention
In view of the above prior art, the technical problem to be solved by the present invention is to provide an inspection robot data acquisition system and data acquisition method that achieve equipment recognition in complex environments, equipment-component recognition, image acquisition, and maintenance monitoring.
To solve the above technical problem, an inspection robot data acquisition system of the invention comprises a binocular camera, a target recognition and matching module, a wireless network transmission module, and a camera calibration and correction unit. The binocular camera acquires image information. The target recognition and matching module comprises an image processing module and an image stereo matching module; it acquires, stores, and uploads the image data captured by the binocular camera, provides the hardware resources for running the multi-target recognition network model, and performs stereo matching based on a convolutional neural network. The camera calibration and correction module adjusts parameters such as the camera's height, direction, and angle according to correction data fed back from the cloud, completing the calibration of the camera. The wireless network transmission module is a transceiver; the module has an IPEX interface for an external antenna, is fitted with a professional RF shield, and has multiple communication channels supporting multi-point communication, grouping, and frequency hopping.
A data acquisition method based on the above inspection robot data acquisition system comprises the following steps:
Step 1: after the inspection robot reaches the designated location, surrounding images are acquired by the binocular camera and transmitted to the image processing module. The acquired images are high-definition images with a resolution of at least 800 × 600, suitable for gimbal processing and recognition.
Step 2: after the image processing module receives the images from the binocular camera, it performs format conversion, resolution adjustment, and smoothing filtering, completing the preprocessing of the images.
Step 3: after the image stereo matching module based on the convolutional neural network receives the preprocessed images, it recognizes and position-matches the targets in the images. The module collects data from the images, including meter readings, equipment components, and line defect information.
Step 4: the images and data extracted by the image stereo matching module are uploaded to the cloud through the wireless network transmission module, where the data are stored in real time and made available to substation staff for real-time monitoring.
Step 5: in the cloud, the information transmitted by the system is acquired in real time over the local area network; cloud computing, fuzzy recognition, and big-data techniques are used to analyze and process the system's data, intelligent control is applied to the target, and feedback correction information is passed back to the robot.
Step 6: according to the control information fed back from the cloud, the camera's height and viewing angle are corrected to meet image acquisition requirements.
The inspection robot's data acquisition system repeats the above steps during inspection until the preset task is complete.
The invention has the following advantages. The present invention implements the data acquisition subsystem of an inspection robot, as an application of hardware-accelerated convolutional neural networks. It mainly concerns image recognition: a popular convolutional neural network (CNN) is accelerated in hardware to recognize target objects, and its concrete implementation spans artificial intelligence, pattern recognition, the Internet of Things, and embedded development. After the robot reaches a specified inspection point, the inspection data acquisition subsystem uses embedded deep-learning image processing to quickly identify the surrounding scene, recognize and register equipment (components), adjust the gimbal angle by command, and autonomously acquire equipment (component) images. It thereby achieves equipment recognition in complex environments, component recognition, image acquisition, and maintenance monitoring.
The processor of this gimbal is an algorithm platform based on a convolutional neural network, enabling the inspection robot to intelligently recognize targets such as equipment and meters in the substation.
The core of the gimbal's processor is a Xilinx Zynq UltraScale+ MPSoC family chip, which computes quickly and performs stably.
The image stereo matching module of the gimbal processor can recognize multiple targets in complex environments with a recognition rate of at least 90%, meeting industrial requirements.
The gimbal can exchange data with the cloud, uploading data in real time, and the inspection robot can be corrected according to cloud feedback.
Detailed description of the invention
Fig. 1 is the architecture diagram of the inspection robot data acquisition subsystem gimbal built by the present invention.
Fig. 2 is the workflow diagram of the inspection robot data acquisition subsystem of the present invention.
Fig. 3 is the hardware-accelerated implementation flow of the convolutional neural network in the present invention.
Fig. 4 is the convolutional neural network framework selected in the present invention.
Fig. 5 is the hardware-level implementation of the convolutional neural network acceleration platform of the present invention.
Fig. 6 is the parallel structure across the eight parallel convolution-window channels realized by the present invention.
Fig. 7 is the intra-window parallel implementation module realized by the present invention.
Specific embodiment
To make the purpose, technical solution, and effects of the present invention clearer, the invention is further elaborated below with reference to the accompanying drawings.
Fig. 1 shows the overall architecture of the inspection robot gimbal built by the present invention. After the robot reaches a specified inspection point, the inspection data acquisition subsystem uses embedded deep-learning image processing to quickly identify the surrounding scene, recognize and register equipment (components), adjust the gimbal angle by command, and autonomously acquire equipment (component) images, achieving equipment recognition in complex environments, component recognition, image acquisition, and maintenance monitoring. The core processor carries out the hardware-accelerated implementation of the convolutional neural network. As shown in Fig. 1, the functions and performance requirements of each part of the inspection robot's data acquisition gimbal architecture are as follows:
(1) Binocular camera
The stereo matching problem is the process of determining, from the reference image and the target image captured by the binocular camera, the position on the target image corresponding to each point of the reference image.
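To illustrate the correspondence search just described, the following is a minimal pure-Python sketch of classical block matching with a sum-of-absolute-differences (SAD) cost. This is a didactic stand-in only: the patent performs stereo matching with a convolutional neural network, and all names and parameter values here are illustrative.

```python
def sad_disparity(left, right, y, x, window=1, max_disp=8):
    """Estimate the disparity of pixel (y, x) in the left (reference)
    image by finding the best horizontal match in the right image,
    using a sum-of-absolute-differences cost over a small window."""
    h, w = len(left), len(left[0])
    best_d, best_cost = 0, float("inf")
    for d in range(min(max_disp, x) + 1):          # candidate disparities
        cost = 0
        for dy in range(-window, window + 1):
            for dx in range(-window, window + 1):
                yy = min(max(y + dy, 0), h - 1)    # clamp at image borders
                xx = min(max(x + dx, 0), w - 1)
                xr = min(max(xx - d, 0), w - 1)    # shifted column in right image
                cost += abs(left[yy][xx] - right[yy][xr])
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

For a textured point, the disparity found this way maps directly to depth via the camera baseline and focal length; CNN-based matchers replace the hand-crafted SAD cost with a learned similarity.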
Performance requirements: CUDA-accelerated real-time depth output; adaptive indoor/outdoor light sensitivity; hardware-level binocular frame synchronization; reduced image distortion during moving shots.
Product parameters: size: 165 × 31.5 × 29.6; frame rate: ≥ 30 FPS; resolution: ≥ 800 × 600; IR detectable distance: 3 m; motion sensing: 6-axis IMU; working distance: 0.8-5 m; power consumption: 3.5 W @ 5 V DC from USB.
(2) Hardware platform
The hardware platform acquires the image data, recognizes the targets, and uploads the matched images to the cloud; techniques such as cloud computing and big data verify the registration results, and feedback commands adjust the gimbal angle, completing the autonomous acquisition of equipment (component) images.
1. Target recognition and matching module
This module acquires, stores, and uploads the image data; provides the hardware resources for running the multi-target recognition network model; performs stereo matching based on the convolutional neural network; and can recognize multiple target objects: bushings, connectors, reactors, transformers, and their auxiliary devices.
Product parameters: power consumption: 6 W @ 5 V DC from USB; operating temperature: -10 °C to 60 °C; communication and networking: RGMII high-speed serial interface, RJ45 Ethernet connector; recognition rate: ≥ 90%.
2. Camera calibration and correction module
According to the correction data fed back from the cloud, this module adjusts parameters such as the camera's height, direction, and angle, completing the calibration of the camera.
3. Wireless network transmission module
A high-rate wireless transceiver module; it has an IPEX interface for an external antenna; it suits a variety of application scenarios; a professional RF shield provides interference and static protection; and multiple communication channels meet application needs such as multi-point communication, grouping, and frequency hopping.
Product parameters: operating band: 2400-2525 MHz; receiving sensitivity: -95 ± 6 dBm; air rate: 250 kbps to 2 Mbps; measured range: ≥ 2 km; operating temperature: -40 °C to 85 °C.
The specific implementation steps of the gimbal data acquisition system are shown in Fig. 2: image information is acquired by the binocular camera; the image is preprocessed in the processor (format conversion, resolution adjustment, smoothing filtering); target recognition is performed on the preprocessed image; the recognition data are uploaded to the cloud, where they are processed and analyzed; and the analysis results are fed back to the gimbal to correct the camera angle. By repeating this cycle, equipment recognition in complex environments, component recognition, image acquisition, and maintenance monitoring are achieved. The steps are as follows:
Step 1: after the inspection robot reaches the designated location, surrounding images are acquired by the binocular camera and transmitted to the gimbal's hardware processor. The acquired images are high-definition images with a resolution of at least 800 × 600, suitable for gimbal processing and recognition.
Step 2: after the gimbal receives the images from the binocular camera, the images are format-converted, resolution-adjusted, and smoothed to facilitate subsequent target recognition, completing the preprocessing of the images.
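The preprocessing in step 2 can be sketched with standard-library Python only. The nearest-neighbour downscale and 3×3 box filter below are generic stand-ins, since the patent does not specify which resampling or smoothing algorithms are used.

```python
def downscale(img, factor):
    """Nearest-neighbour downscaling: keep every `factor`-th pixel."""
    return [row[::factor] for row in img[::factor]]

def box_smooth(img):
    """3x3 box (mean) filter with edge clamping - a simple stand-in
    for the smoothing filter named in step 2."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx]
            out[y][x] = acc // 9          # integer mean of the 9 samples
    return out
```

In practice a Gaussian or median filter might be chosen instead of the box filter; the pipeline shape (resample, then smooth, then hand off to recognition) is what matters here.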
Step 3: after the CNN-based image stereo matching module receives the preprocessed images, it recognizes and position-matches the targets in the images. The module can collect a great deal of data from the images, including meter readings, equipment components, and line defect information.
The CNN-based image stereo matching module in this step is the focus of the invention and the core unit of the gimbal's data acquisition processor. Its implementation flow is shown in Fig. 3: as the figure shows, the module is realized jointly by the PC side and the hardware processor.
First, a convolutional neural network framework that can be realized in hardware is built on the PC. The framework must achieve the purpose of target recognition while remaining feasible for hardware implementation. After the framework is built, it is trained on the prepared dataset; through repeated tuning, the best convolutional neural network configuration is obtained. In the experimental stage, the present invention uses a scaled-down version of YOLO v3, whose structure is shown in Fig. 4. After the network is trained, network parameters such as weights and biases are extracted from the final model to initialize the gimbal's hardware processor.
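The patent states that weights and biases are extracted from the trained network to initialize the hardware processor, but does not specify a number format. A common companion step, shown here purely as an assumption, is quantizing the floating-point parameters to signed fixed-point (e.g. Q8.8) before loading them into FPGA memory.

```python
def to_fixed_point(weights, frac_bits=8, word_bits=16):
    """Quantize floating-point weights to signed fixed-point integers
    (Q-format), as is typical when loading CNN parameters into FPGA
    block RAM. The Q8.8 format is an illustrative assumption - the
    patent does not name a number format."""
    scale = 1 << frac_bits
    lo = -(1 << (word_bits - 1))          # saturation bounds for the word size
    hi = (1 << (word_bits - 1)) - 1
    return [max(lo, min(hi, round(w * scale))) for w in weights]

def from_fixed_point(q, frac_bits=8):
    """Recover approximate floating-point values, e.g. to verify that
    quantization error stays within one least-significant bit."""
    return [v / (1 << frac_bits) for v in q]
```

The round trip loses at most half an LSB per weight, which is why the hardware results in the verification step later can match the PC results to within recognition accuracy.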
Second, the main control chip of the gimbal's hardware processor is a Xilinx Zynq UltraScale+ MPSoC family chip; a hardware computing platform built with this chip family meets the embedded-AI requirements for running common convolutional neural networks (CNNs).
The low-level implementation of the hardware computing platform is shown in Fig. 5; it mainly completes image processing and storage, parallel convolution computation, and data output. In the convolution computation core module, the present invention successfully ports the selected convolutional neural network to the hardware computing platform. The convolutional neural network framework is composed of convolutional layers, activation functions, pooling layers, and fully connected layers, with large numbers of similar parallel convolution operations between layers. Constrained by hardware resources, the present invention realizes convolution with a parallelism of 8; the parallel structure across convolution windows is shown in Fig. 6. Each layer also has repeated parallel operations inside each convolution window, as shown in Fig. 7. By repeatedly reusing the structures of Figs. 6 and 7, the forward pass of the convolutional neural network is completed and the final recognition result is output.
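The eight-way window parallelism of Fig. 6 computes several convolution outputs concurrently in hardware. The sequential Python sketch below produces the same arithmetic result for one bank of kernels over a single input channel; multi-channel input and the 8-at-a-time scheduling are omitted for brevity, so this shows the computation being parallelized, not the parallel hardware itself.

```python
def conv2d_valid(image, kernels, biases):
    """'Valid' 2D convolution of one input channel with a bank of
    kernels. On the FPGA the kernels of one bank run concurrently
    (Fig. 6); this sequential loop computes the identical result."""
    kh, kw = len(kernels[0]), len(kernels[0][0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = []
    for k, b in zip(kernels, biases):
        fmap = [[sum(image[y + i][x + j] * k[i][j]
                     for i in range(kh) for j in range(kw)) + b
                 for x in range(ow)] for y in range(oh)]
        out.append(fmap)                  # one feature map per kernel
    return out
```

Because every output pixel of every feature map depends only on the shared input window and its own kernel, the eight hardware channels can each take one kernel (or one window position) with no data hazards, which is what makes the Fig. 6 structure straightforward to replicate.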
After successful initialization, the recognition results of the hardware-accelerated convolutional neural network are consistent with those obtained on the PC. Moreover, the Xilinx Zynq UltraScale+ MPSoC family chip computes in parallel, so it recognizes targets far faster than the PC and can meet target recognition requirements under complex conditions.
Finally, after the hardware-accelerated platform is built, its performance must be evaluated and its structure optimized, and communication with the other modules, such as the image acquisition module and the network transmission module, must be completed.
Step 4: the images and data extracted by the stereo matching module are uploaded to the cloud through the network transmission module; the data are stored in real time and made available to substation staff for real-time monitoring.
Step 5: the cloud accurately acquires the information transmitted by the gimbal in real time over the local area network; intelligent techniques such as cloud computing, fuzzy recognition, and big data analyze and process the massive gimbal data, intelligent control is applied to the target, and feedback correction information is passed back to the robot.
Step 6: according to the control information fed back from the cloud, parameters such as the camera's height and viewing angle are corrected to facilitate acquisition of the required images.
The data acquisition gimbal of the inspection robot of the invention repeats the above steps during inspection until the preset task is complete.
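The six steps repeated during a patrol can be summarized as a control loop. All collaborator objects and method names below are hypothetical stand-ins for the modules described above, not an API defined by the patent.

```python
def inspection_cycle(camera, processor, uplink, gimbal, waypoints):
    """One patrol pass: for each inspection point, acquire, preprocess,
    recognize, upload, and apply cloud feedback to the gimbal.
    Every collaborator (camera, processor, uplink, gimbal) is a
    hypothetical stand-in for a module described in the patent."""
    results = []
    for point in waypoints:
        frame = camera.capture(point)            # step 1: binocular capture
        frame = processor.preprocess(frame)      # step 2: format/resolution/smoothing
        targets = processor.recognise(frame)     # step 3: CNN stereo matching + ID
        feedback = uplink.send(point, targets)   # steps 4-5: upload, cloud analysis
        gimbal.apply(feedback)                   # step 6: height/angle correction
        results.append((point, targets))
    return results
```

Structuring the patrol as a pure loop over waypoints makes each cycle independent, so a failed acquisition at one point can be retried or skipped without disturbing the rest of the route.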
Claims (2)
1. An inspection robot data acquisition system, characterized in that it comprises a binocular camera, a target recognition and matching module, a wireless network transmission module, and a camera calibration and correction unit. The binocular camera acquires image information. The target recognition and matching module comprises an image processing module and an image stereo matching module; it acquires, stores, and uploads the image data captured by the binocular camera, provides the hardware resources for running the multi-target recognition network model, and performs stereo matching based on a convolutional neural network. The camera calibration and correction module adjusts parameters such as the camera's height, direction, and angle according to correction data fed back from the cloud, completing the calibration of the camera. The wireless network transmission module is a transceiver; the module has an IPEX interface for an external antenna, is fitted with a professional RF shield, and has multiple communication channels for multi-point communication, grouping, and frequency hopping.
2. A data acquisition method based on the inspection robot data acquisition system of claim 1, characterized in that it comprises the following steps:
Step 1: after the inspection robot reaches the designated location, surrounding images are acquired by the binocular camera and transmitted to the image processing module; the acquired images are high-definition images with a resolution of at least 800 × 600, suitable for gimbal processing and recognition.
Step 2: after the image processing module receives the images from the binocular camera, it performs format conversion, resolution adjustment, and smoothing filtering, completing the preprocessing of the images.
Step 3: after the image stereo matching module based on the convolutional neural network receives the preprocessed images, it recognizes and position-matches the targets in the images; the module collects data from the images, including meter readings, equipment components, and line defect information.
Step 4: the images and data extracted by the image stereo matching module are uploaded to the cloud through the wireless network transmission module, where the data are stored in real time and made available to substation staff for real-time monitoring.
Step 5: in the cloud, the information transmitted by the system is acquired in real time over the local area network; cloud computing, fuzzy recognition, and big-data techniques analyze and process the system's data, intelligent control is applied to the target, and feedback correction information is passed back to the robot.
Step 6: according to the control information fed back from the cloud, the camera's height and viewing angle are corrected according to image acquisition requirements.
The inspection robot's data acquisition system repeats the above steps during inspection until the preset task is complete.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910755993.6A CN110519582A (en) | 2019-08-16 | 2019-08-16 | Inspection robot data acquisition system and data acquisition method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910755993.6A CN110519582A (en) | 2019-08-16 | 2019-08-16 | Inspection robot data acquisition system and data acquisition method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110519582A | 2019-11-29 |
Family
ID=68626272
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910755993.6A Pending CN110519582A (en) | Inspection robot data acquisition system and data acquisition method | 2019-08-16 | 2019-08-16 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110519582A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111063051A (en) * | 2019-12-20 | 2020-04-24 | 深圳市优必选科技股份有限公司 | Communication system of inspection robot |
CN111698418A (en) * | 2020-04-17 | 2020-09-22 | 广州市讯思视控科技有限公司 | Industrial intelligent camera system based on deep learning configuration cloud platform |
CN112497198A (en) * | 2021-02-03 | 2021-03-16 | 北京创泽智慧机器人科技有限公司 | Intelligent inspection robot based on enterprise safety production hidden danger investigation |
CN112668442A (en) * | 2020-12-23 | 2021-04-16 | 南京明德软件有限公司 | Data acquisition and networking method based on intelligent image processing |
CN113505685A (en) * | 2021-07-06 | 2021-10-15 | 浙江大华技术股份有限公司 | Monitoring equipment installation positioning method and device, electronic equipment and storage medium |
CN116442219A (en) * | 2023-03-24 | 2023-07-18 | 东莞市新佰人机器人科技有限责任公司 | Intelligent robot control system and method |
CN116872233A (en) * | 2023-09-07 | 2023-10-13 | 泉州师范学院 | Campus inspection robot and control method thereof |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102572486A (en) * | 2012-02-06 | 2012-07-11 | 清华大学 | Acquisition system and method for stereoscopic video |
CN103400392A (en) * | 2013-08-19 | 2013-11-20 | 山东鲁能智能技术有限公司 | Binocular vision navigation system and method based on inspection robot in transformer substation |
CN106125744A (en) * | 2016-06-22 | 2016-11-16 | 山东鲁能智能技术有限公司 | The Intelligent Mobile Robot cloud platform control method of view-based access control model servo |
CN106709950A (en) * | 2016-11-28 | 2017-05-24 | 西安工程大学 | Binocular-vision-based cross-obstacle lead positioning method of line patrol robot |
US20190096057A1 (en) * | 2017-05-11 | 2019-03-28 | Jacob Nathaniel Allen | Object inspection system and method for inspecting an object |
CN109579825A (en) * | 2018-11-26 | 2019-04-05 | 江苏科技大学 | Robot positioning system and method based on binocular vision and convolutional neural networks |
- 2019
- 2019-08-16: CN application CN201910755993.6A filed, patent CN110519582A/en, status Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102572486A (en) * | 2012-02-06 | 2012-07-11 | 清华大学 | Acquisition system and method for stereoscopic video |
CN103400392A (en) * | 2013-08-19 | 2013-11-20 | 山东鲁能智能技术有限公司 | Binocular vision navigation system and method based on inspection robot in transformer substation |
CN106125744A (en) * | 2016-06-22 | 2016-11-16 | 山东鲁能智能技术有限公司 | The Intelligent Mobile Robot cloud platform control method of view-based access control model servo |
CN106709950A (en) * | 2016-11-28 | 2017-05-24 | 西安工程大学 | Binocular-vision-based cross-obstacle lead positioning method of line patrol robot |
US20190096057A1 (en) * | 2017-05-11 | 2019-03-28 | Jacob Nathaniel Allen | Object inspection system and method for inspecting an object |
CN109579825A (en) * | 2018-11-26 | 2019-04-05 | 江苏科技大学 | Robot positioning system and method based on binocular vision and convolutional neural networks |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111063051A (en) * | 2019-12-20 | 2020-04-24 | 深圳市优必选科技股份有限公司 | Communication system of inspection robot |
CN111698418A (en) * | 2020-04-17 | 2020-09-22 | 广州市讯思视控科技有限公司 | Industrial intelligent camera system based on deep learning configuration cloud platform |
CN112668442A (en) * | 2020-12-23 | 2021-04-16 | 南京明德软件有限公司 | Data acquisition and networking method based on intelligent image processing |
CN112668442B (en) * | 2020-12-23 | 2022-01-25 | 南京市计量监督检测院 | Data acquisition and networking method based on intelligent image processing |
CN112497198A (en) * | 2021-02-03 | 2021-03-16 | 北京创泽智慧机器人科技有限公司 | Intelligent inspection robot based on enterprise safety production hidden danger investigation |
CN113505685A (en) * | 2021-07-06 | 2021-10-15 | 浙江大华技术股份有限公司 | Monitoring equipment installation positioning method and device, electronic equipment and storage medium |
CN116442219A (en) * | 2023-03-24 | 2023-07-18 | 东莞市新佰人机器人科技有限责任公司 | Intelligent robot control system and method |
CN116442219B (en) * | 2023-03-24 | 2023-11-03 | 东莞市新佰人机器人科技有限责任公司 | Intelligent robot control system and method |
CN116872233A (en) * | 2023-09-07 | 2023-10-13 | 泉州师范学院 | Campus inspection robot and control method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110519582A (en) | Inspection robot data acquisition system and data acquisition method | |
Chamoso et al. | UAVs applied to the counting and monitoring of animals | |
CN108805258A (en) | A kind of neural network training method and its device, computer server | |
CN107911429A (en) | A kind of online traffic flow monitoring method in unmanned plane high in the clouds based on video | |
CN110134147A (en) | A kind of autonomous paths planning method and device of plant protection drone | |
CN111008733B (en) | Crop growth control method and system | |
CN109353504B (en) | Unmanned aerial vehicle intelligent spraying system and method based on prescription chart | |
CN110675395A (en) | Intelligent on-line monitoring method for power transmission line | |
CN111985352B (en) | AI front-end substation inspection video real-time identification method and system | |
CN105843147A (en) | Smart agriculture monitoring and management system | |
CN111046943A (en) | Method and system for automatically identifying state of isolation switch of transformer substation | |
CN112528912A (en) | Crop growth monitoring embedded system and method based on edge calculation | |
CN115494733A (en) | Underwater robot self-adaptive control method based on gazebo | |
CN112052736A (en) | Cloud computing platform-based field tea tender shoot detection method | |
CN113349188B (en) | Lawn and forage precise weeding method based on cloud weeding spectrum | |
CN107392398A (en) | A kind of agricultural management method, mist calculating platform and system | |
Xin et al. | Development of vegetable intelligent farming device based on mobile APP | |
CN212459378U (en) | Ground object hyperspectral meter remote sensing land utilization sample acquisition instrument based on unmanned aerial vehicle | |
CN112001290B (en) | Rice planthopper migration path prediction method based on YOLO algorithm | |
CN115297303B (en) | Image data acquisition and processing method and device suitable for power grid power transmission and transformation equipment | |
CN116824369A (en) | Litchi diseases real-time detection system and method based on edge calculation and yolov7-tiny | |
CN110113575A (en) | A kind of agriculture feelings information real-time monitoring platform based on NB-IoT | |
CN114545833B (en) | Intelligent interactive processing system of facility agriculture based on internet of things | |
CN114898361A (en) | Peach orchard fruit state identification and counting method and system | |
CN115922697A (en) | Intelligent robot automatic inspection method based on transformer substation digital twinning technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20191129 |