CN110443197A - A kind of visual scene intelligent Understanding method and system - Google Patents

A kind of visual scene intelligent Understanding method and system

Info

Publication number
CN110443197A
Authority
CN
China
Prior art keywords
scene
data
module
visual
case
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910719181.6A
Other languages
Chinese (zh)
Inventor
谭泽汉
邓海燕
陈彦宇
马雅奇
谭龙田
周慧子
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai, Zhuhai Lianyun Technology Co Ltd filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN201910719181.6A priority Critical patent/CN110443197A/en
Publication of CN110443197A publication Critical patent/CN110443197A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/61 Scene description

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a visual scene understanding method comprising the following steps: S10: acquiring current scene data; S20: processing the current scene data with a deep learning method and traditional image analysis methods to obtain the foreground, background and context features of the current scene; S30: learning the context data of different scenes with the deep learning method, so as to judge the situation of the current scene and/or predict the situation about to occur and formulate countermeasure plans. The application combines traditional image analysis methods with deep learning methods to obtain the foreground, background and context parameters of a scene, and learns how the environmental parameters of different scenes change, so as to judge the situation of the current scene or predict the situation about to occur and provide optional handling schemes for that situation. When the scene is unfavorable to the user's preferences, the user can take countermeasures based on the optional handling schemes.

Description

Visual scene intelligent understanding method and system
Technical field
The present invention relates to the technical field of vision detection, and in particular to a visual scene intelligent understanding method and system.
Background technique
Existing visual scene understanding methods are only concerned with target detection, scene segmentation, target tracking and the like. By judging the detected targets, the user can learn where the targets are located and what their morphological features are, but such methods cannot collect and display the actual foreground and background parameters of the current environment, and cannot remind the user of situations that may occur. As a result, it is difficult for the user to assess the current scene from its context information and to take preventive countermeasures as early as possible.
Summary of the invention
To solve the above problems, the present invention provides a visual scene intelligent understanding method and system.
In a first aspect, the application provides a visual scene intelligent understanding system, comprising:
a data acquisition module, configured to acquire foreground, background and context parameter data of the current scene;
a data storage module, configured to store the data collected by the data acquisition module;
a visual processing and analysis module, comprising an intelligent learning module and an image processing module, wherein the image processing module analyzes and processes the collected data with traditional image processing methods, and the intelligent learning module analyzes and processes the collected data with a deep learning method, so as to obtain the foreground, background and context features of the current scene;
and the intelligent learning module learns the context data of different scenes, so as to judge the situation of the current scene and/or predict the situation about to occur and formulate countermeasure plans.
In one embodiment, the intelligent learning module analyzes and processes the data by means of convolution and pooling computations, using a model trained with the deep learning method.
In one embodiment, the trained model is obtained by performing model learning and training on a data set containing a large number of positive and negative samples, using a deep neural network structure.
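By way of illustration only (the disclosure does not fix a concrete network), a minimal PyTorch sketch of the kind of convolution-and-pooling classifier described above, trained on positive and negative scene samples, could look as follows; the layer sizes, the 224x224 input resolution and the random stand-in batch are assumptions, not part of the patent:

    import torch
    import torch.nn as nn

    class SceneClassifier(nn.Module):
        # Convolution + pooling feature extractor with a binary positive/negative head.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                              # pooling computation
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 56 * 56, 2)            # assumes 224x224 inputs

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    model = SceneClassifier()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One illustrative training step; random tensors stand in for labeled scene images.
    images = torch.randn(4, 3, 224, 224)
    labels = torch.tensor([1, 0, 1, 0])                       # 1 = positive sample, 0 = negative sample
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()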
In one embodiment, when the data acquisition module collects scene data other than the scene the user habitually occupies, the intelligent learning module analyzes that data: if it matches the features of the user's habitual scene, it is used as a positive sample; otherwise it is used as a negative sample.
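The labelling rule in this embodiment could be sketched as below; the cosine-similarity measure, the 0.8 threshold and the function name label_scene_sample are illustrative assumptions rather than the patented method:

    import numpy as np

    def label_scene_sample(scene_features, habitual_features, threshold=0.8):
        # A newly collected scene becomes a positive sample when its feature vector is
        # close enough to the user's habitual scene, otherwise a negative sample.
        a = np.asarray(scene_features, dtype=float)
        b = np.asarray(habitual_features, dtype=float)
        similarity = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        return "positive" if similarity >= threshold else "negative"

    print(label_scene_sample([0.9, 0.1, 0.4], [0.8, 0.2, 0.5]))  # -> positive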
In one embodiment, the visual scene intelligent understanding system further comprises a communication module, which communicates with the user terminal and sends the collected data, the judgment result and the countermeasure plans to the user.
In one embodiment, the image processing and visual analysis identification module analyzes and processes the collected data with threshold segmentation, color picking, brightness contrast, Canny operator contour extraction and image area calculation.
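A minimal OpenCV sketch of such a traditional pass, assuming the listed operations are applied to a single BGR frame (the function name and the use of the gray-level standard deviation as a brightness-contrast proxy are assumptions), could look as follows:

    import cv2
    import numpy as np

    def traditional_scene_features(image_bgr):
        # Threshold segmentation, Canny contour extraction, contour area calculation,
        # a simple brightness-contrast measure and the mean color of the frame.
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        edges = cv2.Canny(gray, 100, 200)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        areas = [cv2.contourArea(c) for c in contours]
        return {
            "foreground_ratio": float(np.count_nonzero(mask)) / mask.size,
            "largest_contour_area": max(areas) if areas else 0.0,
            "brightness_contrast": float(gray.std()),
            "mean_color_bgr": image_bgr.reshape(-1, 3).mean(axis=0).tolist(),
        }

    features = traditional_scene_features(np.full((240, 320, 3), 128, dtype=np.uint8))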
In one embodiment, the deep learning methods include RCNN target detection and FCN foreground segmentation.
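The disclosure names RCNN detection and FCN segmentation without fixing an implementation; as a stand-in sketch, torchvision's pretrained Faster R-CNN (an R-CNN family detector) and FCN models could be used as follows, assuming torchvision 0.13 or later and a network connection for the weight download:

    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.models.segmentation import fcn_resnet50

    detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()   # R-CNN family target detector
    segmenter = fcn_resnet50(weights="DEFAULT").eval()             # FCN semantic segmenter

    image = torch.rand(3, 480, 640)                                # placeholder for an acquired frame
    with torch.no_grad():
        detections = detector([image])[0]                          # boxes, labels, scores of foreground targets
        seg_logits = segmenter(image.unsqueeze(0))["out"]
    foreground_mask = seg_logits.argmax(dim=1)                     # per-pixel class map used as a foreground mask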
In one embodiment, the data acquisition module comprises a photosensitive sensor, a sound sensor, a temperature and humidity sensor and an image acquisition module, wherein:
the photosensitive sensor is used to acquire the brightness contrast data of the scene;
the sound sensor is used to acquire sound data;
the temperature and humidity sensor is used to acquire temperature and humidity data;
the image acquisition module is used to acquire the number, position and type data of people, objects and events in the scene image.
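One way to represent a single acquisition-module sample combining these readings is sketched below; the field names and types are illustrative assumptions:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SceneRecord:
        brightness_contrast: float      # photosensitive sensor
        sound_level_db: float           # sound sensor
        temperature_c: float            # temperature and humidity sensor
        humidity_percent: float
        detected_objects: List[dict] = field(default_factory=list)  # people/objects/events: number, position, type

    record = SceneRecord(0.42, 55.0, 26.5, 60.0,
                         [{"type": "person", "position": (120, 80), "count": 1}])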
In a second aspect, the application provides a visual scene intelligent understanding method, comprising the following steps:
S10: acquiring foreground, background and context parameter data of the current scene;
S20: processing the current scene data with a deep learning method and traditional image analysis methods to obtain the foreground, background and context features of the current scene;
S30: learning the context data of different scenes with the deep learning method, so as to judge the situation of the current scene and/or predict the situation about to occur and formulate countermeasure plans.
In one embodiment, in step S30, judging the situation of the current scene includes:
scene foreground target positioning and detection, scene background information extraction, the present or predicted situation of the three-dimensional space, and optional handling schemes for the situation about to occur.
In one embodiment, after step S30 the method further includes step S40: storing the situation estimate of the current scene as model training data for the next round of deep learning.
Compared with the prior art, the advantages of the present invention are as follows: the application combines traditional image analysis methods with deep learning methods to obtain the foreground, background and context parameters of a scene, and learns how the environmental parameters of different scenes change, so as to judge the situation of the current scene or predict the situation about to occur and provide optional handling schemes for that situation. This helps the user handle and prevent the situation currently faced; when the scene is unfavorable to the user's preferences, the user can take countermeasures based on the optional handling schemes.
Detailed description of the invention
The invention will be described in more detail below based on embodiments and with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a visual scene intelligent understanding system according to the application.
Fig. 2 is a first flow chart of a visual scene intelligent understanding method according to the application.
Fig. 3 is a second flow chart of a visual scene intelligent understanding method according to the application.
Specific embodiment
The present invention will be further described with reference to the accompanying drawings.
As shown in Fig. 1, a visual scene intelligent understanding system according to the present invention includes a data acquisition module, a data storage module and a visual processing and analysis module.
The data acquisition module is used to acquire foreground, background and context parameter data of the current scene. The data storage module is used to store the data collected by the data acquisition module. The visual processing and analysis module in turn includes an intelligent learning module and an image processing module: the image processing module analyzes and processes the collected data with traditional image processing methods, and the intelligent learning module analyzes and processes the collected data with a deep learning method, so as to obtain the foreground, background and environmental features of the current scene.
In addition, the intelligent learning module is also used to learn the scene parameters of different scenes, so as to judge the situation of the current scene and/or predict the situation about to occur and formulate countermeasure plans.
The data acquisition module includes a photosensitive sensor, a sound sensor, a temperature and humidity sensor and an image acquisition module. The photosensitive sensor acquires the brightness contrast data of the scene; the sound sensor acquires sound data; the temperature and humidity sensor acquires temperature and humidity data; the image acquisition module acquires the number, position and type data of people, objects and events in the scene image. The image acquisition module may be a high-definition camera, a high-speed camera, or the like.
The traditional image processing methods include threshold segmentation, color picking, brightness contrast, Canny operator contour extraction, image area calculation and the like. The deep learning methods include RCNN target detection, FCN foreground segmentation and the like.
The intelligent learning module can analyze and process the data by means of convolution and pooling computations, using a model trained with the deep learning method. The trained model can be obtained by performing model learning and training on a data set containing a large number of positive and negative samples, using a deep neural network structure. When building this data set, if the data acquisition module collects scene data other than the scene the user habitually occupies, the intelligent learning module analyzes that data: if it matches the features of the user's habitual scene, it is used as a positive sample; otherwise it is used as a negative sample.
In a preferred embodiment, the visual scene intelligent understanding system further includes a communication module, which communicates with the user terminal and sends the collected data, the judged situation of the current scene and the optional countermeasure plans to the user.
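A minimal sketch of this communication step, assuming the user terminal exposes an HTTP endpoint (the URL, the JSON payload layout and the function name send_report are hypothetical, not part of the disclosure), could look like this:

    import json
    import urllib.request

    def send_report(user_endpoint, scene_situation, countermeasures):
        # Package the judged scene situation and the optional countermeasure plans
        # and push them to the user's terminal.
        payload = json.dumps({"situation": scene_situation,
                              "countermeasures": countermeasures}).encode("utf-8")
        request = urllib.request.Request(user_endpoint, data=payload,
                                         headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request) as response:
            return response.status

    # send_report("http://user-terminal.example/notify", "possible fire hazard",
    #             ["check the area", "use the nearest extinguisher"])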
Fig. 2 shows a visual scene intelligent understanding method according to the application, comprising the following steps:
Step 1: acquiring the current scene data.
The photosensitive sensor is used to acquire the brightness contrast data of the scene, the sound sensor to acquire sound data, the temperature and humidity sensor to acquire temperature and humidity data, and the image acquisition module to acquire the number, position and type data of people, objects and events in the scene image.
Step 2: processing the current scene information with a deep learning method and traditional image analysis methods to obtain the foreground, background and context features of the current scene.
Step 3: learning the context data of different scenes with the deep learning method, so as to judge the situation of the current scene or predict the situation about to occur and formulate countermeasure plans.
Here, judging the situation of the current scene includes: scene foreground target positioning and detection, scene background information extraction, the present or predicted situation of the three-dimensional space, and optional handling schemes for the situation about to occur.
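How these judgment elements might be assembled into one current-scene description is sketched below; the dictionary layout and the helper name assemble_scene_report are assumptions:

    def assemble_scene_report(detections, background_features):
        # Combine foreground target positioning/detection with the extracted
        # background information; the learning module fills in the situation later.
        return {
            "foreground_targets": [
                {"type": d["type"],
                 "box": d["box"],
                 "position": ((d["box"][0] + d["box"][2]) / 2, (d["box"][1] + d["box"][3]) / 2)}
                for d in detections
            ],
            "background": background_features,
            "present_case": None,          # set by the intelligent learning module
            "predicted_case": None,
            "optional_schemes": [],
        }

    report = assemble_scene_report(
        [{"type": "person", "box": (40, 60, 120, 200)}],
        {"brightness_contrast": 0.42, "scene_type": "indoor"},
    )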
In a preferred embodiment, the method further includes Step 4: storing the analysis and processing results of the foreground, background and context features of the current scene in the data storage module as model training data for the next round of deep learning.
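Step 4 could be sketched as a simple append-only store that the next training round reads back; the JSON-file layout and the directory name stand in for the data storage module and are assumptions:

    import json
    from pathlib import Path

    def store_for_next_training(analysis_result, store_dir="scene_history"):
        # Persist the foreground/background/context analysis of the current scene so
        # that it can be reused as training material in the next deep-learning round.
        Path(store_dir).mkdir(exist_ok=True)
        index = len(list(Path(store_dir).glob("sample_*.json")))
        with open(Path(store_dir) / f"sample_{index:06d}.json", "w") as f:
            json.dump(analysis_result, f)

    store_for_next_training({"foreground": ["person"], "background": "workshop",
                             "context": {"temperature_c": 26.5, "humidity_percent": 60.0}})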
In conclusion the application is complicated to the luminance contrast of current scene, color, foreground target, background by being added The acquisition module of the environmental parameters such as degree, so that further comprising the background information of current scene when the target prospect of detection current scene Deng, traditional image analysis method is combined with deep learning method to obtain the prospect of scene, background and context parameter, and By learning the environmental parameter variation of different scenes, the case where to judge current scene or it is expected that the case where will occur and The optional processing scheme of scenario that will occur is coped with, when scene is unfavorable for user preference, user can be by optional Processing scheme make counter-measure.
The following is an embodiment in which the visual scene understanding method is applied to an industrial production line scene. Specifically:
Scene data are acquired by photosensitive sensors, high-definition cameras, high-speed cameras and the like. The scene data to be acquired include basic information such as the scene brightness contrast, the temperature and humidity, and the employees, products and processes of the production line contained in the scene image. After acquisition, the data are saved in the data storage module.
The data storage module sends the data to the visual processing and analysis module, in which the image processing module and the intelligent learning module can be configured as one unit for convenient packaged deployment. By pairing the traditional analysis methods of the image processing module (threshold segmentation, color picking, brightness contrast, Canny operator contour extraction, image area calculation) with the deep learning methods of the intelligent learning module (RCNN target detection, FCN foreground segmentation), and calling the model trained with the deep learning method, the current scene is analyzed to obtain data such as the employees at work, the products being produced, and the process of the production line in the current scene.
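The per-frame pairing of the two method families in this embodiment could be sketched as follows; the category names, the stub callables in the usage example and the helper name analyse_production_line are assumptions:

    def analyse_production_line(frame, detector_fn, traditional_fn):
        # Run the trained detector and the classic image-analysis pass on one frame
        # and merge both into a production-line view: workers, products, line process.
        detections = detector_fn(frame)
        features = traditional_fn(frame)
        workers = [d for d in detections if d["type"] == "person"]
        products = [d for d in detections if d["type"] == "product"]
        return {
            "workers": len(workers),
            "products": len(products),
            "brightness_contrast": features["brightness_contrast"],
            "process_step": None,          # matched later by the learning module
        }

    line_state = analyse_production_line(
        frame=None,
        detector_fn=lambda f: [{"type": "person"}, {"type": "product"}],
        traditional_fn=lambda f: {"brightness_contrast": 0.35},
    )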
According to the features of the industrial scene information, the intelligent learning module matches the above scene to an industrial scene in its learning model and classifies the current scene accordingly, labeling it as an industrial scene; the corresponding work clothing and behavior features are then labeled as production-line workers, and the corresponding article features are labeled as production materials or products, and so on. In this way, effective industrial scene features, biological features and object category features are mined, the current scene is reconstructed by modeling, and the current scene environment is presented to the user appropriately as a three-dimensional image or together with audio, so that the user can formulate countermeasure plans for the current scene situation. For example: if in the current industrial scene the temperature somewhere is high and the brightness is also high, and this is not explained by the requirements of the production process, the system judges whether a fire hazard exists, reminds the user in time to take countermeasures to eliminate the fire hazard, and provides reminders such as the nearest fire extinguisher, alarm, fire hydrant and escape route.
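The fire-hazard example could be expressed as a simple rule over the acquired zone readings; the thresholds, the reminder texts and the function name check_fire_hazard are illustrative assumptions:

    def check_fire_hazard(zone, process_requirements):
        # Flag a zone whose temperature and brightness are both abnormally high,
        # unless the production process itself accounts for the readings.
        expected = process_requirements.get(zone["name"], {})
        too_hot = zone["temperature_c"] > expected.get("max_temperature_c", 45.0)
        too_bright = zone["brightness"] > expected.get("max_brightness", 0.8)
        if too_hot and too_bright:
            return {
                "alert": "possible fire hazard near " + zone["name"],
                "reminders": ["nearest fire extinguisher", "alarm", "fire hydrant", "escape route"],
            }
        return None

    print(check_fire_hazard({"name": "packing area", "temperature_c": 60.0, "brightness": 0.95},
                            {"welding bay": {"max_temperature_c": 90.0}}))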
It should be understood that the visual scene understanding method of the application is also applicable to the detection of road traffic scenes, of surrounding facilities for food, clothing, housing and transportation, of petrol stations, of scenic spots and the like, so as to display the actual situation of the current scene.
Although the invention has been described with reference to preferred embodiments, various improvements can be made to it and components therein can be replaced with equivalents without departing from the scope of the invention. In particular, as long as there is no structural conflict, the technical features mentioned in the various embodiments can be combined in any way. The invention is not limited to the specific embodiments disclosed herein, but includes all technical solutions falling within the scope of the claims.

Claims (11)

1. A visual scene intelligent understanding system, characterized by comprising:
a data acquisition module, configured to acquire foreground, background and context parameter data of the current scene;
a data storage module, configured to store the data collected by the data acquisition module;
a visual processing and analysis module, comprising an intelligent learning module and an image processing module, wherein the image processing module analyzes and processes the collected data with traditional image processing methods, and the intelligent learning module analyzes and processes the collected data with a deep learning method, so as to obtain the foreground, background and context features of the current scene;
wherein the intelligent learning module learns the context data of different scenes, so as to judge the situation of the current scene and/or predict the situation about to occur and formulate countermeasure plans.
2. The visual scene intelligent understanding system according to claim 1, characterized in that the intelligent learning module analyzes and processes the data by means of convolution and pooling computations, using a model trained with the deep learning method.
3. The visual scene intelligent understanding system according to claim 2, characterized in that the trained model is obtained by performing model learning and training on a data set containing a large number of positive and negative samples, using a deep neural network structure.
4. The visual scene intelligent understanding system according to claim 3, characterized in that when the data acquisition module collects scene data other than the scene the user habitually occupies, the intelligent learning module analyzes that data: if it matches the features of the user's habitual scene, it is used as a positive sample; otherwise it is used as a negative sample.
5. The visual scene intelligent understanding system according to claim 1, characterized in that the system further comprises a communication module, which communicates with the user terminal and sends the collected data, the judgment result and the countermeasure plans to the user.
6. The visual scene intelligent understanding system according to claim 1, characterized in that the image processing and visual analysis identification module analyzes and processes the collected data with threshold segmentation, color picking, brightness contrast, Canny operator contour extraction and image area calculation.
7. The visual scene intelligent understanding system according to claim 1, characterized in that the deep learning methods include RCNN target detection and FCN foreground segmentation.
8. The visual scene intelligent understanding system according to claim 1, characterized in that the data acquisition module comprises a photosensitive sensor, a sound sensor, a temperature and humidity sensor and an image acquisition module, wherein:
the photosensitive sensor is used to acquire the brightness contrast data of the scene;
the sound sensor is used to acquire sound data;
the temperature and humidity sensor is used to acquire temperature and humidity data;
the image acquisition module is used to acquire the number, position and type data of people, objects and events in the scene image.
9. A visual scene intelligent understanding method, characterized by comprising the following steps:
S10: acquiring foreground, background and context parameter data of the current scene;
S20: processing the current scene data with a deep learning method and traditional image analysis methods to obtain the foreground, background and context features of the current scene;
S30: learning the context data of different scenes with the deep learning method, so as to judge the situation of the current scene and/or predict the situation about to occur and formulate countermeasure plans.
10. The visual scene intelligent understanding method according to claim 9, characterized in that in step S30 judging the situation of the current scene includes:
scene foreground target positioning and detection, scene background information extraction, the present or predicted situation of the three-dimensional space, and optional handling schemes for the situation about to occur.
11. The visual scene intelligent understanding method according to claim 9, characterized in that after step S30 the method further includes step S40: storing the situation estimate of the current scene as model training data for the next round of deep learning.
CN201910719181.6A 2019-08-05 2019-08-05 A kind of visual scene intelligent Understanding method and system Pending CN110443197A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910719181.6A CN110443197A (en) 2019-08-05 2019-08-05 A kind of visual scene intelligent Understanding method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910719181.6A CN110443197A (en) 2019-08-05 2019-08-05 A kind of visual scene intelligent Understanding method and system

Publications (1)

Publication Number Publication Date
CN110443197A true CN110443197A (en) 2019-11-12

Family

ID=68433423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910719181.6A Pending CN110443197A (en) 2019-08-05 2019-08-05 A kind of visual scene intelligent Understanding method and system

Country Status (1)

Country Link
CN (1) CN110443197A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104408469A (en) * 2014-11-28 2015-03-11 武汉大学 Firework identification method and firework identification system based on deep learning of image
CN105575119A (en) * 2015-12-29 2016-05-11 大连楼兰科技股份有限公司 Road condition climate deep learning and recognition method and apparatus
CN105939421A (en) * 2016-06-14 2016-09-14 努比亚技术有限公司 Terminal parameter adjusting device and method
CN108052914A (en) * 2017-12-21 2018-05-18 中国科学院遥感与数字地球研究所 A kind of forest forest resource investigation method identified based on SLAM and image
CN108416963A (en) * 2018-05-04 2018-08-17 湖北民族学院 Forest Fire Alarm method and system based on deep learning
CN109190575A (en) * 2018-09-13 2019-01-11 深圳增强现实技术有限公司 Assemble scene recognition method, system and electronic equipment
CN109858516A (en) * 2018-12-24 2019-06-07 武汉工程大学 A kind of fire and smog prediction technique, system and medium based on transfer learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GONG CHENG et al.: "When Deep Learning Meets Metric Learning: Remote Sensing Image Scene Classification via Learning Discriminative CNNs", IEEE Transactions on Geoscience and Remote Sensing *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111780334A (en) * 2020-08-06 2020-10-16 江苏浩金欧博环境科技有限公司 Combined air conditioning unit based on 5G network technology monitoring

Similar Documents

Publication Publication Date Title
CN108062349B (en) Video monitoring method and system based on video structured data and deep learning
CN109325429B (en) Method, device, storage medium and terminal for associating feature data
CN111726586A (en) Production system operation standard monitoring and reminding system
CN109886130A (en) Determination method, apparatus, storage medium and the processor of target object
CN109299703A (en) The method, apparatus and image capture device counted to mouse feelings
CN112819068B (en) Ship operation violation behavior real-time detection method based on deep learning
CN113642474A (en) Hazardous area personnel monitoring method based on YOLOV5
CN111222478A (en) Construction site safety protection detection method and system
CN105426828A (en) Face detection method, face detection device and face detection system
CN111353338B (en) Energy efficiency improvement method based on business hall video monitoring
CN104463869A (en) Video flame image composite recognition method
CN115170792B (en) Infrared image processing method, device and equipment and storage medium
CN110602446A (en) Garbage recovery reminding method and system and storage medium
CN116259002A (en) Human body dangerous behavior analysis method based on video
CN110674753A (en) Theft early warning method, terminal device and storage medium
CN112257527A (en) Mobile phone detection method based on multi-target fusion and space-time video sequence
CN111753610A (en) Weather identification method and device
CN105095891A (en) Human face capturing method, device and system
CN112633157B (en) Real-time detection method and system for safety of AGV working area
CN110443197A (en) A kind of visual scene intelligent Understanding method and system
CN106803937B (en) Double-camera video monitoring method, system and monitoring device with text log
CN114463779A (en) Smoking identification method, device, equipment and storage medium
CN115131826B (en) Article detection and identification method, and network model training method and device
CN114821486B (en) Personnel identification method in power operation scene
CN113837138B (en) Dressing monitoring method, dressing monitoring system, dressing monitoring medium and electronic terminal

Legal Events

Code  Description
PB01  Publication
SE01  Entry into force of request for substantive examination
RJ01  Rejection of invention patent application after publication (application publication date: 20191112)