CN110197134A - Human action detection method and device - Google Patents

Human action detection method and device

Info

Publication number
CN110197134A
CN110197134A
Authority
CN
China
Prior art keywords
human body
key position
image
motion track
sample data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910393401.0A
Other languages
Chinese (zh)
Inventor
刘晨曦
张旭
颜杰
吴琦
龚纯斌
肖潇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Science And Technology Co Ltd
Original Assignee
Xiamen Science And Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Science And Technology Co Ltd
Priority to CN201910393401.0A
Publication of CN110197134A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a human action detection method and device. The method includes: acquiring, in real time, a video image containing a human body; detecting the key parts of the human body in the video image with a trained convolutional neural network model, and obtaining the position information of the key parts; determining the motion trajectory of the key parts based on the obtained position information; and comparing the determined motion trajectory with a preset standard trajectory to judge whether the motion trajectory of the key parts is compliant. In this way, it can be determined whether the motion trajectory of the key parts of the human body meets the specification.

Description

Human action detection method and device
Technical field
This application relates to the technical field of image detection, and in particular to a human action detection method and device.
Background technique
With users' rising demands on display-screen quality, display-screen manufacturers must inspect screens strictly before they leave the factory, so that only screens meeting the quality standard reach the market.
Currently, for some display-screen properties, inspection still relies on staff visually observing the display effect to find defects and flaws. Such manual inspection depends on the inspector's subjective judgment; because inspectors' working habits vary, some positions go unobserved, inspection may be careless, and actions may be non-standard, ultimately leading to false detections and missed detections.
Given the diversity of display-screen inspectors and their working habits, accurately checking whether an inspector's inspection actions are standard is of great significance for improving display-screen quality.
Summary of the invention
In view of this, the application provides a human action detection method and device to detect whether the actions of a human body are standard.
Specifically, the application is achieved through the following technical solutions:
In a first aspect, an embodiment of the present application provides a human action detection method, the method comprising:
acquiring, in real time, a video image containing the human body, detecting the key parts of the human body in the video image with a trained convolutional neural network model, and obtaining the position information of the key parts;
determining the motion trajectory of the key parts based on the obtained position information of the key parts;
comparing the determined motion trajectory with a preset standard trajectory, and judging whether the motion trajectory of the key parts is compliant.
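As an illustration of this three-step flow, here is a minimal Python sketch; `detect_keypoints` and `trajectory_distance` are hypothetical stand-ins for the trained CNN detector and the trajectory comparison described in the text, not the patented implementation:

```python
import cv2

def is_action_compliant(video_source, detect_keypoints, trajectory_distance,
                        standard_trajectory, threshold):
    """Sketch of the claimed pipeline: per-frame keypoint detection,
    trajectory construction, and comparison with a preset standard."""
    trajectory = []
    cap = cv2.VideoCapture(video_source)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Step 1: trained CNN model -> positions of the key parts (head, hands)
        trajectory.append(detect_keypoints(frame))
    cap.release()
    # Steps 2-3: compare the built trajectory against the preset standard one
    return trajectory_distance(standard_trajectory, trajectory) <= threshold
```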
In a second aspect, an embodiment of the present application provides a human action detection device, comprising:
a detection module, configured to acquire, in real time, a video image containing the human body, detect the key parts of the human body in the video image with a trained convolutional neural network model, and obtain the position information of the key parts;
a determining module, configured to determine the motion trajectory of the key parts based on the obtained position information of the key parts;
a judgment module, configured to compare the determined motion trajectory with a preset standard trajectory and judge whether the motion trajectory of the key parts is compliant.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the method described in the first aspect are implemented.
In a fourth aspect, an embodiment of the present application provides a computer device, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor; when the processor executes the computer program, the steps of the method described in the first aspect are implemented.
The human action detection method and device provided in the embodiments of the present application can accurately detect the actions of monitored personnel and judge whether those actions meet the specification.
Detailed description of the invention
Fig. 1 is a flow diagram of a human action detection method shown in an exemplary embodiment of the application;
Fig. 2 is a flow diagram of training the neural network shown in an exemplary embodiment of the application;
Fig. 3 is a structural diagram of a human action detection device shown in an exemplary embodiment of the application.
Specific embodiment
Exemplary embodiments are described in detail here, with examples illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings indicate the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the application; rather, they are merely examples of devices and methods consistent with some aspects of the application as detailed in the appended claims.
The terminology used in this application is for the purpose of describing particular embodiments only and is not intended to limit the application. The singular forms "a", "said" and "the" used in this application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the application, first information may also be referred to as second information, and similarly, second information may be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "while" or "in response to determining".
Fig. 1 shows a flow diagram of a human action detection method provided by one embodiment of the application. Referring to Fig. 1, the method comprises the following steps S101-S103:
S101: acquire, in real time, a video image containing the human body, detect the key parts of the human body in the video image with a trained convolutional neural network model, and obtain the position information of the key parts.
Referring to the embodiment shown in Fig. 2, before the trained convolutional neural network model detects the key parts of the human body in the video image, the trained model must first be obtained. The convolutional neural network model is trained through the following steps S201-S202:
S201: generate a sample data set and the annotation information of each sample image in the sample data set.
Specifically, images containing a human body and images not containing a human body are collected under the target scene as training data to generate the sample data set; a yolo model trained on the coco data set detects the images containing a human body among the massive sample data, and the head and hand key points of the inspector are annotated on those images.
The target scene here is the display-screen inspection scene. The training data mainly comprises image data with and without inspectors under the display-screen inspection environment, to fit that scene, plus some image data with and without people from other scenes to improve the model's robustness and compatibility. After the training data is collected, the image length and width dimensions and the annotated key-point coordinates are all mapped to the range 0 to 1. This removes the influence of differing scales between data items and between coordinates, so that all indicators are on the same order of magnitude and comparable, and it ensures that back-propagated gradients always point toward the minimum, accelerating convergence during model training.
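As a small illustration of the 0-to-1 mapping described above, here is a sketch assuming keypoints arrive as pixel (x, y) coordinates; the array layout is not specified by the patent:

```python
import numpy as np

def normalize_sample(image_hw, keypoints_xy):
    """Map annotated keypoint coordinates into [0, 1] by dividing by the
    image width and height, so all samples share one scale."""
    h, w = image_hw                                    # pixel dimensions
    pts = np.asarray(keypoints_xy, dtype=np.float32)   # shape (N, 2): (x, y)
    pts[:, 0] /= w
    pts[:, 1] /= h
    return pts

# e.g. a 1920x1080 frame with a head keypoint at pixel (960, 240)
print(normalize_sample((1080, 1920), [(960, 240)]))    # -> [[0.5, 0.2222...]]
```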
When the display-screen inspection scene is targeted and the standardness of the inspector's key parts is detected, the key parts of the human body may include: the head and hands of the human body.
The above target scene may also be, for example, a scene in which the sitting posture of a student in front of a screen is checked for standardness; in that case, the key parts of the human body may include the head and torso of the human body.
S202: train the established convolutional neural network according to each sample image in the sample data set and its annotation information, obtaining the parameters of the convolutional layers and batch normalization layers of the convolutional neural network.
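A minimal PyTorch-style sketch of S202, assuming a toy convolutional regressor over normalized coordinates; the layer sizes, loss, and `loader` are illustrative assumptions, not the patent's architecture:

```python
import torch
import torch.nn as nn

def build_model(num_keypoints=3):
    # Convolutional and batch-normalization layers: their parameters are
    # what S202 says training produces. Sizes are illustrative only.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, num_keypoints * 2),   # (x, y) per key part, in [0, 1]
    )

def train(model, loader, epochs=10, lr=1e-3):
    """loader is assumed to yield (images, keypoints) batches with
    coordinates already normalized to [0, 1] as described in S201."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()
    for _ in range(epochs):
        for images, keypoints in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), keypoints.flatten(1))
            loss.backward()
            optimizer.step()
```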
In one embodiment of the application, a method of applying the above trained model to compute the positions of the key parts of the human body is as follows:
Step A: when the human detection result deviates, adjust the position proposal with a symmetric spatial transformer network to obtain the refined feature $F_0$.
Step B: construct a single-person pose estimation network, input the refined feature $F_0$ into it, and map the output features back to the original space (the space before the symmetric spatial transformation) to obtain feature $F_1$; the aim is that the single-person pose estimation network still performs well even when the human detection box is inaccurate.
Step C: eliminate redundant poses from $F_1$ with non-maximum suppression to obtain the final head and hand key points.
The redundancy-elimination criterion used by the non-maximum suppression is defined as:

$$d(P_i, P_j \mid \Lambda) = K_{sim}(P_i, P_j \mid \sigma_1) + \lambda H_{sim}(P_i, P_j \mid \sigma_2)$$

where $\lambda$ is a weight coefficient; $K_{sim}(P_i, P_j \mid \sigma_1)$ is the pose distance, which counts the matching parts between two poses and measures their similarity, so that closer and more similar poses are eliminated; and $H_{sim}(P_i, P_j \mid \sigma_2)$ is the spatial distance between corresponding parts of the two poses.

The pose distance and the spatial distance follow the cited RMPE formulation:

$$K_{sim}(P_i, P_j \mid \sigma_1) = \sum_n \tanh\frac{c_i^n}{\sigma_1} \cdot \tanh\frac{c_j^n}{\sigma_1}, \qquad H_{sim}(P_i, P_j \mid \sigma_2) = \sum_n \exp\left(-\frac{(k_i^n - k_j^n)^2}{\sigma_2}\right)$$

where $K_{sim}$ is accumulated only over keypoints $k_j^n$ that fall within a box around $k_i^n$; $P_i$ and $P_j$ are the pose coordinates of redundant response points in $F_1$; $\sigma_1$ and $\sigma_2$ are standard-deviation parameters; $c_i^n$ and $c_j^n$ are the responses of the n-th feature point; $k_i^n$ and $k_j^n$ are the image coordinates of the n-th feature point; and $i$ and $j$ index the head and hand key points.

When the obtained non-maximum suppression result exceeds the threshold $\eta$, the result is taken as the detected head and hand key points.
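A minimal sketch of this parametric pose NMS over head/hand candidates; the greedy ordering, the fixed-radius stand-in for the box test, and the comparison direction (a candidate too similar to a kept pose is redundant) are interpretive assumptions, not the patent's exact procedure:

```python
import numpy as np

def pose_criterion(ci, ki, cj, kj, sigma1, sigma2, lam, radius=10.0):
    """d(Pi, Pj | Lambda) = Ksim + lam * Hsim for two candidate poses;
    c* are per-keypoint responses, k* are (N, 2) image coordinates."""
    near = np.linalg.norm(ki - kj, axis=1) < radius          # simplified box test
    ksim = np.sum(np.tanh(ci / sigma1) * np.tanh(cj / sigma1) * near)
    hsim = np.sum(np.exp(-np.sum((ki - kj) ** 2, axis=1) / sigma2))
    return ksim + lam * hsim

def pose_nms(poses, eta, **params):
    """poses: list of {"conf": (N,) responses, "xy": (N, 2) coords}.
    Keep the most confident poses; drop candidates whose criterion against
    an already-kept pose marks them as redundant."""
    poses = sorted(poses, key=lambda p: p["conf"].sum(), reverse=True)
    kept = []
    for p in poses:
        if not any(pose_criterion(q["conf"], q["xy"], p["conf"], p["xy"],
                                  **params) > eta for q in kept):
            kept.append(p)
    return kept
```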
S102: based on the obtained position information of the key parts, determine the motion trajectory of the key parts.
In this embodiment, the trained model computes the head and hand position coordinates of the inspector in each frame image, and Kalman tracking is used to obtain a smooth trajectory.
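A minimal sketch of smoothing one key part's track with a constant-velocity Kalman filter via OpenCV; the state layout and noise covariances here are assumptions, since the patent only names Kalman tracking:

```python
import cv2
import numpy as np

# State (x, y, vx, vy); measurement (x, y): a constant-velocity model.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def smooth_track(raw_points):
    """Run per-frame detections through predict/correct and return the
    smoothed (x, y) trajectory."""
    smoothed = []
    for x, y in raw_points:
        kf.predict()
        state = kf.correct(np.array([[x], [y]], np.float32))
        smoothed.append((float(state[0, 0]), float(state[1, 0])))
    return smoothed
```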
S103: compare the determined motion trajectory with a preset standard trajectory and judge whether the motion trajectory of the key parts is compliant.
In this embodiment, the warping path distance between the standard action trajectory and the determined motion trajectory is computed according to the following formula:

$$D(i, j) = Dist(w_{ki}, w_{kj}) + \min\left[D(i-1, j),\; D(i, j-1),\; D(i-1, j-1)\right]$$

where $D(i, j)$ is the distance between two trajectory sequences of lengths $i$ and $j$, $Dist(w_{ki}, w_{kj})$ is the Euclidean distance, and $w_{ki}$ and $w_{kj}$ are the $i$-th coordinate point of the standard trajectory and the $j$-th coordinate point of the detected trajectory, respectively.
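This is the classic dynamic time warping recurrence (cf. the cited FastDTW paper); a minimal sketch, not the patent's exact code:

```python
import numpy as np

def dtw_distance(standard, detected):
    """Warping path distance between a standard trajectory and a detected
    trajectory, each a sequence of (x, y) points."""
    m, n = len(standard), len(detected)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dist = np.linalg.norm(np.subtract(standard[i - 1], detected[j - 1]))
            D[i, j] = dist + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[m, n]

# Compliance check: the action passes when the distance stays within a
# preset threshold range.
print(dtw_distance([(0, 0), (1, 1), (2, 2)], [(0, 0), (1, 1), (2, 2.5)]))  # 0.5
```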
Optionally, in practical applications, when the above embodiments of the application detect that a human action does not meet the specification, the staff may be prompted to correct their posture, for example by issuing a reminder.
When the method provided by the embodiments of the application is applied to checking the standardness of display-screen inspectors' operations, the inspection video of the inspector is first acquired; the position trajectories of the inspector's head and hands are extracted with a deep learning algorithm; the accuracy of the current inspection action is then judged against the standard operation trajectory obtained in advance; and finally the qualification of the current inspection operation is decided according to the threshold.
Referring to the embodiment shown in Fig. 3, this embodiment provides a human action detection device, comprising:
a detection module 301, configured to acquire, in real time, a video image containing the human body, detect the key parts of the human body in the video image with a trained convolutional neural network model, and obtain the position information of the key parts;
a determining module 302, configured to determine the motion trajectory of the key parts based on the obtained position information of the key parts;
a judgment module 303, configured to compare the determined motion trajectory with a preset standard trajectory and judge whether the motion trajectory of the key parts is compliant.
Optionally, the above device further comprises:
a generation module, configured to generate a sample data set and the annotation information of each sample image in the sample data set;
a training module, configured to train the established convolutional neural network according to each sample image in the sample data set and its annotation information, obtaining the parameters of the convolutional layers and batch normalization layers of the convolutional neural network.
Optionally, the above generation module is specifically configured to:
acquire images containing a human body and images not containing a human body under the target scene to generate the sample data set, detect the images containing a human body among the massive sample data with a yolo model trained on the coco data set, and annotate the head and hand key points of the inspector on those images.
Optionally, the key parts of the human body include: the head and hands of the human body.
Optionally, the above judgment module is specifically configured to:
compute the warping path distance between the standard trajectory and the detected trajectory, and judge that the motion trajectory of the key parts is compliant when the warping path distance falls within a preset threshold range.
The device embodiments may be implemented by software, or by hardware or a combination of hardware and software.
For the implementation of the functions and roles of each unit in the above device, see the implementation of the corresponding steps in the above method; details are not repeated here.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the human action detection method are implemented.
In a fourth aspect, an embodiment of the present application provides a computer device, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor; when the processor executes the computer program, the steps of the above human action detection method are implemented.
As the device embodiments essentially correspond to the method embodiments, the relevant parts can be found in the description of the method embodiments. The device embodiments described above are merely exemplary: units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the scheme of the application. Those of ordinary skill in the art can understand and implement it without creative effort.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, for example semiconductor memory devices (such as EPROM, EEPROM and flash memory devices), magnetic disks (such as internal or removable disks), magneto-optical disks, and CD-ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in, special-purpose logic circuitry.
Although this specification contains many specific implementation details, these should not be construed as limiting the scope of any invention or of what may be claimed, but rather as describing features of particular embodiments of particular inventions. Certain features described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or a variation of a sub-combination.
Similarly, although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Particular embodiments of the subject matter have thus been described. Other embodiments are within the scope of the appended claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
The foregoing is merely the preferred embodiments of the application and is not intended to limit the application. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the application shall be included within the scope of protection of the application.

Claims (10)

1. A human action detection method, characterized in that the method comprises:
acquiring, in real time, a video image containing the human body, detecting the key parts of the human body in the video image with a trained convolutional neural network model, and obtaining the position information of the key parts;
determining the motion trajectory of the key parts based on the obtained position information of the key parts;
comparing the determined motion trajectory with a preset standard trajectory, and judging whether the motion trajectory of the key parts is compliant.
2. The method according to claim 1, characterized in that, before the acquiring, in real time, a video image containing the human body, the method comprises:
generating a sample data set and the annotation information of each sample image in the sample data set;
training the established convolutional neural network according to each sample image in the sample data set and its annotation information, obtaining the parameters of the convolutional layers and batch normalization layers of the convolutional neural network.
3. The method according to claim 2, characterized in that the generating a sample data set and the annotation information of each sample image in the sample data set specifically comprises:
acquiring images containing a human body and images not containing a human body under the target scene to generate the sample data set, detecting the images containing a human body among the massive sample data with a yolo model trained on the coco data set, and annotating the head and hand key points of the inspector on those images.
4. The method according to claim 1, characterized in that the key parts of the human body include: the head and hands of the human body.
5. The method according to claim 1, characterized in that the comparing the determined motion trajectory with a preset standard trajectory and judging whether the motion trajectory of the key parts is compliant comprises:
computing the warping path distance between the standard trajectory and the detected trajectory, and judging that the motion trajectory of the key parts is compliant when the warping path distance falls within a preset threshold range.
6. A human action detection device, characterized by comprising:
a detection module, configured to acquire, in real time, a video image containing the human body, detect the key parts of the human body in the video image with a trained convolutional neural network model, and obtain the position information of the key parts;
a determining module, configured to determine the motion trajectory of the key parts based on the obtained position information of the key parts;
a judgment module, configured to compare the determined motion trajectory with a preset standard trajectory and judge whether the motion trajectory of the key parts is compliant.
7. The device according to claim 6, characterized by further comprising:
a generation module, configured to generate a sample data set and the annotation information of each sample image in the sample data set;
a training module, configured to train the established convolutional neural network according to each sample image in the sample data set and its annotation information, obtaining the parameters of the convolutional layers and batch normalization layers of the convolutional neural network.
8. The device according to claim 7, characterized in that the generation module is specifically configured to:
acquire images containing a human body and images not containing a human body under the target scene to generate the sample data set, detect the images containing a human body among the massive sample data with a yolo model trained on the coco data set, and annotate the head and hand key points of the inspector on those images.
9. The device according to claim 6, characterized in that the key parts of the human body include: the head and hands of the human body.
10. The device according to claim 6, characterized in that the judgment module is specifically configured to:
compute the warping path distance between the standard trajectory and the detected trajectory, and judge that the motion trajectory of the key parts is compliant when the warping path distance falls within a preset threshold range.
CN201910393401.0A 2019-05-13 2019-05-13 Human action detection method and device Pending CN110197134A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910393401.0A CN110197134A (en) 2019-05-13 2019-05-13 Human action detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910393401.0A CN110197134A (en) 2019-05-13 2019-05-13 Human action detection method and device

Publications (1)

Publication Number Publication Date
CN110197134A (en) 2019-09-03

Family

ID=67752638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910393401.0A Pending CN110197134A (en) Human action detection method and device

Country Status (1)

Country Link
CN (1) CN110197134A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091057A (en) * 2019-11-15 2020-05-01 腾讯科技(深圳)有限公司 Information processing method and device and computer readable storage medium


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106774894A (en) * 2016-12-16 2017-05-31 重庆大学 Interactive teaching methods and interactive system based on gesture
CN108038469A (en) * 2017-12-27 2018-05-15 百度在线网络技术(北京)有限公司 Method and apparatus for detecting human body
CN108216252A (en) * 2017-12-29 2018-06-29 中车工业研究院有限公司 A kind of subway driver vehicle carried driving behavior analysis method, car-mounted terminal and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HAOSHU FANG et al.: "RMPE: Regional Multi-person Pose Estimation", arXiv *
STAN SALVADOR et al.: "FastDTW: Toward Accurate Dynamic Time Warping in Linear Time and Space", ResearchGate *


Similar Documents

Publication Publication Date Title
Du et al. Articulated multi-instrument 2-D pose estimation using fully convolutional networks
US20160042652A1 (en) Body-motion assessment device, dance assessment device, karaoke device, and game device
CN109816624B (en) Appearance inspection device
CN109241829B (en) Behavior identification method and device based on space-time attention convolutional neural network
CN110245623A Real-time human movement posture correcting method and system
CN110009614A (en) Method and apparatus for output information
Hanson et al. Improving walking in place methods with individualization and deep networks
CN108205654A Motion detection method and device based on video
CN109978870A (en) Method and apparatus for output information
CN108354578A Capsule endoscope positioning system
CN115880558A (en) Farming behavior detection method and device, electronic equipment and storage medium
CN112037263A (en) Operation tool tracking system based on convolutional neural network and long-short term memory network
KR20180064907A (en) 3d body information recognition apparatus, apparatus and method for visualizing of health state
GB2596387A (en) Anonymisation apparatus, monitoring device, method, computer program and storage medium
CN115138059A Pull-up standard counting method, system and storage medium
CN114550027A (en) Vision-based motion video fine analysis method and device
CN110119768A Visual information fusion system and method for vehicle location
CN110197134A Human action detection method and device
CN115359558A Automatic physical fitness test judging method, device, system and medium based on computer vision
CN113283334B (en) Classroom concentration analysis method, device and storage medium
CN108549899A Image recognition method and device
KR102251704B1 (en) Method and Apparatus for Detecting Object Using Relational Query
CN116740618A (en) Motion video action evaluation method, system, computer equipment and medium
Sharma et al. Digital Yoga Game with Enhanced Pose Grading Model
Komiya et al. Head pose estimation and movement analysis for speech scene

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190903)