CN108157219A - Pet bark-stopping apparatus and method based on a convolutional neural network - Google Patents

Pet bark-stopping apparatus and method based on a convolutional neural network Download PDF

Info

Publication number
CN108157219A
CN108157219A (application CN201711407047.XA)
Authority
CN
China
Prior art keywords
pet
convolutional neural networks
sound
barks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711407047.XA
Other languages
Chinese (zh)
Inventor
孙宪福
于波
冯汉炯
闫泽涛
刘春燕
陈绍信
何睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN AEROSPACE INNOTECH CO Ltd
Shenzhen Academy of Aerospace Technology
Original Assignee
SHENZHEN AEROSPACE INNOTECH CO Ltd
Shenzhen Academy of Aerospace Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN AEROSPACE INNOTECH CO Ltd, Shenzhen Academy of Aerospace Technology filed Critical SHENZHEN AEROSPACE INNOTECH CO Ltd
Priority to CN201711407047.XA
Publication of CN108157219A
Legal status: Pending

Links

Classifications

    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K15/00 Devices for taming animals, e.g. nose-rings or hobbles; Devices for overturning animals in general; Training or exercising equipment; Covering boxes
    • A01K15/02 Training or exercising equipment, e.g. mazes or labyrinths for animals; Electric shock devices; Toys specially adapted for animals
    • A01K15/021 Electronic training devices specially adapted for dogs or cats
    • A01K15/022 Anti-barking devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Environmental Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Animal Husbandry (AREA)
  • Zoology (AREA)
  • Artificial Intelligence (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Toys (AREA)

Abstract

The present invention provides a pet bark-stopping method based on a convolutional neural network, comprising the following steps: S1, prepare training samples by selecting several segments of pet barking sound as the model's training data; S2, pre-process the raw pet barking sound; S3, compute the spectrogram; S4, feed the spectrogram into the convolutional neural network; S5, train the model; S6, identify the pet; S7, trigger the bark-stopping sound and play it back. The present invention also provides a pet bark-stopping device based on a convolutional neural network. The beneficial effects of the invention are: applying a convolutional neural network to the pet bark-stopping method improves the flexibility and noise immunity of bark stopping, causes no harm to the pet, and additionally allows the pet's identity to be recognized.

Description

Pet bark-stopping apparatus and method based on a convolutional neural network
Technical field
The present invention relates to bark-stopping methods, and more particularly to a pet bark-stopping apparatus and method based on a convolutional neural network.
Background technology
Besides surgically cutting the vocal cords of a pet (such as a dog) or fitting it with a muzzle, traditional bark-stopping methods also use simple electronic bark-stopping devices, such as vibration, ultrasonic, or electric-shock types.
The shortcomings of the conventional methods are as follows:
(1) They are inflexible and can easily harm the pet.
(2) They cannot identify which pet produced the triggering bark.
Summary of the invention
In order to solve the problems in the prior art, the present invention provides a pet bark-stopping apparatus and method based on a convolutional neural network.
The present invention provides a pet bark-stopping device based on a convolutional neural network, comprising a microphone, an operational amplifier, an embedded processor, a memory, a power amplifier, and a loudspeaker, wherein the output of the microphone is connected to the input of the operational amplifier, the output of the operational amplifier is connected to the input of the embedded processor, the embedded processor is connected to the memory, the output of the embedded processor is connected to the input of the power amplifier, and the output of the power amplifier is connected to the input of the loudspeaker.
The present invention also provides a pet bark-stopping method based on a convolutional neural network, comprising the following steps:
S1, prepare training samples: select several segments of pet barking sound as the model's training data;
S2, pre-process the raw pet barking sound;
S3, compute the spectrogram;
S4, feed the spectrogram into the convolutional neural network;
S5, train the model;
S6, identify the pet;
S7, trigger the bark-stopping sound and play it back.
As a further improvement of the present invention, the pre-processing in step S2 includes pre-emphasis, framing and windowing, and bark endpoint detection.
As a further improvement of the present invention, the convolutional neural network in step S4 comprises convolutional layers, down-sampling layers, and a fully connected layer; the convolutional layer, as the first layer of the convolutional neural network, performs the convolution operation directly on the two-dimensional spectrogram signal; the convolution kernel filter size is a 5x5 template, and the outputs produced by the different convolution kernel filters constitute the feature maps; each convolution kernel filter shares the same parameters, including the same weight matrix and bias term, and the convolutional-layer mathematical model used is as follows:
y = f(x * k + b)
where x is the input signal, k is the convolution kernel, * is the convolution operation, b is the bias term, f is the sigmoid function, and y is the output feature map;
the down-sampling layer is deployed after the convolutional layer, the down-sampling filter uses a 2x2 template, the sampling strategy takes the maximum value of the 4 corresponding pixels, and the fully connected layer passes the scores to a classifier.
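For illustration only (the patent contains no code), the convolutional-layer formula can be made concrete with a minimal NumPy/SciPy sketch; the input size, kernel values, and bias below are arbitrary placeholders, not values from the patent:

```python
import numpy as np
from scipy.signal import convolve2d

def conv_layer(x, k, b):
    """One feature map of the convolutional layer: y = f(x * k + b), f = sigmoid."""
    z = convolve2d(x, k, mode="valid") + b    # * is the 2-D convolution operation
    return 1.0 / (1.0 + np.exp(-z))           # sigmoid activation

x = np.random.randn(32, 32)   # assumed spectrogram excerpt
k = np.random.randn(5, 5)     # 5x5 convolution kernel (shared weight matrix)
b = 0.1                       # shared bias term
feature_map = conv_layer(x, k, b)
```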
As a further improvement of the present invention, in step S5 the model is trained on a PC; through forward propagation and backward propagation, the parameters are adjusted until the training model is optimal. So that the model trained on the PC can also be deployed well on the embedded mobile terminal, where computing resources are relatively scarce, the weights need to be quantized to reduce the model size, and a model file supported by the Android APK is generated.
As a further improvement of the present invention, in step S6 the trained model file is deployed in the embedded Android pet bark-stopping device based on a convolutional neural network described in claim 1; the microphone in the bark-stopping device collects the pet's barking signal, the spectrogram is extracted and used as the input of the convolutional neural network model to obtain a scoring probability value, and the scoring probability value is compared with a set threshold: if it exceeds the threshold, the identity of the pet under detection is confirmed; otherwise it is not confirmed.
As a further improvement of the present invention, in step S7, after the pet's identity has been confirmed, the device checks whether the amplitude of the pet's bark exceeds a set threshold; if it does, the bark-stopping sound stored in the memory is played back through the loudspeaker.
The beneficial effects of the invention are: applying a convolutional neural network to the pet bark-stopping method improves the flexibility and noise immunity of bark stopping, causes no harm to the pet, and additionally allows the pet's identity to be recognized.
Description of the drawings
Fig. 1 is a schematic diagram of the pet bark-stopping device based on a convolutional neural network of the present invention.
Fig. 2 is a flow diagram of the pet bark-stopping method based on a convolutional neural network of the present invention.
Specific embodiment
The invention will be further described below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 1, a pet bark-stopping device based on a convolutional neural network comprises a microphone 101, an operational amplifier 102, an embedded processor 103, a memory 104, a power amplifier 105, and a loudspeaker 106, wherein the output of the microphone 101 is connected to the input of the operational amplifier 102, the output of the operational amplifier 102 is connected to the input of the embedded processor 103, the embedded processor 103 is connected to the memory 104, the output of the embedded processor 103 is connected to the input of the power amplifier 105, and the output of the power amplifier 105 is connected to the input of the loudspeaker 106. The pet bark-stopping device based on a convolutional neural network is an embedded Android mobile-terminal bark-stopping device.
As shown in Fig. 1, in the pet bark-stopping device based on a convolutional neural network provided by the present invention, the microphone 101 collects the pet's barking sound, which, after system pre-processing, is output to the operational amplifier 102. After the barking signal is amplified by the operational amplifier 102, it is fed into the recognition model in the embedded processor 103. Once the model has recognized the correct pet identity, the bark-stopping sound recorded in advance by the pet's owner and stored in the memory 104 is played through the power amplifier 105 and the loudspeaker 106; hearing the owner's bark-stopping sound, the pet stops barking, thereby achieving the bark-stopping purpose.
Convolutional neural networks are among the most studied, most successful, and most widely applied models in current deep-learning systems, used in fields such as images, speech, and video, and have made huge contributions to artificial intelligence. The present invention goes beyond the traditional technical means and applies convolutional neural networks to bark recognition and bark stopping. Divided by function, the method mainly comprises training of the pet barking model, pet identity recognition, and triggering of the bark-stopping sound (generally the bark-stopping sound recorded in advance by the pet's owner). Model training is completed on a computer, while model recognition is completed on the embedded Android mobile-terminal bark-stopping device.
As shown in Fig. 2, a pet bark-stopping method based on a convolutional neural network comprises the following steps:
1. Training of the pet barking model
(1) Prepare training samples
Twenty segments of pet (e.g. pet dog) barking sound are selected as model training data, each segment being roughly 30 seconds long.
(2) Pre-processing
In order to extract the useful barking signal and reduce the influence of environmental noise, the signal needs to be pre-processed. The pre-processing methods adopted in this scheme include pre-emphasis, framing and windowing, and bark endpoint detection.
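Purely for illustration (the patent gives no code), a minimal Python/NumPy sketch of the pre-emphasis and framing-and-windowing steps might look as follows; the coefficient 0.97, frame length, hop size, and Hamming window are assumed typical values, not taken from the patent:

```python
import numpy as np

def pre_emphasis(signal, coeff=0.97):
    """Pre-emphasis: y[n] = x[n] - coeff * x[n-1] boosts the high frequencies."""
    return np.append(signal[0], signal[1:] - coeff * signal[:-1])

def frame_and_window(signal, frame_len=400, hop=160):
    """Split the signal into overlapping frames and apply a Hamming window."""
    n_frames = 1 + (len(signal) - frame_len) // hop   # assumes len(signal) >= frame_len
    window = np.hamming(frame_len)
    return np.stack([signal[i * hop : i * hop + frame_len] * window
                     for i in range(n_frames)])
```

Bark endpoint detection (trimming the silence around each bark) is not shown; a per-frame energy threshold would be one possible realization.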
(3) Compute the spectrogram
Considering the acoustic characteristics of pet barking, the spectrogram is used here as the input to the convolutional neural network. The spectrogram contains a large amount of information related to the characteristics of the barking sound, and it combines the advantages of the frequency spectrum and the time-domain waveform.
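As a sketch only, the spectrogram could be computed from the windowed frames of the previous snippet with a short-time Fourier transform; the FFT size and the log compression are assumptions, not requirements of the patent:

```python
import numpy as np

def spectrogram(frames, n_fft=512):
    """Log-magnitude spectrogram: one FFT column per windowed frame."""
    spec = np.abs(np.fft.rfft(frames, n=n_fft, axis=1))
    return np.log(spec + 1e-8).T   # shape: (n_fft // 2 + 1, n_frames)
```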
(4) Convolutional neural network
A typical convolutional neural network structure is used here, comprising convolutional layers, down-sampling layers, and a fully connected layer. The convolutional layer, as the first layer of the convolutional neural network, performs the convolution operation directly on the two-dimensional spectrogram signal. The convolution kernel filter size is a 5x5 template. The outputs produced by the different convolution kernels constitute the feature maps. Each convolution kernel filter shares the same parameters, including the same weight matrix and bias term. The convolutional-layer mathematical model used here is as follows:
y = f(x * k + b)
where x is the input signal, k is the convolution kernel, * is the convolution operation, b is the bias term, f is the sigmoid function, and y is the output feature map.
To increase the robustness of the system and reduce computational complexity, the input is down-sampled, and the down-sampling layer is deployed after the convolutional layer. The down-sampling filter uses a 2x2 template, and the sampling strategy takes the maximum value of the 4 corresponding pixels. The fully connected layer passes the scores to a classifier (such as a softmax classifier).
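A sketch of this architecture in PyTorch is given below for illustration only; the patent specifies the 5x5 kernels, sigmoid activations, 2x2 max-pooling, and a fully connected layer feeding a classifier, while the channel counts, number of layers, and the two-class output (enrolled pet vs. other) are assumptions:

```python
import torch
import torch.nn as nn

class BarkNet(nn.Module):
    """5x5 convolutions with sigmoid activations, 2x2 max-pooling,
    and a fully connected layer producing class scores."""
    def __init__(self, n_classes=2):            # assumed: enrolled pet vs. other
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5),     # y = f(x * k + b), f = sigmoid
            nn.Sigmoid(),
            nn.MaxPool2d(2),                    # 2x2 template, max of 4 pixels
            nn.Conv2d(8, 16, kernel_size=5),
            nn.Sigmoid(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(n_classes),           # fully connected layer -> scores
        )

    def forward(self, spec):                    # spec: (batch, 1, freq, time)
        return self.classifier(self.features(spec))
```

The softmax itself is applied when the scores are turned into a probability (see the recognition sketch further below) or inside the training loss.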
(5) Train the model
Model training is completed on a PC; through forward propagation and backward propagation, the parameters are adjusted until the training model is optimal. So that the model trained on the PC can also be deployed well on the embedded mobile terminal, where computing resources are relatively scarce, the weights need to be quantized to reduce the model size, and a model file supported by the Android APK is generated.
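A minimal training loop consistent with "forward propagation and backward propagation, adjusting the parameters" is sketched below; the optimizer, learning rate, epoch count, and data loader are assumptions, and the dynamic-quantization call only stands in for whatever weight-quantization and Android packaging toolchain is actually used, which the patent does not name:

```python
model = BarkNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # assumed optimizer
loss_fn = nn.CrossEntropyLoss()                            # softmax + log-loss

for epoch in range(20):                                     # assumed epoch count
    for spec_batch, labels in train_loader:                 # hypothetical DataLoader of spectrograms
        optimizer.zero_grad()
        logits = model(spec_batch)      # forward propagation
        loss = loss_fn(logits, labels)
        loss.backward()                 # backward propagation
        optimizer.step()                # adjust the parameters

# Quantize the weights to shrink the model before packaging it for the Android app.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
torch.save(quantized.state_dict(), "bark_model_quantized.pt")
```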
2. Pet identity recognition
The trained model is deployed in the embedded Android mobile-terminal bark-stopping device. The microphone 101 in the embedded Android mobile-terminal bark-stopping device collects the pet's barking signal, the spectrogram is extracted, and the spectrogram is used as the input of the convolutional neural network model to obtain a scoring probability value. If this value exceeds the set threshold, the pet under detection is confirmed; otherwise it is not confirmed.
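For illustration, this recognition step could be sketched as follows, reusing the earlier pre_emphasis, frame_and_window, spectrogram, and BarkNet sketches; the threshold of 0.8 is an assumption, as the patent leaves the threshold value unspecified:

```python
import torch

def recognize_pet(audio, model, threshold=0.8):
    """Return True if the scoring probability for the enrolled pet exceeds the threshold."""
    spec = spectrogram(frame_and_window(pre_emphasis(audio)))
    x = torch.tensor(spec, dtype=torch.float32)[None, None]   # (1, 1, freq, time)
    with torch.no_grad():
        prob = torch.softmax(model(x), dim=1)[0, 1].item()    # class 1: enrolled pet
    return prob > threshold
```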
3. Triggering of the bark-stopping sound
After the pet's identity has been confirmed, the device then checks whether the amplitude of the pet's bark exceeds the set threshold; if it does, the bark-stopping sound stored in the memory 104 is played back through the loudspeaker 106.
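A simple version of this trigger might be sketched as follows; the peak-amplitude measure and its threshold are assumptions, and play_stop_bark_sound() is a hypothetical stand-in for the device routine that plays the stored recording through the power amplifier 105 and loudspeaker 106:

```python
import numpy as np

def maybe_trigger(audio, model, amp_threshold=0.3):
    """Play the stored bark-stopping sound if the enrolled pet barks loudly enough."""
    if recognize_pet(audio, model) and np.max(np.abs(audio)) > amp_threshold:
        play_stop_bark_sound()   # hypothetical playback routine
```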
The pet bark-stopping apparatus and method based on a convolutional neural network provided by the present invention creatively applies convolutional neural networks to pet bark stopping, improving the flexibility and noise immunity of bark stopping without harming the pet; in addition, the present invention can also recognize the pet's identity.
The pet bark-stopping apparatus and method based on a convolutional neural network provided by the present invention has the following features:
1. Human speaker-recognition technology is applied to pet bark recognition.
2. A convolutional neural network is used as the model for training and recognition.
3. The spectrogram features of the pet's barking sound are extracted as the input to the convolutional neural network.
The above content is a further detailed description of the present invention in conjunction with specific preferred embodiments, and it cannot be concluded that the specific implementation of the present invention is limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may also be made without departing from the inventive concept, and all of these should be regarded as falling within the protection scope of the present invention.

Claims (7)

  1. A pet bark-stopping device based on a convolutional neural network, characterized by comprising a microphone, an operational amplifier, an embedded processor, a memory, a power amplifier, and a loudspeaker, wherein the output of the microphone is connected to the input of the operational amplifier, the output of the operational amplifier is connected to the input of the embedded processor, the embedded processor is connected to the memory, the output of the embedded processor is connected to the input of the power amplifier, and the output of the power amplifier is connected to the input of the loudspeaker.
  2. A pet bark-stopping method based on a convolutional neural network, characterized by comprising the following steps:
    S1, prepare training samples: select several segments of pet barking sound as the model's training data;
    S2, pre-process the raw pet barking sound;
    S3, compute the spectrogram;
    S4, feed the spectrogram into the convolutional neural network;
    S5, train the model;
    S6, identify the pet;
    S7, trigger the bark-stopping sound and play it back.
  3. The pet bark-stopping method based on a convolutional neural network according to claim 2, characterized in that the pre-processing in step S2 comprises pre-emphasis, framing and windowing, and bark endpoint detection.
  4. The pet bark-stopping method based on a convolutional neural network according to claim 2, characterized in that the convolutional neural network in step S4 comprises convolutional layers, down-sampling layers, and a fully connected layer; the convolutional layer, as the first layer of the convolutional neural network, performs the convolution operation directly on the two-dimensional spectrogram signal; the convolution kernel filter size is a 5x5 template, and the outputs produced by the different convolution kernel filters constitute the feature maps; each convolution kernel filter shares the same parameters, including the same weight matrix and bias term, and the convolutional-layer mathematical model used is as follows:
    y = f(x * k + b)
    where x is the input signal, k is the convolution kernel, * is the convolution operation, b is the bias term, f is the sigmoid function, and y is the output feature map;
    the down-sampling layer is deployed after the convolutional layer, the down-sampling filter uses a 2x2 template, the sampling strategy takes the maximum value of the 4 corresponding pixels, and the fully connected layer passes the scores to a classifier.
  5. The pet bark-stopping method based on a convolutional neural network according to claim 2, characterized in that in step S5, after the model is trained and optimized, a model file supported by the Android APK is generated.
  6. The pet bark-stopping method based on a convolutional neural network according to claim 5, characterized in that in step S6, the trained model file is deployed in the pet bark-stopping device based on a convolutional neural network according to claim 1; the microphone in the bark-stopping device collects the pet's barking signal, the spectrogram is extracted and used as the input of the convolutional neural network model to obtain a scoring probability value, and if this value exceeds the threshold, the identity of the pet under detection is confirmed; otherwise it is not confirmed.
  7. The pet bark-stopping method based on a convolutional neural network according to claim 6, characterized in that in step S7, after the pet's identity has been confirmed, the device checks whether the bark amplitude exceeds the set threshold; if it does, the bark-stopping sound stored in the memory is played back through the loudspeaker.
CN201711407047.XA 2017-12-22 2017-12-22 Pet bark-stopping apparatus and method based on a convolutional neural network Pending CN108157219A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711407047.XA CN108157219A (en) 2017-12-22 2017-12-22 Pet bark-stopping apparatus and method based on a convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711407047.XA CN108157219A (en) 2017-12-22 2017-12-22 Pet bark-stopping apparatus and method based on a convolutional neural network

Publications (1)

Publication Number Publication Date
CN108157219A true CN108157219A (en) 2018-06-15

Family

ID=62523500

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711407047.XA Pending CN108157219A (en) Pet bark-stopping apparatus and method based on a convolutional neural network

Country Status (1)

Country Link
CN (1) CN108157219A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110322894A (en) * 2019-06-27 2019-10-11 电子科技大学 A kind of waveform diagram generation and giant panda detection method based on sound
CN111866192A (en) * 2020-09-24 2020-10-30 汉桑(南京)科技有限公司 Pet interaction method, system and device based on pet ball and storage medium
CN115104548A (en) * 2022-07-11 2022-09-27 深圳市前海远为科技有限公司 Pet behavior adjustment and human-pet interaction method and device based on multimedia information technology
CN118435880A (en) * 2024-05-06 2024-08-06 深圳市安牛智能创新有限公司 Dynamic adaptation virtual reality dog training method and system and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5927233A (en) * 1998-03-10 1999-07-27 Radio Systems Corporation Bark control system for pet dogs
CN102231117A (en) * 2011-07-08 2011-11-02 盛乐信息技术(上海)有限公司 Software installment method and system for embedded platform
CN102499106A (en) * 2011-09-29 2012-06-20 鲁东大学 Bark stop device with voice recognition function
CN104794527A (en) * 2014-01-20 2015-07-22 富士通株式会社 Method and equipment for constructing classification model based on convolutional neural network
CN106454634A (en) * 2016-11-18 2017-02-22 深圳市航天华拓科技有限公司 Environment sound detection-based barking stopping apparatus and barking stopping method
CN106782504A (en) * 2016-12-29 2017-05-31 百度在线网络技术(北京)有限公司 Audio recognition method and device
CN106782501A (en) * 2016-12-28 2017-05-31 百度在线网络技术(北京)有限公司 Speech Feature Extraction and device based on artificial intelligence
CN106821337A (en) * 2017-04-13 2017-06-13 南京理工大学 A kind of sound of snoring source title method for having a supervision

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5927233A (en) * 1998-03-10 1999-07-27 Radio Systems Corporation Bark control system for pet dogs
CN102231117A (en) * 2011-07-08 2011-11-02 盛乐信息技术(上海)有限公司 Software installment method and system for embedded platform
CN102499106A (en) * 2011-09-29 2012-06-20 鲁东大学 Bark stop device with voice recognition function
CN104794527A (en) * 2014-01-20 2015-07-22 富士通株式会社 Method and equipment for constructing classification model based on convolutional neural network
CN106454634A (en) * 2016-11-18 2017-02-22 深圳市航天华拓科技有限公司 Environment sound detection-based barking stopping apparatus and barking stopping method
CN106782501A (en) * 2016-12-28 2017-05-31 百度在线网络技术(北京)有限公司 Speech Feature Extraction and device based on artificial intelligence
CN106782504A (en) * 2016-12-29 2017-05-31 百度在线网络技术(北京)有限公司 Audio recognition method and device
CN106821337A (en) * 2017-04-13 2017-06-13 南京理工大学 A kind of sound of snoring source title method for having a supervision

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110322894A (en) * 2019-06-27 2019-10-11 电子科技大学 A kind of waveform diagram generation and giant panda detection method based on sound
CN110322894B (en) * 2019-06-27 2022-02-11 电子科技大学 Sound-based oscillogram generation and panda detection method
CN111866192A (en) * 2020-09-24 2020-10-30 汉桑(南京)科技有限公司 Pet interaction method, system and device based on pet ball and storage medium
CN115104548A (en) * 2022-07-11 2022-09-27 深圳市前海远为科技有限公司 Pet behavior adjustment and human-pet interaction method and device based on multimedia information technology
CN115104548B (en) * 2022-07-11 2022-12-27 深圳市前海远为科技有限公司 Pet behavior adjustment and human-pet interaction method and device based on multimedia information technology
CN118435880A (en) * 2024-05-06 2024-08-06 深圳市安牛智能创新有限公司 Dynamic adaptation virtual reality dog training method and system and storage medium

Similar Documents

Publication Publication Date Title
CN108157219A (en) Pet bark-stopping apparatus and method based on a convolutional neural network
Cai et al. Sensor network for the monitoring of ecosystem: Bird species recognition
CN106504768B (en) Phone testing audio frequency classification method and device based on artificial intelligence
CN109473120A (en) A kind of abnormal sound signal recognition method based on convolutional neural networks
CN104732978B (en) The relevant method for distinguishing speek person of text based on combined depth study
Mielke et al. A method for automated individual, species and call type recognition in free-ranging animals
CN110047510A (en) Audio identification methods, device, computer equipment and storage medium
CN106775198A (en) A kind of method and device for realizing accompanying based on mixed reality technology
CN108875592A (en) A kind of convolutional neural networks optimization method based on attention
CN104036776A (en) Speech emotion identification method applied to mobile terminal
CN108711436A (en) Speaker verification's system Replay Attack detection method based on high frequency and bottleneck characteristic
CN107103903A (en) Acoustic training model method, device and storage medium based on artificial intelligence
CN109924194A (en) A kind of scarer and bird repellent method
Charrier et al. Vocal recognition of mothers by Australian sea lion pups: individual signature and environmental constraints
CN107818366A (en) A kind of lungs sound sorting technique, system and purposes based on convolutional neural networks
CN109255296A (en) A kind of daily Human bodys' response method based on depth convolutional neural networks
ES2849124B2 (en) ENVIRONMENTAL SOUND DETECTION METHOD AND SYSTEM FOR A COCHLEAR IMPLANT
CN113947376B (en) C/S (computer/subscriber line) card punching method and device based on multiple biological characteristics
CN112820275A (en) Automatic monitoring method for analyzing abnormality of suckling piglets based on sound signals
CN110047502A (en) The recognition methods of hierarchical voice de-noising and system under noise circumstance
CN109584904A (en) The sightsinging audio roll call for singing education applied to root LeEco identifies modeling method
CN109447199A (en) A kind of multi-modal criminal's recognition methods and system based on step information
CN112738338A (en) Telephone recognition method, device, equipment and medium based on deep learning
Kastelein et al. Temporary hearing threshold shift in harbor seals (Phoca vitulina) due to a one-sixth-octave noise band centered at 32 kHz
CN115810365A (en) Pig health early warning method and system based on pig sound

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180615