CN114387976A - Underwater sound voice digital communication method based on voiceprint features and semantic compression

Info

Publication number: CN114387976A
Application number: CN202111598552.3A
Authority: CN (China)
Prior art keywords: voice, voiceprint, semantic, compression, input
Other languages: Chinese (zh)
Other versions: CN114387976B
Inventors: 申晓红, 王超, 赵瑞琴, 陈帆, 解伟亮
Current assignee: Northwestern Polytechnical University
Original assignee: Northwestern Polytechnical University
Application filed by Northwestern Polytechnical University; priority and filing date 2021-12-24
Publication of CN114387976A: 2022-04-22
Application granted; publication of CN114387976B: 2024-05-14
Legal status: Granted; Active

Classifications

    • G10L 17/04: Speaker identification or verification techniques; training, enrolment or model building
    • G10L 15/1815: Speech classification or search using natural language modelling; semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
    • G10L 17/02: Preprocessing operations, e.g. segment selection; pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; feature selection or extraction
    • G10L 19/00: Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; coding or decoding of speech or audio signals using source filter models or psychoacoustic analysis
    • G10L 2019/0001: Codebooks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention provides an underwater acoustic voice digital communication method based on voiceprint features and semantic compression. First, a semantic-voiceprint library is established and fitted with speech-rate features, achieving multi-dimensional feature extraction of the input voice. Second, the input voice is compressed, reducing the amount of data to be transmitted, which effectively lowers transmission energy consumption and shortens transmission time. Finally, the user identity is matched against the compression code at the receiving end, effectively improving the security of voice reception. The amount of transmitted data is reduced while the transmission of the sender's voice features is ensured, thereby achieving efficient underwater acoustic voice communication.

Description

Underwater sound voice digital communication method based on voiceprint features and semantic compression
Technical Field
The invention relates to the technical field of underwater acoustic voice communication, and in particular to an efficient underwater acoustic voice communication method that combines semantic recognition, voiceprint feature recognition, and data compression.
Background
With the increasing development and utilization of the oceans, research on underwater wireless communication has received growing attention. In the underwater environment, acoustic waves propagate far better than other information-carrying radiation such as electromagnetic waves, so underwater acoustic communication remains the most effective means of transmitting information underwater. Underwater acoustic voice communication, in particular, plays an important role in frogman operations, underwater engineering, marine scientific research, and similar fields, and has become a research hotspot in underwater acoustic communication.
According to whether analog or digital signals are transmitted, underwater acoustic voice communication systems can be divided into analog systems and digital systems. In the early development of the technology, analog underwater acoustic voice communication was mostly used because the analog techniques are simpler. In recent years, with the rapid development of underwater acoustic communication technology, digital communication has become the mainstream in contemporary underwater acoustic voice communication thanks to its strong anti-interference capability, the ease of implementing signal error detection and correction, and the convenience of building integrated communication networks and equipment.
However, current research on underwater acoustic voice digital communication focuses mainly on overcoming the bandwidth limitations, complex ocean ambient noise, and multipath effects of the underwater channel through the choice of modulation scheme, so voice communication still suffers from large transmitted data volumes and long transmission times. To address this, semantic compression of voice has been proposed: a mapping between voices and semantic codes is established, and only the semantic code is transmitted, reducing the amount of data. However, the voice reproduced by such methods carries only semantic information; the voice characteristics of the sender are not considered, which may cause the receiving end to misjudge the broadcast voice information.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides an underwater acoustic voice digital communication method based on voiceprint features and semantic compression, which reduces the amount of transmitted data while ensuring effective transmission of the sender's voice features.
The technical solution adopted by the invention comprises the following steps:
step 1: learning and modeling voiceprints of users through equipment, and distributing different voiceprint identity IDs (identities) to different users according to the voiceprintsvE {1, 1.. I }, so as to obtain different voiceprint characteristic models which correspond to different users one by one, and enable the device to identify a known user from which a certain voice comes or does not belong to the known user;
step 2: enabling different users to input voice K according to a voice content library predefined by requirements, enabling the voice content capacity of the voice content library to be K, enabling K to be equal to { 1.. multidot.K }, and enabling the equipment to have semantic features mkiAnd voiceprint feature vkiExtracting to complete the feature matching of the semantic features and the voiceprint features, so that the input voice and each user establish a matching relation and are recorded into a semantic-voiceprint library L;
Step 3: while the match between semantic and voiceprint features is being established, extract the speech-rate feature s_j, j ∈ {1, ..., J}, of the input voice and build a speech-rate model; then fit the semantic-voiceprint library with the speech-rate features to obtain the fitted feature y = f(m_ki, v_ki, s_j) corresponding to each voice, with i ∈ {1, ..., I}, k ∈ {1, ..., K}, j ∈ {1, ..., J};
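For illustration, the sketch below gives one possible concrete reading of the semantic-voiceprint library L of step 2 and the fitted feature y = f(m_ki, v_ki, s_j) of step 3. The patent leaves f abstract; here it is assumed to simply enumerate the (user i, content k, rate j) triples, which yields exactly the index space y ∈ {1, ..., I×K×J} used for the compression codes in step 4.

```python
# Sketch of the semantic-voiceprint library L and the fitted feature y.
# Assumption: f enumerates (user, content, rate) triples; the patent does
# not fix a concrete form for f.
I, K, J = 2, 50, 10            # users, content capacity, speech-rate levels

semantic_voiceprint_library: dict = {}   # (i, k) -> (m_ki, v_ki)

def record_entry(i: int, k: int, m_ki, v_ki) -> None:
    # Step 2: associate each input voice with its user in the library L.
    semantic_voiceprint_library[(i, k)] = (m_ki, v_ki)

def fitted_feature(i: int, k: int, j: int) -> int:
    # y = f(m_ki, v_ki, s_j): one index per (user, content, rate) triple.
    assert 1 <= i <= I and 1 <= k <= K and 1 <= j <= J
    return ((i - 1) * K + (k - 1)) * J + j   # y in {1, ..., I*K*J}
```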
Step 4: after the fitting is completed, establish the compression mapping for voice:
when a voice is input, if it belongs to the semantic-voiceprint library, extract its speech-rate feature and, combining it with the semantics and voiceprint, assign the voice a compression code N_y; each fitted feature y is assigned a unique compression code N_y, y ∈ {1, ..., I×K×J}, and the voice and its compression code are recorded in a compression code library, thereby establishing the complete compression mapping over every user, every voice content, and every speech rate; otherwise, discard the input voice and wait for a new voice input;
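Continuing the same assumptions, the compression mapping of step 4 can be read as a codebook keyed by the fitted feature y; voices outside the semantic-voiceprint library are rejected. Identifying N_y with y itself is an illustrative assumption; the patent only requires the codes to be unique.

```python
# Sketch of step 4's compression mapping (assumed realization: N_y = y).
I, K, J = 2, 50, 10

def fitted_feature(i: int, k: int, j: int) -> int:
    return ((i - 1) * K + (k - 1)) * J + j

# Build the complete codebook over every user, content, and speech rate.
compression_codebook = {
    fitted_feature(i, k, j): fitted_feature(i, k, j)   # y -> N_y (unique)
    for i in range(1, I + 1)
    for k in range(1, K + 1)
    for j in range(1, J + 1)
}

def compress(library: dict, i: int, k: int, j: int) -> int | None:
    # Reject voices not in the semantic-voiceprint library (the "otherwise" branch).
    if (i, k) not in library:
        return None                       # discard input, wait for a new voice
    return compression_codebook[fitted_feature(i, k, j)]
```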
Step 5: after the compression mapping is established, when a voice is input at the input end, first judge whether it belongs to the semantic-voiceprint library L: if it does, extract its semantics, voiceprint, and speech rate to obtain the user identity ID_v, and compress the voice to obtain its compression code; otherwise, discard the input voice and wait for a new voice input;
Step 6: after voice compression is finished, pack the compression code into a data packet p whose header carries the sender identity ID_t ∈ {1, ..., I}, the receiver identity ID_r ∈ {1, ..., I}, the user identity ID_v, and the compression code of the voice, and send the data packet p to the receiving end. The sender identity ID_t and the user identity ID_v have different meanings: ID_t is the user's ID number within the communication network, while ID_v is the user's voiceprint ID number;
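A possible wire layout for the data packet p of step 6 is sketched below. The patent does not fix field widths; one byte each for ID_t, ID_r, ID_v and a 16-bit compression code are assumed here, which suffices for I ≤ 255 users and I×K×J ≤ 65535 codes.

```python
# Sketch of an assumed packet layout for p: header (ID_t, ID_r, ID_v) plus
# the compression code N_y. Field widths are illustrative, not specified.
import struct

PACKET_FMT = ">BBBH"   # ID_t, ID_r, ID_v (1 byte each), N_y (2 bytes), big-endian

def pack_packet(id_t: int, id_r: int, id_v: int, n_y: int) -> bytes:
    return struct.pack(PACKET_FMT, id_t, id_r, id_v, n_y)

def unpack_packet(p: bytes) -> tuple[int, int, int, int]:
    return struct.unpack(PACKET_FMT, p)

# Example: sender 1 addresses receiver 2, claiming voiceprint ID 1, code 3.
p = pack_packet(1, 2, 1, 3)
assert unpack_packet(p) == (1, 2, 1, 3) and len(p) == 5
```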
Step 7: on receiving a data packet, the receiving end first judges whether the user identity ID_v in the packet header matches the voiceprint information corresponding to N_y in the compression code library: if it matches, the received data are decompressed to recover the semantic, voiceprint, and speech-rate information of the voice corresponding to the compression code, and the voice is broadcast; otherwise, the voice features are deemed mismatched and the data packet is discarded.
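Under the same enumeration assumption, the receiver check of step 7 reduces to decoding y back to its (i, k, j) triple and comparing the decoded voiceprint owner against the ID_v claimed in the header:

```python
# Sketch of step 7's check: the voiceprint ID recovered from N_y must match
# the ID_v in the packet header (assumes N_y = y and the enumeration above).
K, J = 50, 10

def decode_code(n_y: int) -> tuple[int, int, int]:
    # Invert y = ((i-1)*K + (k-1))*J + j.
    y0 = n_y - 1
    return y0 // (J * K) + 1, (y0 // J) % K + 1, y0 % J + 1   # (i, k, j)

def receive(id_v_header: int, n_y: int) -> tuple[int, int, int] | None:
    i, k, j = decode_code(n_y)
    if i != id_v_header:       # voiceprint mismatch: treat as forged, discard
        return None
    return i, k, j             # broadcast voice of user i, content k, rate j
```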
The benefit of the method is that, by combining semantic feature extraction, voiceprint feature extraction, speech-rate feature extraction, and data compression, the proposed underwater acoustic voice digital communication method based on voiceprint features and semantic compression effectively solves the problem of existing methods in which the broadcast voice carries only semantic information and cannot reflect the sender's voice characteristics. First, a semantic-voiceprint library is established and fitted with the speech-rate features, achieving multi-dimensional feature extraction of the input voice so that it can be better captured and restored. Second, the input voice is compressed, reducing the amount of data to transmit, which effectively lowers transmission energy consumption and shortens transmission time. Finally, the user identity is matched against the compression code at the receiving end, effectively improving the security of voice reception. The invention thus ensures efficient underwater acoustic voice communication.
Drawings
Fig. 1 is a general flow chart of voice broadcasting of the present invention.
Fig. 2 is a flow chart of semantic-voiceprint library creation of the present invention.
Fig. 3 is a flow chart of the compression process of the present invention.
Fig. 4 is a flow chart of the transmit-side voice compression of the present invention.
Fig. 5 is a data packet format of the present invention.
Fig. 6 is a flow chart of the receiving end voice broadcast of the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
To address the problem that, in existing underwater acoustic semantic-compression voice digital communication, the broadcast voice contains only semantic information and cannot reflect the voice characteristics, the invention combines semantic feature extraction, voiceprint feature extraction, speech-rate feature extraction, and data compression, reducing the amount of transmitted data while ensuring the transmission of voice features, thereby achieving efficient underwater acoustic voice communication.
Because underwater digital communication offers stronger anti-interference capability than analog communication, easier signal error detection and correction, and more convenient construction of integrated communication networks and equipment, the invention adopts digital voice communication technology. To solve the problem that the broadcast voice in existing semantic-compression schemes contains only semantic information and cannot reflect voice characteristics, an underwater acoustic voice digital communication method based on voiceprint features and semantic compression is proposed; by means of semantic, voiceprint, and speech-rate feature extraction together with data compression, the invention ensures efficient underwater acoustic voice communication.
The invention is further explained below with reference to the drawings, taking underwater voice communication between two users, user 1 and user 2, as an example; the corresponding flow is shown in Fig. 1.
The technical solution consists of three parts: building the semantic-voiceprint library, building the compression mapping, and transmitting the voice data. The implementation steps are as follows:
Step 1: the device learns and models the voiceprints of the 2 users by offline learning and, according to their different voiceprints, assigns them the voiceprint identities ID_v = 1 and ID_v = 2, so that the device can recognize which known user a given voice comes from, or that it comes from neither.
Step 2: user 1 and user 2 each input the voices k ∈ {1, ..., K} of a predefined voice content library; here the capacity of the library is set to K = 50, and each user inputs every voice in the library for the device to learn. The device extracts the semantic features m_k1, m_k2 and the voiceprint features v_k1, v_k2 of the two users' voices, k ∈ {1, ..., 50}, and completes the matching of semantic and voiceprint features, so that each input voice is associated with its user and recorded in the semantic-voiceprint library L. The library-building flow is shown in Fig. 2.
and step 3: extracting the speech speed feature s of the input speech while establishing the matching relation between the semantic feature and the voiceprint featurejJ belongs to { 1.,. J }, and a speech rate model is established. Here, the highest level of speech rate of the division is J10. Then, carrying out mode fitting on the semantic-voiceprint library and the speech speed characteristics, wherein at the moment, each user obtains the fitting characteristics corresponding to the speech, namely y ═ f (m)ki,vki,sj),i∈{1,2},k∈{1,...,50},j∈{1,...,10}。
Step 4: after the fitting is completed, the compression mapping for voice must be established. When a voice is input, if it belongs to the semantic-voiceprint library, its speech-rate feature is extracted and, combined with the semantics and voiceprint, a compression code N_y is assigned to the voice and recorded in the compression code library; otherwise the input voice is discarded and a new voice input is awaited. This establishes the complete compression mapping over every user, every voice content, and every speech rate. The compression flow is shown in Fig. 3.
Step 5: after the compression mapping is established, when a voice is input at the input end, it is first judged whether it belongs to the semantic-voiceprint library L: if it does, its semantics, voiceprint, and speech rate are extracted to obtain the user identity ID_v, and the voice is compressed into a compression code; otherwise the input voice is discarded and a new voice input is awaited. The sender-side compression flow is shown in Fig. 4. For example, if user 1 inputs the 1st content of the voice content library at speech rate 3, steps 1-4 yield the compression code N_y for that voice, with y = f(m_11, v_11, s_3).
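Under the enumeration assumed in the sketches above (the patent does not mandate any particular form for f), this example's index can be computed directly:

```python
# Worked example under the assumed enumeration: user 1, content 1, rate 3.
I, K, J = 2, 50, 10
i, k, j = 1, 1, 3
y = ((i - 1) * K + (k - 1)) * J + j
print(y)   # 3: N_y is the 3rd entry of the compression code library
```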
Step 6: for user 1, after the voice compression is completed, the data are packed into a data packet p whose header carries the sender identity ID_t = 1, the receiver identity ID_r = 2, the user identity ID_v = 1, and the compression code N_y of the voice, y = f(m_11, v_11, s_3); the packet is then sent to user 2. The packet format is shown in Fig. 5.
Step 7: when user 2 receives the data packet, it first judges whether the user identity ID_v = 1 in the packet header matches the voiceprint information corresponding to N_y in the compression code library. If the packet was genuinely sent by user 1, the information matches, and user 2 decompresses the received data to recover the semantic, voiceprint, and speech-rate information of the voice corresponding to the compression code and then broadcasts the voice. If instead the packet was forged by another user, that user does not know the compression codes of the different users in the code library, so it is difficult for the forger to make the user identity match the voiceprint information, and the device discards the packet. This improves the security of the voice communication. The receiver-side broadcast flow is shown in Fig. 6.
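Putting the assumed layouts together, the following sketch runs the Fig. 5/Fig. 6 exchange end to end: user 1's genuine packet passes the step 7 check, while a forged packet whose code belongs to another user's voiceprint is discarded. The field widths and the identification N_y = y remain illustrative assumptions, not the patent's specification.

```python
# End-to-end sketch of the example exchange (assumed layouts as above).
import struct

K, J = 50, 10
FMT = ">BBBH"                                   # ID_t, ID_r, ID_v, N_y

def code_owner(n_y: int) -> int:
    # Voiceprint ID encoded in N_y under the assumed enumeration.
    return (n_y - 1) // (J * K) + 1

def accept(packet: bytes) -> bool:
    _id_t, _id_r, id_v, n_y = struct.unpack(FMT, packet)
    return code_owner(n_y) == id_v              # the step 7 match test

genuine = struct.pack(FMT, 1, 2, 1, 3)          # user 1, content 1, rate 3
forged  = struct.pack(FMT, 1, 2, 1, 503)        # code owned by user 2's voiceprint
print(accept(genuine), accept(forged))          # True False
```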
By building the semantic-voiceprint library and fitting it with the speech-rate features, the invention achieves multi-dimensional feature extraction of the input voice, so that it can be better captured and restored. Compressing the input voice reduces the amount of data to transmit, effectively lowering transmission energy consumption and shortening transmission time. In addition, matching the user identity against the compression code at the receiving end effectively improves the security of voice reception. The invention thus ensures efficient underwater acoustic voice communication.
The above embodiments merely illustrate preferred embodiments of the present invention and do not limit its scope; various modifications and improvements made to the technical solution of the present invention by those skilled in the art without departing from its spirit shall fall within the protection scope defined by the claims.

Claims (1)

1. An underwater sound voice digital communication method based on voiceprint features and semantic compression, characterized by comprising the following steps:
step 1: learning and modeling voiceprints of users through equipment, and distributing different voiceprint identity IDs (identities) to different users according to the voiceprintsvE {1, 1.. I }, so as to obtain different voiceprint characteristic models which correspond to different users one by one, and enable the device to identify a known user from which a certain voice comes or does not belong to the known user;
step 2: enabling different users to input voice K according to a voice content library predefined by requirements, enabling the voice content capacity of the voice content library to be K, enabling K to be equal to { 1.. multidot.K }, and enabling the equipment to have semantic features mkiAnd voiceprint feature vkiExtracting to complete the feature matching of the semantic features and the voiceprint features, so that the input voice and each user establish a matching relation and are recorded into a semantic-voiceprint library L;
and step 3: extracting the speech speed feature s of the input speech while establishing the matching relation between the semantic feature and the voiceprint featurejJ belongs to { 1.,. J }, and a speech rate model is established; and performing mode fitting on the semantic-voiceprint library and the speech speed characteristics to obtain fitting characteristics y ═ f (m) corresponding to the speechki,vki,sj),i∈{1,...,I},k∈{1,...,K},j∈{1,...,J};
And 4, step 4: after the mode fitting is completed, establishing a compression mapping relation of voice;
when a voice is input, if the voice belongs to the semantic-voiceprint library, extracting the speech speed characteristic of the voice, and distributing a compressed code N for the voice by combining the semantic-voiceprintyThe fitting characteristic y is assigned with a unique compression code NyRecording voice and corresponding compression codes into a compression code base by y belonging to { 1.,. and I multiplied by K multiplied by J }, thereby establishing a complete compression mapping relation of each user, each voice content and the speed of speech; otherwise, abandoning the input voice and waiting for new voice input;
and 5: after the compression mapping relation is established, when voice is input at the input end, firstly, whether the voice belongs to a semantic-voiceprint library L is judged: if the voice belongs to the semantic-voiceprint library L, extracting the semantic, voiceprint and speed of the voice to obtain the user identity IDvPerforming voice compression to obtain a compressed code; otherwise, abandoning the input voice and waiting for new voice input;
step 6: after the voice compression is finished, the compression codes are packed into a data packet p, and the data packet p is identified by the identity ID of the sending endtE { 1.., I }, and the identity ID of the receiving endrE { 1.. multidata., I }, user identity IDvAnd the compression code corresponding to the voice, and sending the data packet p to the receiving end; sender identity IDtWith user identity IDvDifferent meaning, sender identity IDtThe ID number of the user in the communication network is characterized, and the user identity IDvA voiceprint ID number characterizing the user;
and 7: after receiving the data packet, the receiving end first determines whether the user identity IDv in the packet header matches the voiceprint information corresponding to Ny in the compressed code library: if the data is matched with the voice, decompressing the received data to obtain the semantic, voiceprint and speech speed information of the voice corresponding to the compressed code, and further broadcasting the voice; otherwise, the data packet is regarded as the voice characteristics are not matched, and the data packet is discarded.
CN202111598552.3A (granted as CN114387976B), filed 2021-12-24, priority 2021-12-24: Underwater sound voice digital communication method based on voiceprint features and semantic compression. Status: Active.

Priority Applications (1)

Application Number: CN202111598552.3A (granted as CN114387976B); Priority Date: 2021-12-24; Filing Date: 2021-12-24; Title: Underwater sound voice digital communication method based on voiceprint features and semantic compression

Applications Claiming Priority (1)

Application Number: CN202111598552.3A (granted as CN114387976B); Priority Date: 2021-12-24; Filing Date: 2021-12-24; Title: Underwater sound voice digital communication method based on voiceprint features and semantic compression

Publications (2)

Publication Number: CN114387976A; Publication Date: 2022-04-22
Publication Number: CN114387976B; Publication Date: 2024-05-14

Family

ID=81198523

Family Applications (1)

Application Number: CN202111598552.3A (granted as CN114387976B, Active); Priority Date: 2021-12-24; Filing Date: 2021-12-24; Title: Underwater sound voice digital communication method based on voiceprint features and semantic compression

Country Status (1)

CN: CN114387976B


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825857A (en) * 2016-03-11 2016-08-03 无锡吾芯互联科技有限公司 Voiceprint-recognition-based method for assisting deaf patient in determining sound type
US20180144742A1 (en) * 2016-11-18 2018-05-24 Baidu Online Network Technology (Beijing) Co., Ltd Method and apparatus for processing voice data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
党华; 仲顺安; 陈越洋: "高速自适应水声语音系统的设计与实现" (Design and Implementation of a High-Speed Adaptive Underwater Acoustic Voice System), 北京理工大学学报 (Transactions of Beijing Institute of Technology), no. 04, 15 April 2009 *

Also Published As

Publication number Publication date
CN114387976B (en) 2024-05-14

Similar Documents

Publication Publication Date Title
CN102111314B (en) Smart home voice control system and method based on Bluetooth transmission
CN103714823B (en) A kind of self adaptation subsurface communication method based on integrated voice coding
CN103402171B (en) Method and the terminal of background music is shared in call
CN105551517B (en) It is a kind of to be wirelessly transferred recording pen and recording system with application scenarios identification control
CN104917671A (en) Mobile terminal based audio processing method and device
CN102394724A (en) Highly-reliable data transmission method and device based on dual tone multiple frequency sound waves
CN101719911A (en) Method, device and system capable of transmitting multimedia data by Bluetooth and playing in real time
CN109949801A (en) A kind of smart home device sound control method and system based on earphone
CN106961639A (en) A kind of underwater communications system of interphone communication method under water and application this method
CN111145763A (en) GRU-based voice recognition method and system in audio
CN102781075A (en) Method for reducing communication power consumption of mobile terminal and mobile terminal
CN108964787A (en) A kind of information broadcast method based on ultrasound
CN104410973A (en) Recognition method and system for tape played phone fraud
CN110351419B (en) Intelligent voice system and voice processing method thereof
CN113395116A (en) Underwater sound voice digital transmission method based on semantic compression
CN101753657B (en) Method and device for reducing call noise
CN108399913A (en) High robust audio fingerprinting method and system
WO2011137872A2 (en) Method, system, and corresponding terminal for multimedia communications
CN107689226A (en) A kind of low capacity Methods of Speech Information Hiding based on iLBC codings
CN114387976B (en) Underwater sound voice digital communication method based on voiceprint features and semantic compression
KR20240100384A (en) Signal encoding/decoding methods, devices, user devices, network-side devices, and storage media
CN101478616A (en) Instant voice communication method
CN203278958U (en) Conversation transcription system
CN109089253A (en) A kind of audio compression Transmission system based on low-power consumption bluetooth
CN109637538A (en) A method of realizing voice control

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant