CN109040425A - Information processing method and related product - Google Patents
- Publication number
- CN109040425A (publication number); application number CN201810707457.4A
- Authority
- CN
- China
- Prior art keywords
- target audio
- headset
- identity information
- preset
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/26—Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/72409—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
- H04M1/72412—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Computational Linguistics (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Environmental & Geological Engineering (AREA)
- Telephone Function (AREA)
Abstract
This application discloses an information processing method and related products. The method includes: receiving, by an electronic device, target audio sent by a first headset; performing a first analysis on the target audio to obtain identity information corresponding to the target audio; if the identity information is preset identity information, performing a second analysis on the target audio to obtain a user state corresponding to the target audio; determining a prompt message corresponding to the user state; and sending the prompt message to a second headset, where the prompt message contains the user state corresponding to the preset identity information. In this way, the electronic device can control the wireless headset to monitor the sleep state of a user, enriching the functions of the wireless headset.
Description
Technical Field
The present application relates to the field of electronic technologies, and in particular, to an information processing method and a related product.
Background
As wireless technology has matured, wireless earphones are connected to electronic devices such as mobile phones in more and more scenarios. People can use wireless earphones to listen to music, make calls, and perform various other functions. However, current wireless earphones have a single function, which degrades the user experience.
Disclosure of Invention
The embodiments of the application provide an information processing method and a related product, which enable an electronic device to control a wireless earphone to monitor the sleep state of a user, enriching the functions of the wireless earphone.
In a first aspect, an embodiment of the present application provides an information processing method, where the method includes:
receiving target audio sent by a first headset;
performing first analysis on the target audio to obtain identity information corresponding to the target audio;
if the identity information is preset identity information, performing second analysis on the target audio to obtain a user state corresponding to the target audio;
and determining a prompt message corresponding to the user state, and sending the prompt message to a second headset, wherein the prompt message contains the user state corresponding to the preset identity information.
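The four steps of the first aspect can be sketched as follows. This is a hypothetical Python illustration only: the function names and the trivial stand-in analyses are assumptions for demonstration, not the algorithms disclosed below.

```python
PRESET_IDENTITY = "infant"  # example preset identity

def first_analysis(audio):
    # Stand-in: pretend the identity is carried as metadata on the sample.
    return audio.get("identity")

def second_analysis(audio):
    # Stand-in: classify the user state by mean volume (threshold is illustrative).
    volumes = audio.get("volumes", [])
    mean = sum(volumes) / len(volumes) if volumes else 0
    return "non-sleep" if mean > 50 else "sleep"

def process_target_audio(audio):
    """Receive target audio, run both analyses, and build the prompt message."""
    identity = first_analysis(audio)                  # first analysis
    if identity != PRESET_IDENTITY:
        return None                                   # no prompt is generated
    state = second_analysis(audio)                    # second analysis
    # Prompt message to be sent to the second headset:
    return {"identity": identity, "state": state}
```

A call such as `process_target_audio({"identity": "infant", "volumes": [60, 70]})` would yield a prompt carrying the non-sleep state.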
In a second aspect, an embodiment of the present application provides an information processing apparatus, including:
the receiving unit is used for receiving the target audio transmitted by the first headset;
the analysis unit is used for carrying out first analysis on the target audio to obtain identity information corresponding to the target audio;
the analysis unit is further configured to perform second analysis on the target audio to obtain a user state corresponding to the target audio when the identity information is preset identity information;
and the sending unit is used for determining a prompt message corresponding to the user state and sending the prompt message to a second headset, wherein the prompt message contains the user state corresponding to the preset identity information.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, the program includes instructions for performing the steps in the first aspect of the embodiment of the present application, and the electronic device is a wireless headset or a charging box.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in the first aspect of the embodiment of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the information processing method and the related product described in the embodiments of the present application, the target audio sent by the first headset is received, the first analysis is performed on the target audio to obtain the identity information corresponding to the target audio, if the identity information is the preset identity information, the second analysis is performed on the target audio to obtain the user state corresponding to the target audio, the prompt message corresponding to the user state is determined, and the prompt message is sent to the second headset, where the prompt message includes the user state corresponding to the preset identity information, so that the wireless headset can be controlled by the electronic device to monitor the sleep state of the user, and the function of the wireless headset is enriched.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1A is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application;
fig. 1B is a system network architecture diagram of an information processing method disclosed in an embodiment of the present application;
fig. 1C is a schematic flow chart of an information processing method disclosed in an embodiment of the present application;
fig. 2 is a schematic structural diagram of a wireless headset disclosed in an embodiment of the present application;
FIG. 3 is a schematic flow chart diagram of another information processing method disclosed in the embodiments of the present application;
fig. 4 is a schematic structural diagram of another electronic device disclosed in the embodiments of the present application;
fig. 5A is a schematic structural diagram of an information processing apparatus disclosed in an embodiment of the present application;
fig. 5B is a schematic structural diagram of another information processing apparatus disclosed in the embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are only some embodiments of the present application, not all of them. All other embodiments that can be derived by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic devices may include various handheld devices, vehicle mounted devices, wireless headsets, computing devices or other processing devices connected to wireless modems having wireless communication capabilities, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal equipment (terminal device), and the like. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
In this embodiment, the wireless headset may specifically be a Bluetooth headset. It may be a common Bluetooth headset, that is, one that plays sound through a speaker, or it may be a bone conduction headset; the sound conduction mode of the wireless headset is not limited in this application. The wireless headset may establish a wireless connection with an electronic device.
optionally, the wireless headset may be an ear-hook headset, an ear-plug headset, or a headset, which is not limited in the embodiments of the present application.
The wireless headset may include a headset housing, a rechargeable battery (e.g., a lithium battery) disposed within the headset housing, a plurality of metal contacts for connecting the battery to a charging device, and a speaker assembly including a directional sound port and a driver unit. The driver unit includes a magnet, a voice coil, and a diaphragm, and emits sound through the directional sound port. The metal contacts are disposed on an exterior surface of the headset housing.
The following describes embodiments of the present application in detail.
Referring to fig. 1A, fig. 1A is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application, and the electronic device 100 may include a control circuit, which may include a storage and processing circuit 110. The storage and processing circuitry 110 may be a memory, such as a hard drive memory, a non-volatile memory (e.g., flash memory or other electronically programmable read-only memory used to form a solid state drive, etc.), a volatile memory (e.g., static or dynamic random access memory, etc.), etc., and the embodiments of the present application are not limited thereto. Processing circuitry in storage and processing circuitry 110 may be used to control the operation of electronic device 100. The processing circuit may be implemented based on one or more microprocessors, microcontrollers, digital master-slave headphone switch controllers, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry 110 may be used to run software in the electronic device 100, such as an internet browsing application, a Voice Over Internet Protocol (VOIP) telephone call application, an email application, a media playing application, operating system functions, and so forth. Such software may be used to perform control operations such as, for example, camera-based image capture, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functionality based on status indicators such as status indicator lights of light emitting diodes, touch event detection based on a touch sensor, functionality associated with displaying information on multiple (e.g., layered) displays, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in the electronic device 100, and the like, without limitation of embodiments of the present application.
The electronic device 100 may also include input-output circuitry 150. The input-output circuit 150 may be used to enable the electronic device 100 to input and output data, i.e., to allow the electronic device 100 to receive data from an external device and also to allow the electronic device 100 to output data from the electronic device 100 to the external device. The input-output circuit 150 may further include a sensor 170. The sensors 170 may include ambient light sensors, proximity sensors based on light and capacitance, touch sensors (e.g., based on optical touch sensors and/or capacitive touch sensors, where the touch sensors may be part of a touch display screen or used independently as a touch sensor structure), acceleration sensors, gravity sensors, and other sensors, among others.
Input-output circuitry 150 may also include one or more displays, such as display 130. Display 130 may include one or a combination of liquid crystal displays, organic light emitting diode displays, electronic ink displays, plasma displays, displays using other display technologies. Display 130 may include an array of touch sensors (i.e., display 130 may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by a transparent touch sensor electrode (e.g., an Indium Tin Oxide (ITO) electrode) array, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, and the like, and the embodiments of the present application are not limited thereto.
The audio component 140 may be used to provide audio input and output functionality for the electronic device 100. The audio components 140 in the electronic device 100 may include a speaker, a microphone, a buzzer, a tone generator, and other components for generating and detecting sound.
The communication circuit 120 may be used to provide the electronic device 100 with the capability to communicate with external devices. The communication circuit 120 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio frequency signals and/or optical signals. The wireless communication circuitry in communication circuitry 120 may include radio-frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless communication circuitry in communication circuitry 120 may include circuitry to support Near Field Communication (NFC) by transmitting and receiving near field coupled electromagnetic signals. For example, the communication circuit 120 may include a near field communication antenna and a near field communication transceiver. The communications circuitry 120 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuitry and antenna, and so forth.
The electronic device 100 may further include a battery, power management circuitry, and other input-output units 160. The input-output unit 160 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes and other status indicators, and the like.
A user may input commands through input-output circuitry 150 to control the operation of electronic device 100, and may use output data of input-output circuitry 150 to enable receipt of status information and other outputs from electronic device 100.
Referring to fig. 1B, fig. 1B is a system network architecture diagram for implementing the information processing method provided by the present application. The system includes a first headset and a second headset of a wireless headset, and an electronic device. The first headset and the second headset may each communicate with the electronic device through a wireless network, which may be based on Wi-Fi, Bluetooth, visible light communication, or invisible light communication (infrared or ultraviolet) technologies. A first data transmission link is established between the first headset and the electronic device through the wireless network; based on this link, voice data, image data, video data, and the like can be transmitted between the first headset and the electronic device. Likewise, a second data transmission link is established between the second headset and the electronic device through the wireless network, over which the same kinds of data can be transmitted.
Referring to fig. 1C, fig. 1C is a schematic flowchart of an information processing method disclosed in an embodiment of the present application, and the information processing method is applied to the electronic device shown in fig. 1A and the system shown in fig. 1B, and includes the following steps.
101. Receive the target audio transmitted by the first headset.
In this embodiment of the application, the first headset may be disposed near an infant. The electronic device may establish a first communication connection with the first headset and receive target audio sent by it, where the target audio is captured by the first headset. Optionally, the electronic device may establish the wireless communication connection at a preset time point, for example, the time when the user goes to sleep.
Optionally, in a sleep monitoring scenario, the target audio sent by the first headset may be received at preset time intervals, so that the electronic device receives one piece of target audio per interval. In this way, the size of each target audio can be controlled and the user's sleep state can be monitored in real time.
102. Perform first analysis on the target audio to obtain identity information corresponding to the target audio.
In the embodiment of the present application, in order to determine whether the target audio includes a sound emitted by an infant, for example the infant's crying, a first analysis may be performed on the target audio to determine the identity information corresponding to it.
Optionally, in the step 102, performing the first analysis on the target audio to obtain the identity information corresponding to the target audio may include the following steps:
21. performing feature extraction on the target audio to obtain a plurality of tone feature points;
22. generating a target characteristic curve according to the plurality of tone characteristic points;
23. matching the target characteristic curve with a preset characteristic curve template to obtain a matching value;
24. if the matching value is larger than a preset matching value, determining that the identity information of the target audio is the preset identity information.
A characteristic curve template of the user corresponding to the preset identity information, for example a template corresponding to the infant's voice, can be obtained in advance. The target characteristic curve can then be matched against this preset template; if the matching succeeds, the identity information corresponding to the target audio is the infant.
Optionally, the preset identity information and the preset characteristic curve template may be set by system default or by the user. Specifically, the user may capture audio data of the infant through the electronic device, analyze the audio data to obtain a plurality of characteristic points, and generate the characteristic curve template from these points. In this way the template can be personalized, which improves the accuracy of the matching value when a target characteristic curve is matched against it.
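As one way to make steps 21-24 concrete, the sketch below models the tone feature points and the template curve as lists of floats and uses cosine similarity as the matching value. The patent does not specify a particular matching metric, so this choice, like the threshold value, is purely illustrative.

```python
import math

def match_identity(feature_points, template, preset_match=0.9):
    """Steps 22-24: score the target curve against the preset template.

    Returns (is_preset_identity, matching_value). Cosine similarity is an
    assumed metric, not one named in the disclosure.
    """
    dot = sum(a * b for a, b in zip(feature_points, template))
    na = math.sqrt(sum(a * a for a in feature_points))
    nb = math.sqrt(sum(b * b for b in template))
    score = dot / (na * nb) if na and nb else 0.0
    # Step 24: accept the preset identity when the match exceeds the threshold.
    return score > preset_match, score
```

An identical curve and template give a matching value of 1.0, which exceeds the preset matching value of 0.9.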
103. If the identity information is the preset identity information, perform second analysis on the target audio to obtain a user state corresponding to the target audio.
In the embodiment of the application, the user state indicates whether the user is in a sleep state or a non-sleep state. If the identity information corresponding to the target audio is the preset identity information, the target audio includes a sound emitted by the infant; the volume of that sound can then be determined, and the infant can be judged to be in a sleep state or a non-sleep state according to the volume.
Optionally, in step 103, performing a second analysis on the target audio to obtain a user state corresponding to the target audio, which may include the following steps:
31. sampling volume values of the target audio to obtain a plurality of volume values;
32. determining a volume change trend according to the volume values;
33. if the volume change trend is from low to high, determining that the user state is a non-sleep state;
or,
34. determining a volume average of the plurality of volume values;
35. and if the volume average value is larger than a preset volume threshold value, determining that the user state is a non-sleep state.
The target audio is sampled to obtain a plurality of volume values corresponding to a plurality of time points, and a volume change trend is determined from these values. Specifically, the volume values can be arranged in time order to generate a volume change curve, which represents the volume change trend. If the curve indicates that the volume changes from low to high, the infant's sound is growing louder, and the infant has changed from a sleep state to a non-sleep state.
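A minimal sketch of the volume analysis in steps 31-35, assuming the sampled volume values are plain floats in time order. The low-to-high test (comparing the means of the two halves of the samples) and the default threshold of 50 are illustrative choices, not values from the disclosure.

```python
def user_state_from_volumes(volumes, preset_threshold=50.0):
    """Infer the user state from sampled volume values (needs >= 2 samples)."""
    half = len(volumes) // 2
    # Steps 32-33: crude low-to-high trend test over the two halves.
    rising = (sum(volumes[half:]) / (len(volumes) - half)
              > sum(volumes[:half]) / half)
    # Steps 34-35: mean-volume check as the alternative criterion.
    mean = sum(volumes) / len(volumes)
    if rising or mean > preset_threshold:
        return "non-sleep"
    return "sleep"
```

For example, a quiet-then-loud sequence such as `[10, 10, 60, 70]` is classified as non-sleep via the trend test, while a uniformly quiet one such as `[10, 10, 10, 10]` remains sleep.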
Optionally, consider that for the same actual volume of the infant's sound, the farther the first headset is from the infant, the smaller the volume value it detects, and the closer it is, the larger the detected value; that is, different detection distances affect the detected volume. In this embodiment of the application, the following steps may therefore be performed:
a1, receiving a target distance, sent by the first headset, between the first headset and the user corresponding to the preset identity information;
a2, determining a target volume threshold corresponding to the target distance according to a preset correspondence between distance and volume threshold, and taking the target volume threshold as the preset volume threshold.
In this embodiment of the present application, a target distance sent by the first headset may be received, where the target distance is the distance between the first headset and the infant. After the volume average of the plurality of volume values is determined in step 34, a target volume threshold corresponding to the target distance can be determined from the preset correspondence and used as the preset volume threshold; if the volume average is greater than this threshold, the infant is in a non-sleep state.
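The distance-to-threshold correspondence of steps a1-a2 might be implemented as a simple lookup table. The distances and threshold values below are invented for illustration; the patent only states that such a preset correspondence exists.

```python
# Hypothetical preset correspondence: (max distance in metres, volume threshold).
DISTANCE_TO_THRESHOLD = [
    (0.5, 60.0),  # very close: the headset hears the infant loudly
    (2.0, 45.0),
    (5.0, 30.0),  # far away: even quiet sounds may mean the infant is awake
]

def target_volume_threshold(target_distance):
    """Step a2: map the received target distance to a preset volume threshold."""
    for max_dist, threshold in DISTANCE_TO_THRESHOLD:
        if target_distance <= max_dist:
            return threshold
    return DISTANCE_TO_THRESHOLD[-1][1]  # beyond the table, use the last entry
```

The returned value then serves as the preset volume threshold against which the volume average of step 34 is compared.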
104. Determine a prompt message corresponding to the user state, and send the prompt message to a second headset, where the prompt message contains the user state corresponding to the preset identity information.
In the embodiment of the application, the parent may wear the second headset. When the user state is determined to be the non-sleep state, a prompt message corresponding to that state can be generated and sent to the second headset. The prompt message may be a preset voice prompt used to inform the wearer of the second headset that the infant is awake, so that the parent can learn of the infant's state in time.
Optionally, in this embodiment of the application, considering that the second headset may slip off the user's ear, for example when the user turns over during sleep, it may be determined whether the second headset is in a worn state before the prompt message is sent to it. Specifically, the following steps may be performed:
41. sending a wearing state detection instruction to the second headset, wherein the wearing state detection instruction is used for instructing the second headset to detect whether it is in a wearing state;
42. receiving a feedback message sent by the second headset, and if the feedback message indicates that the second headset is in the wearing state, executing the operation of sending the prompt message to the second headset.
By sending the wearing state detection instruction, the second headset can be controlled to detect whether it is being worn. If the second headset is not worn, the user may not hear the prompt message through it; if it is worn, the prompt message can be sent to the second headset.
Optionally, if the electronic device receives a feedback message indicating that the second headset is not in the wearing state, then, to ensure that the user can still learn of the infant's non-sleep state, the electronic device may detect the distance between itself and the parent. If the distance is smaller than a preset distance threshold, the electronic device may issue a prompt, for example a vibration prompt, so that the parent is woken when feeling the vibration and can then attend to the infant.
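Putting steps 41-42 and the vibration fallback together, a minimal delivery sketch might look as follows. `detect_wearing_state`, `play`, `distance_to_user`, and `vibrate` are hypothetical device interfaces, and the 2 m fallback threshold is an assumption.

```python
class Feedback:
    """Hypothetical feedback message returned by the second headset."""
    def __init__(self, worn: bool):
        self.worn = worn

def deliver_prompt(second_headset, electronic_device, prompt,
                   distance_threshold_m=2.0):
    # Step 41: instruct the second headset to detect its wearing state.
    feedback = second_headset.detect_wearing_state()
    # Step 42: deliver through the headset only when it is worn.
    if feedback.worn:
        second_headset.play(prompt)
        return "headset"
    # Fallback: vibrate locally if the parent is close enough to notice.
    if electronic_device.distance_to_user() < distance_threshold_m:
        electronic_device.vibrate()
        return "vibration"
    return "undelivered"
```

The return value records which path delivered the prompt, which is convenient when testing the decision logic with stubbed devices.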
It can be seen that, in the information processing method described in the embodiment of the present application, the electronic device receives the target audio sent by the first headset and performs the first analysis on it to obtain the identity information corresponding to the target audio. If the identity information is the preset identity information, the electronic device performs the second analysis on the target audio to obtain the user state corresponding to the target audio, determines the prompt message corresponding to the user state, and sends the prompt message to the second headset, where the prompt message includes the user state corresponding to the preset identity information.
In accordance with the above, please refer to fig. 2, which is a schematic structural diagram of a wireless headset according to an embodiment of the present application. The wireless headset includes a first headset and a second headset, which respectively correspond to the left and right ears of a user and can be used separately or in pairs. The wireless headset includes: a communication circuit 2001, and a microphone 2002, a speaker 2003 and a sensor 2004 connected to the communication circuit 2001, where the sensor 2004 may specifically include at least one of: a distance sensor, a pressure sensor, a proximity sensor, and the like.
The wireless headset described above with reference to fig. 2 can be used to implement the following functions:
the first headset is used for acquiring a target audio and sending the target audio to the electronic device, where the target audio is subjected by the electronic device to a first analysis to obtain identity information corresponding to the target audio and, when the identity information is preset identity information, to a second analysis to obtain a user state corresponding to the target audio;
the second headset is configured to receive a prompt message sent by the electronic device, where the prompt message is a prompt message corresponding to the user state, and the prompt message includes the user state corresponding to the preset identity information.
The target audio can be acquired through the microphone of the first headset. Optionally, in a sleep monitoring scenario, the target audio is acquired in real time and sent to the electronic device at a preset time interval; specifically, after each interval elapses, the target audio within a preset time length is sent to the electronic device, so that the size of each transmission can be controlled while the sleep state of the user is monitored in real time.
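The interval-based forwarding policy above can be sketched as follows. `mic.read(seconds)` and `send(samples)` are hypothetical interfaces of the first headset, and the default interval and chunk lengths are assumptions; the patent only requires a preset interval and a preset audio length.

```python
import time

def forward_audio(mic, send, interval_s=5.0, chunk_s=1.0, n_intervals=None):
    """Capture continuously but upload only `chunk_s` seconds of audio
    once per `interval_s`, bounding the size of each transmission.
    `n_intervals=None` runs forever, as a headset firmware loop would."""
    sent = 0
    while n_intervals is None or sent < n_intervals:
        time.sleep(max(interval_s - chunk_s, 0.0))  # idle part of the interval
        send(mic.read(chunk_s))                     # preset-length target audio
        sent += 1
```

Uploading a fixed-length chunk per interval, rather than streaming everything, is what keeps the per-transmission size controlled.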
Optionally, the first headset is further configured to obtain a target distance between the first headset and the user corresponding to the preset identity information (specifically, the target distance may be measured by a distance sensor) and send the target distance to the electronic device, where the target distance is used by the electronic device to determine a target volume threshold corresponding to the target distance according to the preset correspondence between distance and volume threshold, the target volume threshold being used as the preset volume threshold.
Optionally, the second headset is further configured to receive a wearing state detection instruction sent by the electronic device and to detect, according to the instruction, whether the second headset is in a wearing state; specifically, this may be detected through a pressure sensor or a proximity sensor in the second headset.
In accordance with the above, please refer to fig. 3, fig. 3 is a flowchart illustrating another information processing method disclosed in the embodiment of the present application, which is applied to the system shown in fig. 1B, and the information processing method includes the following steps.
301. A first headset acquires a target audio and a target distance between the first headset and a user corresponding to the preset identity information;
302. the first headset sends the target audio and the target distance to the electronic equipment.
303. And the electronic equipment performs first analysis on the target audio to obtain identity information corresponding to the target audio.
304. And if the identity information is preset identity information, the electronic equipment performs second analysis on the target audio to obtain a user state corresponding to the target audio.
305. The electronic equipment sends a wearing state detection instruction to the second headset;
306. the second headset acquires sensor data and sends the sensor data to the electronic equipment;
307. the electronic device determines the wearing state of the second headset according to the sensor data;
308. and the electronic equipment determines a prompt message corresponding to the user state and sends the prompt message to a second headset, wherein the prompt message contains the user state corresponding to the preset identity information.
The specific implementation processes of 301 to 308 may refer to corresponding descriptions in the method shown in fig. 1C, and are not described herein again.
It can be seen that, in the information processing method described in the embodiment of the present application, the first headset acquires the target audio and the target distance between the first headset and the user corresponding to the preset identity information, and sends both to the electronic device. The electronic device performs the first analysis on the target audio to obtain the identity information corresponding to the target audio and, if the identity information is the preset identity information, performs the second analysis on the target audio to obtain the user state corresponding to the target audio. The electronic device sends the wearing state detection instruction to the second headset, the second headset acquires sensor data and sends it to the electronic device, and the electronic device determines the wearing state of the second headset according to the sensor data, determines the prompt message corresponding to the user state, and sends the prompt message to the second headset, where the prompt message contains the user state corresponding to the preset identity information. In this way, the electronic device can control the wireless headset to monitor the sleep state of the user, enriching the functions of the wireless headset.
Referring to fig. 4, fig. 4 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application, and as shown in fig. 4, the electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the following steps:
receiving target audio sent by a first headset;
performing first analysis on the target audio to obtain identity information corresponding to the target audio;
if the identity information is preset identity information, second analysis is carried out on the target audio to obtain a user state corresponding to the target audio;
and determining a prompt message corresponding to the user state, and sending the prompt message to a second headset, wherein the prompt message contains the user state corresponding to the preset identity information.
In one possible example, in terms of performing the first analysis on the target audio to obtain the identity information corresponding to the target audio, the program includes instructions for performing the following steps:
performing feature extraction on the target audio to obtain a plurality of tone feature points;
generating a target characteristic curve according to the plurality of tone characteristic points;
matching the target characteristic curve with a preset characteristic curve template to obtain a matching value;
and if the matching value is larger than a preset matching value, determining the identity information of the target audio as the preset identity information.
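The four steps above (feature points, characteristic curve, matching value, comparison with a preset matching value) can be sketched as follows. The patent does not fix the feature or the matching metric, so this sketch uses a per-frame zero-crossing count as the tone feature, normalised (Pearson) correlation as the matching value, and 0.8 as the preset matching value; all three are assumptions.

```python
def tone_feature_points(audio, frame=160):
    """Zero-crossing count per frame, standing in for the patent's
    'tone feature points' (an assumption); the resulting list is the
    target characteristic curve."""
    points = []
    for i in range(0, len(audio) - frame + 1, frame):
        seg = audio[i:i + frame]
        points.append(sum(1 for a, b in zip(seg, seg[1:]) if a * b < 0))
    return points

def match_value(curve, template):
    """Pearson correlation between the target curve and a stored
    template curve: 1.0 is a perfect match, 0.0 no correlation."""
    n = min(len(curve), len(template))
    x, y = curve[:n], template[:n]
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def is_preset_identity(audio, template, preset_match=0.8):
    """Identity matches when the matching value exceeds the preset value."""
    return match_value(tone_feature_points(audio), template) > preset_match
```

In practice the template curve would be enrolled in advance from a recording of the infant's voice.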
In one possible example, in terms of performing the second analysis on the target audio to obtain the user state corresponding to the target audio, the program includes instructions for performing the following steps:
sampling volume values of the target audio to obtain a plurality of volume values;
determining a volume change trend according to the volume values;
if the volume change trend is from low to high, determining that the user state is a non-sleep state;
or,
determining a volume average of the plurality of volume values;
and if the volume average value is larger than a preset volume threshold value, determining that the user state is a non-sleep state.
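The two branches above (a low-to-high volume trend, or a volume average above the preset threshold) can be sketched together. RMS per frame as the sampled volume value, and a first-half versus second-half mean comparison as the trend test, are both assumptions; the patent leaves these details open.

```python
def volume_values(audio, frame=1600):
    """Sample one volume value (RMS level) per frame of the target audio."""
    vols = []
    for i in range(0, len(audio) - frame + 1, frame):
        seg = audio[i:i + frame]
        vols.append((sum(s * s for s in seg) / frame) ** 0.5)
    return vols

def user_state(vols, preset_volume_threshold):
    """Either branch signals a non-sleep state: a rising volume trend
    (approximated by comparing half-means, an assumption) or an average
    above the preset, possibly distance-adjusted, volume threshold."""
    half = len(vols) // 2
    rising = sum(vols[half:]) / (len(vols) - half) > sum(vols[:half]) / half
    average = sum(vols) / len(vols)
    return "non-sleep" if (rising or average > preset_volume_threshold) else "sleep"
```

Either branch alone suffices: a quiet but rising sound (the infant stirring) and a steadily loud sound (crying) both yield the non-sleep state.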
In one possible example, the program further includes instructions for performing the steps of:
receiving a target distance, sent by the first headset, between the first headset and a user corresponding to the preset identity information;
and determining a target volume threshold corresponding to the target distance according to the preset correspondence between distance and volume threshold, and taking the target volume threshold as the preset volume threshold.
In one possible example, the program further includes instructions for performing the steps of:
sending a wearing state detection instruction to the second headset, wherein the wearing state detection instruction is used for instructing the second headset to detect whether it is in a wearing state;
and receiving a feedback message sent by the second headset, and if the feedback message indicates that the second headset is in the wearing state, executing the operation of sending the prompt message to the second headset.
It is understood that, in order to realize the above functions, the electronic device comprises corresponding hardware structures and/or software modules for performing each function. Those of skill in the art will readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments provided herein can be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Referring to fig. 5A, fig. 5A is a schematic structural diagram of an information processing apparatus according to an embodiment of the present application, applied to an electronic device, where the apparatus includes: the receiving unit 501, the analyzing unit 502, and the sending unit 503 are specifically as follows:
a receiving unit 501, configured to receive a target audio sent by a first headset;
an analyzing unit 502, configured to perform a first analysis on the target audio to obtain identity information corresponding to the target audio;
the analyzing unit 502 is further configured to perform a second analysis on the target audio to obtain a user state corresponding to the target audio when the identity information is preset identity information;
a sending unit 503, configured to determine a prompt message corresponding to the user status, and send the prompt message to a second headset, where the prompt message includes the user status corresponding to the preset identity information.
Optionally, in terms of performing the first analysis on the target audio to obtain the identity information corresponding to the target audio, the analysis unit 502 is specifically configured to:
performing feature extraction on the target audio to obtain a plurality of tone feature points;
generating a target characteristic curve according to the plurality of tone characteristic points;
matching the target characteristic curve with a preset characteristic curve template to obtain a matching value;
and if the matching value is larger than a preset matching value, determining the identity information of the target audio as the preset identity information.
Optionally, in terms of performing the second analysis on the target audio to obtain the user state corresponding to the target audio, the analysis unit 502 is specifically configured to:
sampling volume values of the target audio to obtain a plurality of volume values;
determining a volume change trend according to the volume values;
if the volume change trend is from low to high, determining that the user state is a non-sleep state;
or,
determining a volume average of the plurality of volume values;
and if the volume average value is larger than a preset volume threshold value, determining that the user state is a non-sleep state.
Optionally, as shown in fig. 5B, which is a modified structure of the information processing apparatus shown in fig. 5A, the apparatus may further include a determining unit 504 compared with fig. 5A, wherein:
the receiving unit 501 is further configured to receive a target distance, sent by the first headset, between the first headset and a user corresponding to the preset identity information;
the determining unit 504 is configured to determine a target volume threshold corresponding to the target distance according to a preset correspondence between the distance and the volume threshold, and use the target volume threshold as the preset volume threshold.
Optionally, the sending unit 503 is further configured to send a wearing state detection instruction to the second headset, wherein the wearing state detection instruction is used for instructing the second headset to detect whether it is in a wearing state;
the receiving unit 501 is further configured to receive a feedback message sent by the second headset and, if the feedback message indicates that the second headset is in the wearing state, execute the operation of sending the prompt message to the second headset.
It can be seen that, in the information processing apparatus described in the embodiment of the present application, the electronic device receives the target audio sent by the first headset and performs the first analysis on it to obtain the identity information corresponding to the target audio. If the identity information is the preset identity information, the electronic device performs the second analysis on the target audio to obtain the user state corresponding to the target audio, determines the prompt message corresponding to the user state, and sends the prompt message to the second headset, where the prompt message includes the user state corresponding to the preset identity information. In this way, the electronic device can control the wireless headset to monitor the sleep state of the user, enriching the functions of the wireless headset.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to perform part or all of the steps of any one of the methods as described in the above method embodiments, and the computer includes a wearable device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising a wearable device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, and a magnetic or optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash disk, ROM, RAM, magnetic or optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific implementation and application scope, and in view of the above, the content of the present specification should not be construed as a limitation to the present application.
Claims (10)
1. An information processing method, characterized in that the method comprises:
receiving target audio sent by a first headset;
performing first analysis on the target audio to obtain identity information corresponding to the target audio;
if the identity information is preset identity information, second analysis is carried out on the target audio to obtain a user state corresponding to the target audio;
and determining a prompt message corresponding to the user state, and sending the prompt message to a second headset, wherein the prompt message contains the user state corresponding to the preset identity information.
2. The method of claim 1, wherein performing the first analysis on the target audio to obtain the identity information corresponding to the target audio comprises:
performing feature extraction on the target audio to obtain a plurality of tone feature points;
generating a target characteristic curve according to the plurality of tone characteristic points;
matching the target characteristic curve with a preset characteristic curve template to obtain a matching value;
and if the matching value is larger than a preset matching value, determining the identity information of the target audio as the preset identity information.
3. The method according to claim 1 or 2, wherein performing the second analysis on the target audio to obtain the user state corresponding to the target audio comprises:
sampling volume values of the target audio to obtain a plurality of volume values;
determining a volume change trend according to the volume values;
if the volume change trend is from low to high, determining that the user state is a non-sleep state;
or,
determining a volume average of the plurality of volume values;
and if the volume average value is larger than a preset volume threshold value, determining that the user state is a non-sleep state.
4. The method according to claim 3, further comprising:
receiving a target distance, sent by the first headset, between the first headset and a user corresponding to the preset identity information;
and determining a target volume threshold corresponding to the target distance according to the corresponding relation between the preset distance and the volume threshold, and taking the target volume threshold as the preset volume threshold.
5. The method according to any one of claims 1-4, further comprising:
sending a wearing state detection instruction to the second headset, wherein the wearing state detection instruction is used for instructing the second headset to detect whether it is in a wearing state;
and receiving a feedback message sent by the second headset, and if the feedback message indicates that the second headset is in the wearing state, executing the operation of sending the prompt message to the second headset.
6. An information processing apparatus characterized in that the apparatus comprises:
the receiving unit is used for receiving the target audio transmitted by the first headset;
the analysis unit is used for carrying out first analysis on the target audio to obtain identity information corresponding to the target audio;
the analysis unit is further configured to perform second analysis on the target audio to obtain a user state corresponding to the target audio when the identity information is preset identity information;
and the sending unit is used for determining a prompt message corresponding to the user state and sending the prompt message to a second headset, wherein the prompt message contains the user state corresponding to the preset identity information.
7. The information processing apparatus according to claim 6, wherein, in the aspect of performing the first analysis on the target audio to obtain the identity information corresponding to the target audio, the analysis unit is specifically configured to:
performing feature extraction on the target audio to obtain a plurality of tone feature points;
generating a target characteristic curve according to the plurality of tone characteristic points;
matching the target characteristic curve with a preset characteristic curve template to obtain a matching value;
and if the matching value is larger than a preset matching value, determining the identity information of the target audio as the preset identity information.
8. The information processing apparatus according to claim 6 or 7, wherein in the aspect of performing the second analysis on the target audio to obtain the user status corresponding to the target audio, the analysis unit is specifically configured to:
sampling volume values of the target audio to obtain a plurality of volume values;
determining a volume change trend according to the volume values;
if the volume change trend is from low to high, determining that the user state is a non-sleep state;
or,
determining a volume average of the plurality of volume values;
and if the volume average value is larger than a preset volume threshold value, determining that the user state is a non-sleep state.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-5.
10. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810707457.4A CN109040425B (en) | 2018-07-02 | 2018-07-02 | Information processing method and related product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109040425A true CN109040425A (en) | 2018-12-18 |
CN109040425B CN109040425B (en) | 2021-03-05 |
Family
ID=65521220
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810707457.4A Expired - Fee Related CN109040425B (en) | 2018-07-02 | 2018-07-02 | Information processing method and related product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109040425B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112533097A (en) * | 2019-09-19 | 2021-03-19 | Oppo广东移动通信有限公司 | Earphone in-box detection method, earphone box and storage medium |
CN113495967A (en) * | 2020-03-20 | 2021-10-12 | 华为技术有限公司 | Multimedia data pushing method, equipment, server and storage medium |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104433008A (en) * | 2014-11-09 | 2015-03-25 | 赵丽 | Intelligent Bluetooth monitoring bracelet |
CN104836905A (en) * | 2015-04-13 | 2015-08-12 | 惠州Tcl移动通信有限公司 | System adjusting method and apparatus based on user state |
CN105162958A (en) * | 2015-07-30 | 2015-12-16 | 广东欧珀移动通信有限公司 | Event reminding processing method, related device, and reminding system |
CN205286341U (en) * | 2015-12-31 | 2016-06-08 | 潍坊歌尔电子有限公司 | Infant guards device, earphone and monitor system |
US20160292576A1 (en) * | 2015-04-05 | 2016-10-06 | Smilables Inc. | Infant learning receptivity detection system |
CN106295158A (en) * | 2016-08-04 | 2017-01-04 | 青岛歌尔声学科技有限公司 | A kind of automatic aided management system of infant, management method and equipment |
US20170084131A1 (en) * | 2013-12-06 | 2017-03-23 | SkyBell Technologies, Inc. | Doorbell chime systems and methods |
CN107224147A (en) * | 2017-08-04 | 2017-10-03 | 无锡智汇空间投资管理有限公司 | It is a kind of that there is monitoring and the infanette of communication function |
CN107424627A (en) * | 2016-05-24 | 2017-12-01 | 葛莱儿婴儿产品股份有限公司 | System and method for autonomous baby soothing |
CN107580118A (en) * | 2017-08-29 | 2018-01-12 | 珠海格力电器股份有限公司 | Alarm clock control method and device and mobile terminal |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112533097A (en) * | 2019-09-19 | 2021-03-19 | Oppo广东移动通信有限公司 | Earphone in-box detection method, earphone box and storage medium |
CN113495967A (en) * | 2020-03-20 | 2021-10-12 | 华为技术有限公司 | Multimedia data pushing method, equipment, server and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109040425B (en) | 2021-03-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109040887B (en) | Master-slave earphone switching control method and related product | |
CN109068206B (en) | Master-slave earphone switching control method and related product | |
US11102697B2 (en) | Method for controlling earphone switching and earphone | |
CN108810693B (en) | Wearable device and device control device and method thereof | |
CN108710615B (en) | Translation method and related equipment | |
CN108886653B (en) | Earphone sound channel control method, related equipment and system | |
CN108966067B (en) | Play control method and related product | |
CN108668009B (en) | Input operation control method, device, terminal, earphone and readable storage medium | |
CN108541080B (en) | Method for realizing loop connection between first electronic equipment and second electronic equipment and related product | |
CN109067965B (en) | Translation method, translation device, wearable device and storage medium | |
CN109561420B (en) | Emergency help-seeking method and related equipment | |
CN109150221B (en) | Master-slave switching method for wearable equipment and related product | |
CN108897516B (en) | Wearable device volume adjustment method and related product | |
CN106445457A (en) | Headphone sound channel switching method and device | |
CN114077414A (en) | Audio playing control method and device, electronic equipment and storage medium | |
CN108834013B (en) | Wearable equipment electric quantity balancing method and related product | |
CN108882084B (en) | Wearable equipment electric quantity balancing method and related product | |
CN108600887B (en) | Touch control method based on wireless earphone and related product | |
CN109040425B (en) | Information processing method and related product | |
CN108668018B (en) | Mobile terminal, volume control method and related product | |
CN106126170B (en) | Sound effect setting method of terminal and terminal | |
CN108680181B (en) | Wireless earphone, step counting method based on earphone detection and related product | |
CN108958481B (en) | Equipment control method and related product | |
CN113411702B (en) | Sound channel configuration method and electronic equipment | |
CN108882085B (en) | Wearable equipment electric quantity balancing method and related product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20210305 |