CN111772583B - Sleep monitoring analysis method and device of intelligent sound box and electronic equipment - Google Patents

Sleep monitoring analysis method and device of intelligent sound box and electronic equipment

Info

Publication number
CN111772583B
CN111772583B (application CN202010612878.6A)
Authority
CN
China
Prior art keywords
user
image
audio
curve
period
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010612878.6A
Other languages
Chinese (zh)
Other versions
CN111772583A (en)
Inventor
赵涛涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Shanghai Xiaodu Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Shanghai Xiaodu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd, Shanghai Xiaodu Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010612878.6A priority Critical patent/CN111772583B/en
Publication of CN111772583A publication Critical patent/CN111772583A/en
Application granted granted Critical
Publication of CN111772583B publication Critical patent/CN111772583B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 - Other medical applications
    • A61B5/4806 - Sleep evaluation
    • A61B5/4815 - Sleep quality

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The application discloses a sleep monitoring analysis method and device for an intelligent sound box, and an electronic device, relating to the technical fields of artificial intelligence, computer vision, and voice interaction. The specific implementation scheme is as follows: acquiring multi-frame images and audio collected by an intelligent sound box when a user sleeps; generating an image curve of the user's sleep according to the multi-frame images; generating an audio curve of the user's sleep according to the audio; obtaining sleep quality information of the user according to the image curve and the audio curve; and outputting sleep information of the user, wherein the sleep information comprises at least one of the following: the image curve, the audio curve, the sleep quality information of the user, the images of the user when sleeping, and the audio of the user when sleeping. The method enables the user to learn the detailed change process of the sleep condition and the external factors affecting sleep, thereby helping the user find better ways to improve sleep and greatly improving the user experience.

Description

Sleep monitoring analysis method and device of intelligent sound box and electronic equipment
Technical Field
The embodiments of the application relate to the technical fields of artificial intelligence, computer vision, and voice interaction within the field of computer technology, and in particular to a sleep monitoring analysis method and device for an intelligent sound box, and an electronic device.
Background
With the continuous development of society, the pace of life has gradually accelerated, and some people face heavy work or life pressure, which can affect their sleep. As a result, more and more people want to monitor their sleep in some way and then adjust their work and life accordingly.
In the prior art, a user may use a sports wristband, a sports watch, or a similar device to monitor sleep. Taking a sports wristband as an example, the user wears it while sleeping; its sensor senses the user's slight movements, and a specific algorithm derives the user's sleep condition, which may include, for example, the period of deep sleep, the period of light sleep, the period of being awake, and so on.
However, with the prior-art methods, the user can learn neither the change process of the sleep condition nor the external factors affecting sleep, resulting in a poor user experience.
Disclosure of Invention
The application provides a sleep monitoring analysis method and device for an intelligent sound box, and an electronic device, which enable a user to learn the change process of the sleep condition and the external factors affecting sleep.
According to an aspect of the present application, there is provided a sleep monitoring analysis method of an intelligent sound box, including: acquiring multi-frame images and audio acquired by an intelligent sound box when a user sleeps; generating an image curve of the sleep of the user according to the multi-frame images; generating an audio curve of sleeping of the user according to the audio; obtaining sleep quality information of the user according to the image curve and the audio curve; outputting sleep information of the user, wherein the sleep information comprises at least one of the following: the image curve, the audio curve, the sleep quality information of the user, the image of the user when sleeping, and the audio of the user when sleeping.
According to another aspect of the present application, there is provided a sleep monitoring analysis method of an intelligent sound box, including:
collecting images and audio of a user during sleeping; sending the image and the audio of the sleeping user to a server; receiving sleep information of the user sent by the server, wherein the sleep information comprises at least one of the following: the image curve, the audio curve, the sleep quality information of the user, the image of the user when sleeping, and the audio of the user when sleeping; and outputting the sleeping information of the user by using at least one of a display screen and an audio output module of the intelligent sound box.
According to another aspect of the present application, there is provided a sleep monitoring analysis device of an intelligent sound box, including: the acquisition module is used for acquiring multi-frame images and audio acquired by the intelligent sound box when the user sleeps; the generating module is used for generating an image curve of the sleep of the user according to the multi-frame images; generating an audio curve of sleeping of the user according to the audio; the processing module is used for obtaining the sleep quality information of the user according to the image curve and the audio curve; the output module is used for outputting the sleep information of the user, and the sleep information comprises at least one of the following: the image curve, the audio curve, the sleep quality information of the user, the image of the user when sleeping, and the audio of the user when sleeping.
According to another aspect of the present application, there is provided a sleep monitoring analysis device of an intelligent sound box, including: the acquisition module is used for acquiring images and audio of the user during sleeping; the sending module is used for sending the image and the audio of the user during sleeping to a server; the receiving module is used for receiving the sleep information of the user sent by the server, and the sleep information comprises at least one of the following: the image curve, the audio curve, the sleep quality information of the user, the image of the user when sleeping, and the audio of the user when sleeping; and the output module is used for outputting the sleeping information of the user by using at least one of the display screen of the intelligent sound box and the audio output module.
According to another aspect of the present application, there is provided an electronic device including:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect or the second aspect.
According to another aspect of the present application, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the first aspect or the second aspect.
According to another aspect of the present application, there is provided a computer program product comprising: a computer program stored in a readable storage medium, from which it can be read by at least one processor of an electronic device, the at least one processor executing the computer program causing the electronic device to perform the method of the first aspect described above. The electronic device may be, for example, a server.
According to another aspect of the present application, there is provided a computer program product comprising: a computer program stored in a readable storage medium from which at least one processor of an electronic device can read, the at least one processor executing the computer program causing the electronic device to perform the method of the second aspect described above. The electronic device may be, for example, a smart speaker.
With the technology of this application, a user can learn the detailed change process of the sleep condition; by reviewing the images and audio recorded while sleeping, the user can also learn the external factors affecting sleep. This helps the user find better ways to improve sleep and greatly improves the user experience.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
Fig. 1 is an exemplary system architecture diagram of a sleep monitoring analysis method for an intelligent sound box according to an embodiment of the present application;
Fig. 2 is an interaction flow chart of a sleep monitoring analysis method of an intelligent sound box provided in an embodiment of the present application;
Fig. 3 is an exemplary diagram of an image curve and an audio curve;
Fig. 4 is an interface diagram of the intelligent sound box outputting sleep quality information and images of a user in a target period;
Fig. 5 is a block diagram of a sleep monitoring analysis device of an intelligent sound box according to an embodiment of the present application;
Fig. 6 is a block diagram of a sleep monitoring analysis device of another intelligent sound box according to an embodiment of the present application;
Fig. 7 is a block diagram of an electronic device of a method of sleep monitoring analysis of a smart speaker according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the prior art, when a device such as a sports wristband or a sports watch is used to monitor a user's sleep, an internal sensor senses the user's slight movements and a specific algorithm derives sleep conditions such as the deep-sleep time, light-sleep time, and awake time. In this way, the user cannot learn the detailed change process of the sleep condition or the external factors affecting sleep, and may not find better ways to improve sleep, resulting in a poor user experience.
In view of the above problems, this application provides a sleep monitoring analysis method for an intelligent sound box, which can be applied to the fields of computer vision and voice within computer technology.
Fig. 1 is an exemplary system architecture diagram of the sleep monitoring analysis method for an intelligent sound box provided by an embodiment of this application. As shown in fig. 1, the embodiment may involve an intelligent sound box and a server. The intelligent sound box is provided with a camera, a microphone, a display screen, and a loudspeaker. Before falling asleep, the user can instruct the intelligent sound box to start sleep monitoring, and the intelligent sound box then enters sleep monitoring mode. While the user sleeps, the camera of the intelligent sound box collects images of the user and the microphone collects the audio of the sleep environment. The intelligent sound box sends the collected images and audio to the server; the server analyzes them using the method of the embodiments of this application to obtain the user's sleep information and sends that information back to the intelligent sound box. The intelligent sound box can then output the sleep information to the user through its display screen and loudspeaker, and the user can use this information to find better ways to improve sleep.
Fig. 2 is an interaction flow chart of a sleep monitoring analysis method of an intelligent sound box provided in an embodiment of the present application, and as shown in fig. 2, an interaction process between the intelligent sound box and a server includes:
s201, the intelligent sound box collects multi-frame images and audio when a user sleeps.
The user can place the intelligent sound box in a position from which it can capture the user's whole body. For example, before falling asleep the user may speak an instruction such as "start sleep monitoring" to the intelligent sound box, and the intelligent sound box enters sleep monitoring mode after receiving the instruction. In sleep monitoring mode, the camera of the intelligent sound box continuously collects images of the sleeping user, either at a preset collection period or based on changes in the user's movements, while the microphone continuously collects the audio of the user's sleep environment.
The image acquisition process may include, for example: when the user makes no new movement, the first frame image of the current posture is saved; when the user moves, one frame showing the new action is kept; and when the sleep environment changes, for example a pet runs in or a companion turns over and touches the user, one frame showing the new scene is saved.
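As an illustration only, a minimal sketch of such frame selection is shown below; the frame-difference criterion, the threshold value, and all names are assumptions rather than anything specified by this application.

```python
import numpy as np

# Hypothetical threshold: mean absolute pixel difference above which the
# user is assumed to have made a new action or the scene to have changed.
MOTION_THRESHOLD = 12.0

def select_sleep_frames(frames):
    """Keep only frames in which the posture or scene has changed.

    `frames` is an iterable of (timestamp, grayscale ndarray) pairs. The
    first frame is always kept; a later frame is kept only when it differs
    noticeably from the last kept frame (the user moved, a pet ran in, a
    companion turned over, etc.).
    """
    kept = []
    last = None
    for ts, img in frames:
        if last is None:
            kept.append((ts, img))  # first frame of the current posture
            last = img
            continue
        diff = np.mean(np.abs(img.astype(float) - last.astype(float)))
        if diff >= MOTION_THRESHOLD:  # new action or scene change
            kept.append((ts, img))
            last = img
    return kept
```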
S202, the intelligent sound box sends multi-frame images and audio when the user sleeps to the server.
Correspondingly, the server acquires multi-frame images and audio when the user sleeps.
After collecting a frame image, the intelligent sound box may send that frame, together with the audio recorded during the period since the previous frame, to the server; that is, the data collected in each collection period are sent to the server as they are produced. Alternatively, the intelligent sound box may buffer the data of each collection period and, after the user instructs it to stop sleep monitoring, send the data of the whole sleep session to the server in batches. The specific way the intelligent sound box sends data to the server is not limited here.
After receiving the images and audio sent by the intelligent sound box, the server has obtained the images and audio of the user's sleep.
When sending the images and audio, the intelligent sound box also sends the collection time of each frame and of each audio segment; correspondingly, the server obtains the time at which each frame of image and each piece of audio was collected.
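For illustration only, one possible shape of a per-period upload is sketched below; the endpoint, payload layout, and encoding are assumptions, since the application does not prescribe a transport format.

```python
import json
import urllib.request

# Hypothetical server endpoint; not part of this application.
SERVER_URL = "http://sleep-server.example/upload"

def upload_collection_period(frame_jpeg: bytes, audio_wav: bytes,
                             frame_time: float, audio_start: float) -> None:
    """Send one collection period: the newly kept frame plus the audio
    recorded since the previous kept frame, with their collection times."""
    payload = {
        "frame_time": frame_time,    # when the frame was collected
        "audio_start": audio_start,  # start time of the audio segment
        "frame": frame_jpeg.hex(),   # hex-encoded image bytes
        "audio": audio_wav.hex(),    # hex-encoded audio bytes
    }
    req = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # response handling omitted in this sketch
```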
S203, the server generates an image curve of the sleep of the user according to the multi-frame images.
Optionally, the image curve may be a two-dimensional curve whose horizontal axis is time and whose vertical axis is amplitude, where the amplitude represents the magnitude of the user's motion during sleep. For example, at the moment when the user changes from lying on the back to lying on the side, i.e. makes a turn-over motion, the amplitude at that moment may change significantly in the image curve compared with the previous moment.
The process of generating an image curve of the sleep of the user from the above-described multi-frame images will be described in detail in the following embodiments.
S204, the server generates an audio curve of the sleep of the user according to the audio.
Optionally, the audio curve may be a two-dimensional curve whose horizontal axis is time and whose vertical axis is amplitude, where the amplitude represents the sound amplitude of the user's sleep environment. For example, at the moment when the user begins snoring, the amplitude at that moment may change significantly in the audio curve compared with the previous moment.
The process of generating an audio profile of the user's sleep from the above audio will be described in detail in the following embodiments.
Optionally, the image curve and the audio curve generated by the server have the same start time and end time.
The execution order of steps S203 and S204 is not limited; they may be performed in either order.
Fig. 3 is an exemplary diagram of an image curve and an audio curve. As shown in fig. 3, the horizontal axis of both curves represents time, the vertical axis of the image curve represents the motion amplitude of the user, and the vertical axis of the audio curve represents the sound amplitude of the environment in which the user is located. Taking the time t3 as an example, the amplitude of both the image curve and the audio curve at this time is significantly higher than at the previous time, which indicates that the user made a larger movement at time t3 and, at the same time, the sound in the user's sleep environment was louder at time t3.
S205, the server obtains sleep quality information of the user according to the amplitude of the image curve and the amplitude of the audio curve.
The amplitude of the image curve is used for representing the action amplitude of the user, and the amplitude of the audio curve is used for representing the sound amplitude of the sleeping environment of the user.
Wherein the larger the sound amplitude, the larger the volume.
The action amplitude of the user during sleeping and the sound amplitude of the sleeping environment of the user can influence and reflect the sleeping quality of the user, so that the server can analyze the sleeping quality information of the user based on the amplitude of the image curve and the amplitude of the audio curve. Specific analytical procedures will be described in detail in the examples below.
S206, the server sends sleep information of the user to the intelligent sound box. The sleep information includes at least one of: the image curve, the audio curve, the sleep quality information of the user, an image of the user when sleeping, and audio of the user when sleeping.
Correspondingly, the intelligent sound box receives sleeping information of the user sent by the server.
Optionally, the server may, according to the amplitude of the image curve, select a subset of the images received from the intelligent sound box to send back to it. For example, if the amplitude of the image curve remains unchanged within a certain period, the server may send only one frame collected in that period. Correspondingly, the server may also select a subset of the audio to send back according to the amplitude of the audio curve.
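Purely as a sketch of the selection described above (the flatness test and all names are assumptions), the server side could pick one frame per flat segment of the image curve like this:

```python
def select_representative_frames(curve, frames, flat_eps=1e-3):
    """Pick one frame per stretch in which the image curve stays flat.

    `curve`  : list of (timestamp, amplitude) points of the image curve.
    `frames` : dict mapping a timestamp to the frame collected at that time.
    Consecutive points whose amplitudes differ by less than `flat_eps` are
    treated as one flat period, and only its first frame is returned.
    """
    selected = []
    prev_amp = None
    for ts, amp in curve:
        is_new_segment = prev_amp is None or abs(amp - prev_amp) >= flat_eps
        if is_new_segment and ts in frames:
            selected.append(frames[ts])  # first frame of this segment
        prev_amp = amp
    return selected
```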
S207, the intelligent sound box outputs the sleeping information of the user by using at least one of a display screen and an audio output module of the intelligent sound box.
The audio output module may be a speaker.
Optionally, the smart speaker may select to use one or both of the display screen and the audio output module to output sleep information of the user based on the user's instruction.
In this embodiment, the intelligent sound box collects images and audio while the user sleeps; the server generates the image curve and the audio curve of the user's sleep from them and analyzes the two curves to obtain the user's sleep quality information. After the server sends the image curve, the audio curve, the sleep quality information, the images, and the audio back to the intelligent sound box, the intelligent sound box can output this sleep information to the user as needed. The user can thus learn the detailed change process of the sleep condition and, by reviewing the images and audio recorded during sleep, also learn the external factors affecting sleep. This helps the user find better ways to improve sleep and greatly improves the user experience.
The above embodiment describes a process in which the intelligent sound box collects multi-frame images and audio while the user sleeps, the server derives the user's sleep information from them, and the intelligent sound box outputs that information to the user. In practice, the images and audio may also be collected by another device with image and audio acquisition capabilities. Moreover, after collecting the images and audio, the intelligent sound box may derive and output the user's sleep information directly, without sending the data to a server for processing; in that case, the intelligent sound box performs the processing attributed to the server in this and the following embodiments, which is not repeated here.
The following describes the process of outputting the sleep information of the user by the intelligent speaker in step S207.
Optionally, the intelligent sound box may display the image curve of the user's sleep and the audio curve of the user's sleep on the display screen, and, based on the user's indication information for a target period of the image curve or the audio curve, output at least one of the sleep quality information of the user in the target period, the images of the target period, and the audio of the target period.
In this embodiment, at least one of sleep quality information of a target period, an image of the target period and audio of the target period is output based on indication information of a user, so that the output information can be matched with requirements of the user, and experience of the user is further improved.
The target period may refer to any period of a total period corresponding to the image curve and the audio curve.
For example, the indication information may be a click or touch operation by the user on the image curve or the audio curve of the target period.
Based on the user's indication information, the intelligent sound box may output at least one of the sleep quality information of the target period, the images of the target period, and the audio of the target period. For example, if the user clicks the image curve of the target period, the sleep quality information and the images of that period may be output; if the user clicks the audio curve of the target period, the audio of that period may be output. Alternatively, clicking either curve may output the sleep quality information, the images, and the audio of the target period at the same time.
The following description takes outputting all of this information as an example; outputting only part of it follows the output mode of the corresponding information and is not described separately.
Optionally, the intelligent sound box may display the sleep quality information of the user in the target period and the image of the target period on the display screen, and simultaneously output the audio of the target period through the audio output module.
Fig. 4 is a schematic diagram of the interface on which the intelligent sound box outputs the sleep quality information and images of the user in the target period. As shown in fig. 4, after the user says "show last night's sleep" to the intelligent sound box, it first displays the image curve and the audio curve on the display screen. After the user clicks the segment of the image curve for the period t4-t5, the intelligent sound box displays the sleep quality information "light sleep" and the images of the period t4-t5 above the curve, and at the same time plays the audio of the period t4-t5 through the loudspeaker. From the images, the user can see in detail his or her own movements during this period and other views of the bedroom; for example, an image may show that a pet came to the user's side, so the user learns that the poor sleep quality in this period was caused by the pet. As another example, if the audio of this period contains loud traffic noise, the user learns that the poor sleep quality in this period was caused by that noise.
In this embodiment, the intelligent sound box displays the sleep quality information and the images of the target period on the display screen and outputs the audio of the target period through the audio output module, so that the user can learn the sleep data in detail and comprehensively.
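As a minimal sketch of the target-period interaction only (the application does not prescribe a UI framework; every class and method name below is an assumption):

```python
class SleepReportView:
    """Shows the curves and reacts to a tap on a target period."""

    def __init__(self, screen, loudspeaker, report):
        self.screen = screen            # display-screen abstraction
        self.loudspeaker = loudspeaker  # audio output module
        self.report = report            # per-period quality info, images, audio

    def on_period_tapped(self, start, end):
        # Look up what the server associated with the tapped target period.
        quality = self.report.quality_for(start, end)  # e.g. "light sleep"
        images = self.report.images_for(start, end)
        audio = self.report.audio_for(start, end)

        # Show text and images above the curves, and play the audio.
        self.screen.show_text(quality)
        self.screen.show_images(images)
        self.loudspeaker.play(audio)
```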
The following describes a process in which the server obtains the sleep quality information of the user according to the amplitude of the image profile and the amplitude of the audio profile in step S205.
As an alternative embodiment, if the difference between the amplitude variation of the image curve and the amplitude variation of the audio curve in the first period is smaller than a first preset value, and the amplitude variation of the image curve in the first period is smaller than a second preset value, it is determined that the sleep quality of the user in the first period is a first level.
The first period is any period in the sleeping process of the user.
Optionally, each time in the image curve has an amplitude, and the amplitude variation of the image curve in the first period can be known based on the amplitude of the start time and the amplitude of the end time of the first period. Accordingly, each time in the audio curve has an amplitude, and the amplitude variation of the audio curve in the first period can be known based on the amplitude of the starting time and the amplitude of the ending time of the first period.
If the difference between the amplitude variation of the image curve and that of the audio curve in the first period is smaller than the first preset value, the user's movements in that period are consistent with the changes of sound in the environment; if the amplitude variation of the image curve in the first period is smaller than the second preset value, the user made no large movements in that period. When both conditions are met, the user is in a relatively quiet environment and sleeping calmly, so the sleep quality of the user in the first period can be determined to be the first level.
Optionally, the first level may be, for example, "good", "fairly good", and so on.
Taking the image curve and the audio curve of fig. 3 as an example, in the period t0-t1 the amplitude variation of both curves is 0 and the amplitude of the image curve is small, so the sleep quality of the user in the period t0-t1 can be determined to be the first level, which represents higher sleep quality.
In this embodiment, according to the amplitude variation of the image curve and the amplitude variation of the audio curve in the first period, the sleep quality in the first period may be obtained as the first level.
As another alternative embodiment, if the amplitude variation of the image curve of the first period is greater than or equal to the third preset value and the amplitude variation of the audio curve of the first period is greater than or equal to the fourth preset value, it is determined that the sleep quality of the user in the first period is the second level.
Wherein the second level characterizes a sleep quality lower than the first level.
The second level may be, for example, "fair", "poor", and so on.
Wherein the third preset value may be a value greater than or equal to the second preset value.
If the amplitude variation of the image curve in the first period is greater than or equal to the third preset value, the user made large movements in that period; if the amplitude variation of the audio curve in the first period is greater than or equal to the fourth preset value, the surrounding environment is noisy. When both conditions are met, the user's sleep is likely affected by the noisy environment or other causes, so the sleep quality of the user in the first period can be determined to be the second level, which is lower than the first level.
Taking the image curve and the audio curve of fig. 3 as an example, the amplitude variation of both curves is large in the period t2-t3, so the sleep quality of the user in the period t2-t3 can be determined to be the second level, which represents poor sleep quality.
In this embodiment, according to the amplitude variation of the image curve and the amplitude variation of the audio curve in the first period, the sleep quality in the first period may be obtained as the second level.
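The two rules above can be condensed into a short sketch; the preset values used here are placeholders, not values given by the application:

```python
def classify_period(image_delta, audio_delta,
                    first_preset=0.1, second_preset=0.2,
                    third_preset=0.5, fourth_preset=0.5):
    """Classify sleep quality in one period from curve amplitude variations.

    image_delta / audio_delta: amplitude variation of the image / audio curve
    in the period. Returns "first level" (better sleep), "second level"
    (worse sleep), or None when neither rule applies.
    """
    # Rule 1: movement consistent with ambient sound, and no large movement.
    if abs(image_delta - audio_delta) < first_preset and image_delta < second_preset:
        return "first level"
    # Rule 2: large movement together with a noisy environment.
    if image_delta >= third_preset and audio_delta >= fourth_preset:
        return "second level"
    return None
```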
The following describes the process of generating an image curve of the user's sleep from the multi-frame images in step S203.
As an alternative embodiment, the server may generate the image curves of the time periods in which the first image and the second image are located according to the motion information of the user in the first image and the motion information of the user in the second image.
The first image and the second image are two adjacent frames of images in the multi-frame image, the starting time of the period where the first image and the second image are located is the time for collecting the first image, and the ending time of the period where the first image and the second image are located is the time for collecting the second image.
Optionally, the amplitude of the user's motion in the first image and the amplitude of the user's motion in the second image may each be determined; the image curve of the period covering the first and second images is generated from these two amplitudes, and the difference between the amplitude of the motion in the second image and that in the first image is taken as the amplitude variation of the image curve over that period.
The first image is acquired earlier than the second image. The first image contains the user's action at the moment it was acquired, and the amplitude of that action can be obtained by performing image analysis on the first image; likewise, the second image contains the user's action at the moment it was acquired, and the amplitude of that action can be obtained by performing image analysis on the second image. Connecting the two amplitudes yields the image curve of the period in which the first and second images are located, and the amplitude variation of this curve segment is the difference between the amplitude of the user's motion in the second image and that in the first image.
In this embodiment, based on the magnitude of the motion of the user in the first image and the magnitude of the motion of the user in the second image, an image curve and the magnitude variation of the image curve can be generated, so that the generated image curve can accurately represent the motion of the user in the sleeping process.
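A minimal sketch of this per-pair construction follows; the single-image motion-amplitude estimator is an assumption, since the application does not specify how the amplitude is computed from an image:

```python
import numpy as np

def action_amplitude(image):
    """Assumed placeholder for the image analysis that rates the magnitude of
    the user's action in one frame (in practice this might come from pose
    estimation); here it is faked as the normalized spread of pixel values."""
    return float(np.std(image) / 255.0)

def image_curve_segment(first, second, t_first, t_second):
    """Build the image-curve segment for the period [t_first, t_second].

    Returns the two curve points and the amplitude variation of the segment,
    i.e. amplitude(second image) minus amplitude(first image)."""
    a1 = action_amplitude(first)
    a2 = action_amplitude(second)
    points = [(t_first, a1), (t_second, a2)]  # connected to form the segment
    return points, a2 - a1
```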
The following describes the process in which the server generates the audio curve of the user's sleep from the audio in step S204.
As an alternative embodiment, the server may generate the audio profile for the second period from the sound amplitude of the start time of the second period and the sound amplitude of the end time of the second period.
In addition, a difference between the sound amplitude at the end time of the second period and the sound amplitude at the start time of the second period may also be used as the amplitude variation of the audio profile of the second period.
The second period is any period in the sleeping process of the user.
Optionally, the audio profile of the second period may be obtained by connecting the sound amplitude of the start time and the sound amplitude of the end time of the second period. The amplitude variation of the audio profile of the period is the difference between the sound amplitude of the end time of the second period and the sound amplitude of the start time of the second period.
In this embodiment, based on the sound amplitude of the start time of the second period and the sound amplitude of the end time of the second period, an audio curve and the amplitude variation of the audio curve can be generated, so that the generated audio curve can accurately represent the sound information of the environment where the user is located.
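A corresponding sketch for the audio curve; the RMS-based amplitude estimate is an assumption, as the application only requires a sound amplitude at the start and end of the second period:

```python
import numpy as np

def sound_amplitude(samples):
    """Assumed estimator: RMS of the audio samples around a given instant."""
    samples = np.asarray(samples, dtype=float)
    return float(np.sqrt(np.mean(samples ** 2)))

def audio_curve_segment(start_samples, end_samples, t_start, t_end):
    """Build the audio-curve segment for the second period [t_start, t_end]
    and return its amplitude variation (end amplitude minus start amplitude)."""
    a_start = sound_amplitude(start_samples)
    a_end = sound_amplitude(end_samples)
    points = [(t_start, a_start), (t_end, a_end)]  # connected to form the segment
    return points, a_end - a_start
```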
Fig. 5 is a block diagram of a sleep monitoring analysis device of an intelligent sound box according to an embodiment of the present application, as shown in fig. 5, the device includes:
the acquisition module 501 is configured to acquire multiple frames of images and audio acquired by the smart speaker when the user sleeps.
The generating module 502 is configured to generate an image curve of the user's sleep according to the multi-frame images, and to generate an audio curve of the user's sleep according to the audio.
And a processing module 503, configured to obtain sleep quality information of the user according to the image curve and the audio curve.
An output module 504, configured to output sleep information of the user, where the sleep information includes at least one of the following: the image curve, the audio curve, the sleep quality information of the user, the image of the user when sleeping, and the audio of the user when sleeping.
As an alternative embodiment, the processing module 503 is specifically configured to:
and obtaining sleep quality information of the user according to the amplitude of the image curve and the amplitude of the audio curve, wherein the amplitude of the image curve is used for representing the action amplitude of the user, and the amplitude of the audio curve is used for representing the sound amplitude of the sleep environment of the user.
As an alternative embodiment, the processing module 503 is specifically configured to:
if the difference between the amplitude variation of the image curve and the amplitude variation of the audio curve in the first period is smaller than a first preset value, and the amplitude variation of the image curve in the first period is smaller than a second preset value, determining that the sleep quality of the user in the first period is a first grade, wherein the first period is any period in the sleep process of the user.
As an alternative embodiment, the processing module 503 is specifically configured to:
if the amplitude variation of the image curve in the first period is greater than or equal to a third preset value, and the amplitude variation of the audio curve in the first period is greater than or equal to a fourth preset value, determining that the sleep quality of the user in the first period is a second level, wherein the sleep quality represented by the second level is lower than the sleep quality represented by the first level.
As an alternative embodiment, the generating module 502 is specifically configured to:
generating image curves of time periods of the first image and the second image according to action information of a user in the first image and action information of the user in the second image, wherein the first image and the second image are two adjacent frames of images in the multi-frame image, the starting time of the time periods of the first image and the second image is the time for collecting the first image, and the ending time of the time periods of the first image and the second image is the time for collecting the second image.
As an alternative embodiment, the generating module 502 is specifically configured to:
determining the amplitude of the action of the user in the first image and the amplitude of the action of the user in the second image respectively; generating image curves of time periods where the first image and the second image are located based on the amplitude of the actions of the user in the first image and the amplitude of the actions of the user in the second image, and taking the difference value between the amplitude of the actions of the user in the second image and the amplitude of the actions of the user in the first image as the amplitude variation of the image curves of the time periods where the first image and the second image are located.
As an alternative embodiment, the generating module 502 is specifically configured to:
and generating an audio curve of a second time period according to the sound amplitude of the starting time of the second time period and the sound amplitude of the ending time of the second time period, wherein the second time period is any time period in the sleeping process of the user.
As an alternative embodiment, the generating module 502 is further configured to:
and taking the difference value of the sound amplitude of the ending time of the second period and the sound amplitude of the starting time of the second period as the amplitude variation of the audio curve of the second period.
As an alternative embodiment, the obtaining module 501 is specifically configured to:
and acquiring multi-frame images and audio acquired by the intelligent sound box when the user sleeps.
Fig. 6 is a block diagram of another sleep monitoring analysis device for an intelligent sound box according to an embodiment of the present application, as shown in fig. 6, where the device includes:
the acquisition module 601 is configured to acquire an image and audio of a user during sleep.
And the sending module 602 is used for sending the image and the audio of the sleeping user to a server.
A receiving module 603, configured to receive sleep information of the user sent by the server, where the sleep information includes at least one of the following: the image curve, the audio curve, the sleep quality information of the user, the image of the user when sleeping, and the audio of the user when sleeping.
And the output module 604 is used for outputting the sleeping information of the user by using at least one of the display screen of the intelligent sound box and the audio output module.
As an alternative embodiment, the output module 604 is specifically configured to:
displaying an image curve of the sleep of the user and an audio curve of the sleep of the user on the display screen; and outputting at least one of sleep quality information of the user in the target period, an image of the target period and audio of the target period in response to the indication information of the image curve or the audio curve of the user for the target period.
As an alternative embodiment, the output module 604 is specifically configured to:
displaying sleep quality information of the user in the target period and an image of the target period on the display screen; and outputting the audio of the target period through the audio output module.
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
According to an embodiment of the present application, there is also provided a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any one of the embodiments described above. The electronic device may be, for example, a server or a smart box.
Fig. 7 is a block diagram of an electronic device according to a method of sleep monitoring analysis of a smart speaker according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein. Alternatively, the electronic device is intended to represent the aforementioned smart speakers.
As shown in fig. 7, the electronic device includes: one or more processors 701, memory 702, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executing within the electronic device, including instructions stored in or on memory to display graphical information of the GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 701 is illustrated in fig. 7.
Memory 702 is a non-transitory computer-readable storage medium provided herein. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method for sleep monitoring analysis of the intelligent sound box provided by the application. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of sleep monitoring analysis of a smart speaker provided herein.
The memory 702 is used as a non-transitory computer readable storage medium, and is used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules (e.g., the acquisition module 601, the transmission module 602, the receiving module 603, and the output module 604 shown in fig. 6) corresponding to the method for sleep monitoring analysis of the smart speaker in the embodiments of the present application. The processor 701 executes the non-transitory software programs, instructions, and modules stored in the memory 702 to perform various functional applications and data processing of the server, that is, a method for implementing sleep monitoring analysis of the smart speakers in the above method embodiment.
Memory 702 may include a program storage area and a data storage area; the program storage area may store an operating system and at least one application program required for a function, and the data storage area may store data created according to the use of the electronic device for the sleep monitoring analysis of the smart speaker, etc. In addition, the memory 702 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, memory 702 may optionally include memory located remotely from processor 701, which may be connected to the electronic device for the sleep monitoring analysis of the smart speaker via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the sleep monitoring analysis method of the intelligent sound box can further comprise: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703 and the output device 704 may be connected by a bus or otherwise, in fig. 7 by way of example.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device for the sleep monitoring analysis of the smart speaker; examples include a touch screen, keypad, mouse, track pad, touch pad, pointing stick, one or more mouse buttons, trackball, and joystick. The output device 704 may include a display device, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved, and are not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (14)

1. A sleep monitoring analysis method of an intelligent sound box comprises the following steps:
acquiring multi-frame images and audio when a user sleeps, wherein the multi-frame images comprise: when the user makes no other action, an image in which the current action is kept; when the user makes an action, an image of the new action; and when the sleep environment changes or another user turns over and touches the user, an image of the new action; and the audio is sound data in the sleep environment of the user;
Generating an image curve of the sleep of the user according to the multi-frame images;
generating an audio curve of sleeping of the user according to the audio;
obtaining sleep quality information of the user according to the image curve and the audio curve;
outputting sleep information of the user, wherein the sleep information comprises at least one of the following: the image curve, the audio curve, the sleep quality information of the user, the image of the user when sleeping, and the audio of the user when sleeping;
the step of obtaining sleep quality information of the user according to the image curve and the audio curve comprises the following steps:
if the difference between the amplitude variation of the image curve and the amplitude variation of the audio curve in the first period is smaller than a first preset value, and the amplitude variation of the image curve in the first period is smaller than a second preset value, determining that the sleep quality of the user in the first period is a first grade, wherein the first period is any period in the sleep process of the user;
if the amplitude variation of the image curve in the first period is greater than or equal to a third preset value, and the amplitude variation of the audio curve in the first period is greater than or equal to a fourth preset value, determining that the sleep quality of the user in the first period is a second level, wherein the sleep quality represented by the second level is lower than the sleep quality represented by the first level.
2. The method of claim 1, wherein the obtaining sleep quality information of the user from the image profile and the audio profile comprises:
and obtaining sleep quality information of the user according to the amplitude of the image curve and the amplitude of the audio curve, wherein the amplitude of the image curve is used for representing the action amplitude of the user, and the amplitude of the audio curve is used for representing the sound amplitude of the sleep environment of the user.
3. The method of claim 1, wherein the generating an image profile of the user's sleep from the multi-frame image comprises:
generating image curves of time periods of the first image and the second image according to action information of a user in the first image and action information of the user in the second image, wherein the first image and the second image are two adjacent frames of images in the multi-frame image, the starting time of the time periods of the first image and the second image is the time for collecting the first image, and the ending time of the time periods of the first image and the second image is the time for collecting the second image.
4. The method of claim 3, wherein the generating an image curve of a period in which the first image and the second image are located according to the motion information of the user in the first image and the motion information of the user in the second image includes:
Determining the amplitude of the action of the user in the first image and the amplitude of the action of the user in the second image respectively;
generating image curves of time periods where the first image and the second image are located based on the amplitude of the actions of the user in the first image and the amplitude of the actions of the user in the second image, and taking the difference value between the amplitude of the actions of the user in the second image and the amplitude of the actions of the user in the first image as the amplitude variation of the image curves of the time periods where the first image and the second image are located.
5. The method of any of claims 1-4, wherein the generating an audio curve of the sleep of the user according to the audio comprises:
generating an audio curve of a second period according to the sound amplitude at the starting time of the second period and the sound amplitude at the ending time of the second period, wherein the second period is any period in the sleep process of the user.
6. The method of claim 5, further comprising:
taking the difference between the sound amplitude at the ending time of the second period and the sound amplitude at the starting time of the second period as the amplitude variation of the audio curve of the second period.
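Claims 5 and 6 define the audio curve the same way, per period rather than per frame pair: the amplitude variation of a period is the sound amplitude at its end minus the sound amplitude at its start. A short sketch, assuming the boundary amplitudes have already been measured; the tuple layout is an assumption.

```python
from typing import List, Tuple

# Each input tuple: (start_time, end_time, start_amplitude, end_amplitude).
# Output: (start_time, end_time, amplitude_variation) per period, as in claims 5-6.
def audio_curve(periods: List[Tuple[float, float, float, float]]
                ) -> List[Tuple[float, float, float]]:
    return [(start, end, end_amp - start_amp)
            for start, end, start_amp, end_amp in periods]
```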
7. The method of claim 6, wherein the acquiring multi-frame images and audio while the user is sleeping comprises:
acquiring the multi-frame images and the audio collected by the intelligent sound box when the user sleeps.
8. A sleep monitoring analysis method of an intelligent sound box comprises the following steps:
collecting multi-frame images and audio when a user sleeps, wherein the multi-frame images comprise: an image in which the user keeps the current action when the user performs no other action; an image of a new action when the user acts; and an image of a new action when the sleep environment changes or another user turns over and touches the user; and the audio is sound data in the sleep environment of the user;
sending the images and audio of the user sleeping to a server, so that the server generates an image curve of the sleep of the user according to the multi-frame images, generates an audio curve of the sleep of the user according to the audio, determines that the sleep quality of the user in a first period is of a first level if the difference between the amplitude variation of the image curve and the amplitude variation of the audio curve in the first period is smaller than a first preset value and the amplitude variation of the image curve in the first period is smaller than a second preset value, wherein the first period is any period in the sleep process of the user, and determines that the sleep quality of the user in the first period is of a second level if the amplitude variation of the image curve in the first period is greater than or equal to a third preset value and the amplitude variation of the audio curve in the first period is greater than or equal to a fourth preset value, wherein the sleep quality represented by the second level is lower than the sleep quality represented by the first level;
receiving sleep information of the user sent by the server, wherein the sleep information comprises at least one of the following: the image curve, the audio curve, the sleep quality information of the user, the image of the user when sleeping, and the audio of the user when sleeping;
and outputting the sleep information of the user by using at least one of a display screen and an audio output module of the intelligent sound box.
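Claim 8 is the speaker-side half of the method: collect the night's images and audio, upload them, receive the analysis, and output it. The sketch below only shows this data flow; the server URL, endpoint, payload fields, and the use of the requests library are hypothetical stand-ins, not the vendor's actual protocol.

```python
import requests  # third-party HTTP client (pip install requests)

ANALYSIS_URL = "https://sleep-server.example.com/analyze"  # hypothetical endpoint


def upload_and_fetch(images: list, audio: bytes) -> dict:
    """Send the collected images and audio to the server and return the
    sleep information it produces (curves, quality info, media)."""
    payload = {
        "images": [frame.hex() for frame in images],  # hex-encode raw frame bytes
        "audio": audio.hex(),
    }
    response = requests.post(ANALYSIS_URL, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()  # e.g. {"image_curve": ..., "audio_curve": ..., ...}
```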
9. The method of claim 8, wherein the outputting the sleep information of the user by using at least one of the display screen and the audio output module of the intelligent sound box comprises:
displaying an image curve of the sleep of the user and an audio curve of the sleep of the user on the display screen;
and in response to indication information of the user for a target period of the image curve or the audio curve, outputting at least one of sleep quality information of the user in the target period, an image of the target period, and audio of the target period.
10. The method of claim 9, wherein the outputting at least one of sleep quality information of the user during the target period, an image of the target period, and audio of the target period comprises:
displaying sleep quality information of the user in the target period and an image of the target period on the display screen;
and outputting the audio of the target period through the audio output module.
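Claims 9 and 10 describe the interaction: both curves are shown on the speaker's screen, and when the user indicates a target period on either curve, the quality information and image for that period go to the screen while the audio plays through the speaker. A minimal dispatch sketch; the display and speaker objects and the shape of sleep_info are hypothetical.

```python
def on_period_selected(target_period, sleep_info, display, speaker):
    """Handle the user's indication of a target period on the image or
    audio curve (claims 9-10)."""
    quality = sleep_info["quality"].get(target_period)
    image = sleep_info["images"].get(target_period)
    audio = sleep_info["audio"].get(target_period)

    if quality is not None or image is not None:
        display.show(quality=quality, image=image)  # screen output (claim 10)
    if audio is not None:
        speaker.play(audio)                         # audio output module
```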
11. A sleep monitoring analysis device of an intelligent sound box, comprising:
the acquisition module is used for acquiring multi-frame images and audio when a user sleeps, wherein the multi-frame images are collected by the intelligent sound box and comprise: an image in which the user keeps the current action when the user performs no other action; an image of a new action when the user acts; and an image of a new action when the sleep environment changes or another user turns over and touches the user; and the audio is sound data in the sleep environment of the user;
the generating module is used for generating an image curve of the sleep of the user according to the multi-frame images; generating an audio curve of sleeping of the user according to the audio;
the processing module is used for obtaining the sleep quality information of the user according to the image curve and the audio curve;
the output module is used for outputting the sleep information of the user, and the sleep information comprises at least one of the following: the image curve, the audio curve, the sleep quality information of the user, the image of the user when sleeping, and the audio of the user when sleeping;
the processing module is specifically configured to: determine that the sleep quality of the user in a first period is of a first level if the difference between the amplitude variation of the image curve and the amplitude variation of the audio curve in the first period is smaller than a first preset value and the amplitude variation of the image curve in the first period is smaller than a second preset value, wherein the first period is any period in the sleep process of the user;
and determine that the sleep quality of the user in the first period is of a second level if the amplitude variation of the image curve in the first period is greater than or equal to a third preset value and the amplitude variation of the audio curve in the first period is greater than or equal to a fourth preset value, wherein the sleep quality represented by the second level is lower than the sleep quality represented by the first level.
12. A sleep monitoring analysis device of an intelligent sound box, comprising:
the acquisition module is used for acquiring multi-frame images and audio when a user sleeps, wherein the multi-frame images comprise: an image in which the user keeps the current action when the user performs no other action; an image of a new action when the user acts; and an image of a new action when the sleep environment changes or another user turns over and touches the user; and the audio is sound data in the sleep environment of the user;
the sending module is used for sending the images and audio of the user sleeping to a server, so that the server generates an image curve of the sleep of the user according to the multi-frame images, generates an audio curve of the sleep of the user according to the audio, determines that the sleep quality of the user in a first period is of a first level if the difference between the amplitude variation of the image curve and the amplitude variation of the audio curve in the first period is smaller than a first preset value and the amplitude variation of the image curve in the first period is smaller than a second preset value, wherein the first period is any period in the sleep process of the user, and determines that the sleep quality of the user in the first period is of a second level if the amplitude variation of the image curve in the first period is greater than or equal to a third preset value and the amplitude variation of the audio curve in the first period is greater than or equal to a fourth preset value, wherein the sleep quality represented by the second level is lower than the sleep quality represented by the first level;
the receiving module is used for receiving the sleep information of the user sent by the server, and the sleep information comprises at least one of the following: the image curve, the audio curve, the sleep quality information of the user, the image of the user when sleeping, and the audio of the user when sleeping;
and the output module is used for outputting the sleep information of the user by using at least one of the display screen and the audio output module of the intelligent sound box.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7 or any one of claims 8-10.
14. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-7 or any one of claims 8-10.
CN202010612878.6A 2020-06-30 2020-06-30 Sleep monitoring analysis method and device of intelligent sound box and electronic equipment Active CN111772583B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010612878.6A CN111772583B (en) 2020-06-30 2020-06-30 Sleep monitoring analysis method and device of intelligent sound box and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010612878.6A CN111772583B (en) 2020-06-30 2020-06-30 Sleep monitoring analysis method and device of intelligent sound box and electronic equipment

Publications (2)

Publication Number Publication Date
CN111772583A (en) 2020-10-16
CN111772583B (en) 2023-08-08

Family

ID=72761532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010612878.6A Active CN111772583B (en) 2020-06-30 2020-06-30 Sleep monitoring analysis method and device of intelligent sound box and electronic equipment

Country Status (1)

Country Link
CN (1) CN111772583B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106236013A (en) * 2016-06-22 2016-12-21 京东方科技集团股份有限公司 A kind of sleep monitor method and device
CN110477866A (en) * 2019-08-16 2019-11-22 百度在线网络技术(北京)有限公司 Detect method, apparatus, electronic equipment and the storage medium of sleep quality

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10357199B2 (en) * 2017-03-30 2019-07-23 Intel Corporation Sleep and environment monitor and recommendation engine
US20190279481A1 (en) * 2018-03-07 2019-09-12 Google Llc Subject detection for remote biometric monitoring

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106236013A (en) * 2016-06-22 2016-12-21 京东方科技集团股份有限公司 A kind of sleep monitor method and device
CN110477866A (en) * 2019-08-16 2019-11-22 百度在线网络技术(北京)有限公司 Detect method, apparatus, electronic equipment and the storage medium of sleep quality

Also Published As

Publication number Publication date
CN111772583A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
JP6143975B1 (en) System and method for providing haptic feedback to assist in image capture
CN107784357B (en) Personalized intelligent awakening system and method based on multi-mode deep neural network
WO2016103988A1 (en) Information processing device, information processing method, and program
JP6760271B2 (en) Information processing equipment, information processing methods and programs
WO2021017860A1 (en) Information processing method and apparatus, electronic device and storage medium
WO2017130486A1 (en) Information processing device, information processing method, and program
CN105122353A (en) Natural human-computer interaction for virtual personal assistant systems
US11720814B2 (en) Method and system for classifying time-series data
EP2922212A2 (en) Method, apparatus and system for controlling emission
CN113763958B (en) Voice wakeup method, voice wakeup device, electronic equipment and storage medium
WO2015068440A1 (en) Information processing apparatus, control method, and program
US9921796B2 (en) Sharing of input information superimposed on images
CN111755002B (en) Speech recognition device, electronic apparatus, and speech recognition method
CN111966212A (en) Multi-mode-based interaction method and device, storage medium and smart screen device
CN107016996B (en) Audio data processing method and device
CN115702993A (en) Rope skipping state detection method and electronic equipment
CN111916203A (en) Health detection method and device, electronic equipment and storage medium
WO2022001791A1 (en) Intelligent device interaction method based on ppg information
CN111772583B (en) Sleep monitoring analysis method and device of intelligent sound box and electronic equipment
CN111524123B (en) Method and apparatus for processing image
CN111243585B (en) Control method, device and equipment under multi-user scene and storage medium
WO2016143415A1 (en) Information processing apparatus, information processing method, and program
CN108837271B (en) Electronic device, output method of prompt message and related product
CN111160318B (en) Electronic equipment control method and device
CN112164396A (en) Voice control method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210512

Address after: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing

Applicant after: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) Co.,Ltd.

Applicant after: Shanghai Xiaodu Technology Co.,Ltd.

Address before: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing

Applicant before: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) Co.,Ltd.

GR01 Patent grant