CN110868495A - Message display method and device

Message display method and device

Info

Publication number
CN110868495A
Authority
CN
China
Prior art keywords
user
message
target message
playing
current state
Prior art date
Legal status
Pending
Application number
CN201810983623.3A
Other languages
Chinese (zh)
Inventor
孙永利
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201810983623.3A
Publication of CN110868495A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72484User interfaces specially adapted for cordless or mobile telephones wherein functions are triggered by incoming communication events
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72433User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for voice messaging, e.g. dictaphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to a message display method and device. The method includes: when a target message is received, acquiring a current state of a user; and when the current state of the user is a motion state, playing the target message by voice. With this technical solution, the user can learn the content of a message without checking the terminal, which avoids the situation where it is inconvenient for the user to check terminal messages while exercising and improves the user's experience of checking message notifications.

Description

Message display method and device
Technical Field
The present disclosure relates to the field of terminal technologies, and in particular, to a message display method and apparatus.
Background
With the continuing popularization of smart phones, the role they play in people's lives is becoming more and more important. Users can download and install different applications on a smart phone and thereby use and browse richer functions, such as chatting, shopping, and video. Generally, these applications relate to the specific functions the user needs, and they can also communicate over the network with their respective servers and receive the messages those servers push to the current user. That is, when a server has a notification message for the current user, it pushes the message to the user's mobile phone through the application the user has installed, and the phone plays a ring tone or vibrates to remind the user to tap and view the message in time on the application's user interface, for example a shopping promotion or the status of a smart device in the home.
Disclosure of Invention
The embodiments of the present disclosure provide a message display method and device. The technical solution is as follows:
according to a first aspect of the embodiments of the present disclosure, a message display method is provided, the method including:
when receiving a target message, acquiring the current state of a user;
and when the current state of the user is a motion state, the target message is played in a voice mode.
In one embodiment, the method further comprises:
collecting the sound volume of the environment where the user is located;
determining output volume according to the sound volume of the environment where the user is located;
the voice playing the target message comprises:
and playing the target message according to the output volume voice.
In one embodiment, the voice playing the target message comprises:
identifying text content of the target message;
and when the text content comprises preset keywords, the message source of the target message is played in a voice mode.
In one embodiment, when the text content does not include the preset keyword, the text content is played in a voice mode.
In one embodiment, the method further comprises:
and when the message of the preset application is received, determining that the received message is a target message.
In one embodiment, the obtaining the current state of the user includes:
acquiring physiological parameters of a user;
determining the current state of the user according to the physiological parameter.
According to a second aspect of the embodiments of the present disclosure, there is provided a message presentation apparatus, the apparatus comprising:
the acquisition module is used for acquiring the current state of the user when receiving the target message;
and the playing module is used for playing the target message in a voice mode when the current state of the user is a motion state.
In one embodiment, the apparatus further comprises:
the acquisition module is used for acquiring the sound volume of the environment where the user is located;
the first determining module is used for determining the output volume according to the sound volume of the environment where the user is located;
the playing module comprises:
and the first playing submodule is used for playing the target message according to the output volume voice.
In one embodiment, the playback module includes:
the recognition submodule is used for recognizing the text content of the target message;
and the second playing submodule is used for playing the message source of the target message in a voice mode when the text content comprises preset keywords.
In one embodiment, the third playing sub-module is configured to play the text content in a voice manner when the text content does not include a preset keyword.
In one embodiment, the apparatus further comprises:
and the second determining module is used for determining the received message as a target message when the message of the preset application is received.
In one embodiment, the obtaining module comprises:
the acquisition submodule is used for acquiring the physiological parameters of the user;
and the determining submodule is used for determining the current state of the user according to the physiological parameters.
According to a third aspect of the embodiments of the present disclosure, there is provided a message presentation apparatus, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
when receiving a target message, acquiring the current state of a user;
and when the current state of the user is a motion state, the target message is played in a voice mode.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps in the above-mentioned method.
The technical solution provided by the embodiments of the present disclosure can have the following beneficial effects: when a target message is received, the current state of the user can be acquired, and when the current state of the user is a motion state, the target message is played by voice. In this way, the user can learn the message content without checking the terminal, which avoids the situation where it is inconvenient for the user to check terminal messages while exercising and improves the user's experience of checking message notifications.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flow chart illustrating a message presentation method according to an example embodiment.
Fig. 2 is a schematic diagram of message presentation according to an exemplary embodiment.
Fig. 3 is a flow chart illustrating a message presentation method according to an example embodiment.
Fig. 4 is a block diagram illustrating a message presentation device according to an example embodiment.
Fig. 5 is a block diagram illustrating a message presentation device according to an example embodiment.
Fig. 6 is a block diagram illustrating a message presentation device according to an example embodiment.
Fig. 7 is a block diagram illustrating a message presentation device according to an example embodiment.
Fig. 8 is a block diagram illustrating a message presentation device according to an example embodiment.
Fig. 9 is a block diagram illustrating a message presentation device according to an example embodiment.
Fig. 10 is a block diagram illustrating a message presentation device according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In actual use of a mobile phone, if the user happens to be exercising, the user is very likely unable to check the phone in time, or may not hear the message notification prompt at all; as a result, the user often misses many useful messages and the immediacy of message notification is lost.
To solve this problem, in this embodiment, when a target message is received, the current state of the user may be acquired, and when the current state of the user is a motion state, the target message is played by voice. In this way, the user can learn the message content without checking the terminal, which avoids the situation where it is inconvenient for the user to check terminal messages while exercising and improves the user's experience of checking message notifications.
Fig. 1 is a flowchart illustrating a message presentation method according to an exemplary embodiment, where as shown in fig. 1, the message presentation method is used in a terminal and includes the following steps 101 and 102:
in step 101, upon receiving a target message, the current status of the user is obtained.
In step 102, when the current state of the user is a motion state, the target message is played in a voice mode.
Here, different applications may be installed on the user's terminal, and these applications may support message notification; that is, if the server of an application has a message to push to the user, the message content is sent to the user's terminal over the network connection with the terminal.
Here, when the terminal receives the target message, it can acquire the current state of the user. If the user is found to be in a motion state, it is inconvenient for the user to check the push message immediately, and a ring tone or vibration reminder might not even be noticed; playing the message by voice instead avoids the situation where it is inconvenient for the user to check phone messages while exercising.
Here, the user may carry a wearable device such as a smart band or a smart watch, and the user's terminal may establish a wireless connection, such as a Bluetooth connection, with the wearable device. The wearable device can detect the user's current state (for example, a sleep state or a motion state) and report it to the mobile phone, so that the user can browse their own state information on the terminal and other applications can obtain the user's current state. In this way, the terminal can obtain the current state of the user from the wearable device.
It should be noted that if the current state of the user acquired by the terminal is a sleep state, the terminal mutes itself and does not present the target message, so as not to disturb the user's sleep.
By way of example, Fig. 2 is a message presentation diagram illustrating a user 20 wearing a smart band 21 on the wrist and running on an indoor treadmill 22. The smart band 21 is bound to the user's terminal 23 (the terminal 23 may be placed on the treadmill 22 or elsewhere in the room), and the smart band 21 can detect that the user's current state is a motion state. Thus, when the terminal 23 receives a target message, for example a mail sent by Manager Wang with the title "XXXX" and the content "YYYYY", and the terminal 23 learns from the smart band 21 that the user's current state is a motion state, it can play by voice "A mail from Manager Wang has been received; the mail title is XXXX and the mail content is YYYYY". After hearing the voice playback, the user 20 knows that Manager Wang has sent a mail and what it contains, which avoids the situation where it is inconvenient for the user to check terminal messages while exercising. If the mail does not need to be handled urgently, the user 20 can continue exercising; if it does, the user can stop exercising to handle it, so the user does not miss urgent information or a processing deadline.
In this embodiment, when a target message is received, the current state of the user can be acquired, and when the current state of the user is a motion state, the target message is played by voice. In this way, the user can learn the message content without checking the terminal, which avoids the situation where it is inconvenient for the user to check terminal messages while exercising and improves the user's experience of checking message notifications.
In a possible implementation manner, the message presentation method may further include the following steps A1 and A2, and the voice playing of the target message in step 102 may be implemented as the following step A3.
In step A1, the sound volume of the environment where the user is located is collected.
In step A2, an output volume is determined according to the sound volume of the environment where the user is located.
In step A3, the target message is played by voice at the output volume.
Here, the user may be exercising in a noisy outdoor environment or in a quiet indoor environment. If the terminal always plays the target message at a fixed volume, the user may not hear the voice when outdoors in the noise, or other people may be disturbed when indoors in the quiet. Therefore, in this embodiment, the terminal may collect the sound volume of the environment where the user is located and determine the output volume accordingly. In general, the output volume may be greater than the ambient sound volume, for example higher than it by a fixed amount; alternatively, an output volume may be preset for each range of ambient sound volume, as long as the user can hear the voice clearly outdoors in noise and others are not disturbed indoors in quiet. Then, when playing the target message, the terminal plays it by voice at the determined output volume.
In this embodiment, the sound volume of the environment where the user is located can be collected and the output volume determined from it, so that the target message is played by voice at an output volume that the user can hear clearly in a noisy outdoor environment and that does not disturb others in a quiet indoor environment.
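Purely as an illustration, and not as part of the disclosed technical solution, the two volume-selection options described above might be sketched as follows; the fixed 15 dB offset, the range thresholds, and the function names are assumptions introduced here:

```kotlin
// Hypothetical sketch of step A2: choosing an output volume from the ambient
// sound volume. The 15 dB offset and the range table are illustrative only.
fun outputVolumeByOffset(ambientDb: Double, offsetDb: Double = 15.0): Double =
    ambientDb + offsetDb // output volume exceeds the ambient volume by a fixed amount

fun outputVolumeByRange(ambientDb: Double): Double = when {
    ambientDb < 40.0 -> 45.0 // quiet indoor environment: keep the voice low
    ambientDb < 60.0 -> 65.0 // ordinary environment
    else             -> 80.0 // noisy outdoor environment: raise the voice so it stays audible
}

fun main() {
    println(outputVolumeByOffset(55.0)) // 70.0
    println(outputVolumeByRange(35.0))  // 45.0
}
```

Either strategy meets the stated goal as long as the chosen output volume remains audible over the ambient noise yet low enough not to disturb others.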
In a possible implementation manner, the voice playing of the target message in step 102 in the above message presentation method may be implemented as the following steps B1 to B2.
In step B1, the text content of the target message is identified.
In step B2, when the text content includes a preset keyword, the message source of the target message is played by voice.
Here, after receiving the target message, the terminal can recognize the text content in the target message and then activate a voice device in the terminal, such as a speaker, to play that text content. However, if the text content of the target message involves the user's privacy, broadcasting it aloud may leak the user's private information and cause the user losses. Therefore, the user can preset in the terminal some keywords related to private information, such as "password" and "account". When the terminal recognizes that the text content of the target message includes a preset keyword, this indicates that the target message involves user privacy, and the terminal may then play only the message source of the target message. For example, if a mail message sent by the friend XX contains account and password information, the terminal may play only the source of the mail, such as "XX has sent a mail"; on hearing this, the user knows that the friend XX has sent a mail that may involve private information, and if the user considers the message urgent, the user can view it manually.
In this embodiment, the text content of the target message can be recognized, and when the text content includes a preset keyword, only the message source of the target message is played by voice; this prevents the user's privacy from being leaked and avoids losses to the user.
In a possible implementation, the message presentation method may further include the following step B3.
In step B3, when the preset keyword is not included in the text content, the text content is played in voice.
Here, when the terminal recognizes that the text content of the target message does not include any preset keyword, the message does not involve the user's privacy, and the terminal may play the text content by voice so that the user learns the detailed content of the message notification without manually checking the terminal.
In this embodiment, the text content is played by voice when the recognized text content does not include any preset keyword; the user's privacy is not leaked, and the user learns the detailed content of the message notification without having to check the terminal manually.
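As an illustrative sketch only (the keyword list, the data class, and the function names are assumptions introduced here, not text of this application), steps B1 to B3 could be combined as follows:

```kotlin
// Hypothetical sketch of steps B1-B3: if the text contains a preset privacy
// keyword, announce only the message source; otherwise announce the full text.
data class TargetMessage(val source: String, val text: String)

val presetKeywords = listOf("password", "account") // user-configured; illustrative values

fun announcementFor(msg: TargetMessage): String =
    if (presetKeywords.any { msg.text.contains(it, ignoreCase = true) })
        "New message from ${msg.source}"              // play only the message source
    else
        "New message from ${msg.source}: ${msg.text}" // play the full text content

fun main() {
    println(announcementFor(TargetMessage("XX", "Your account password is 123456")))
    println(announcementFor(TargetMessage("Manager Wang", "The meeting starts at 3 pm")))
}
```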
In a possible implementation, the message presentation method may further include the following step C1.
In step C1, when the message of the preset application is received, it is determined that the received message is the target message.
Here, various applications may be installed on the user's terminal, and the messages of some of them are unimportant and do not need to be broadcast by voice. If the terminal broadcast the messages of every application by voice, it would interfere with the user listening to the important messages and would also waste terminal resources. Therefore, the user can preset the applications whose messages should be treated as target messages.
In this embodiment, when a message of a preset application is received, the received message is determined to be a target message, and when the current state of the user is a motion state the target message is played by voice. The user can thus conveniently listen to important target messages, and because only target messages are played by voice, the waste of terminal resources is reduced.
In a possible implementation manner, the obtaining of the current state of the user in step 101 of the above message presentation method may be implemented as the following steps D1 and D2.
In step D1, physiological parameters of the user are acquired.
In step D2, the current state of the user is determined from the physiological parameter.
Here, the wearable device carried by the user can detect the user's physiological parameters, such as heart rate, movement speed, and arm-swing frequency, and can transmit these physiological parameters to the terminal through its wireless connection with the terminal, so that the terminal can acquire them.
Here, after acquiring the physiological parameters, the terminal may determine the user's current state from them; for example, when the heart rate exceeds a certain value, the arm-swing frequency exceeds a certain frequency, or the movement speed falls within a certain range, the current state of the user may be determined to be a motion state.
In this embodiment, the user's physiological parameters can be acquired and the user's current state determined from them, so the determination rests on an accurate basis.
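A minimal sketch of such a classification, assuming illustrative thresholds that are not taken from this application, might look like the following:

```kotlin
// Hypothetical sketch of steps D1-D2: classifying the user's state from the
// physiological parameters reported by a wearable device. All thresholds are
// illustrative assumptions, not values from this application.
enum class UserState { MOTION, SLEEP, IDLE }

data class Physio(val heartRateBpm: Int, val speedKmh: Double, val armSwingHz: Double)

fun stateOf(p: Physio): UserState = when {
    p.heartRateBpm > 110 || p.armSwingHz > 1.5 || p.speedKmh in 6.0..20.0 -> UserState.MOTION
    p.heartRateBpm < 55 && p.speedKmh == 0.0 -> UserState.SLEEP
    else -> UserState.IDLE
}

fun main() {
    println(stateOf(Physio(heartRateBpm = 130, speedKmh = 9.0, armSwingHz = 2.0))) // MOTION
    println(stateOf(Physio(heartRateBpm = 50, speedKmh = 0.0, armSwingHz = 0.0)))  // SLEEP
}
```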
The implementation is described in detail below by way of several embodiments.
Fig. 3 is a flowchart illustrating a message presentation method according to an exemplary embodiment. As shown in Fig. 3, the message presentation method may be implemented by a terminal and includes steps 301 to 308.
In step 301, the sound volume of the environment where the user is located is collected.
In step 302, an output volume is determined according to the volume of the sound of the environment in which the user is located.
In step 303, when the message of the preset application is received, it is determined that the received message is the target message.
In step 304, upon receipt of the targeted message, the physiological parameters of the user are acquired.
In step 305, a current state of the user is determined from the physiological parameter.
In step 306, when the current state of the user is a motion state, the text content of the target message is identified.
In step 307, when the text content includes a preset keyword, the message source of the target message is played by voice at the output volume.
In step 308, when the text content does not include any preset keyword, the text content is played by voice at the output volume.
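For illustration only, the following non-authoritative sketch strings steps 301 to 308 together; the preset application list, the privacy keywords, the motion thresholds, and the println stand-in for text-to-speech playback are all assumptions introduced here:

```kotlin
// Hypothetical end-to-end sketch of steps 301-308. The preset application list,
// keywords, thresholds, and the println stand-in for text-to-speech are all
// illustrative assumptions, not part of the application as filed.
data class IncomingMessage(val app: String, val source: String, val text: String)

val presetApps = setOf("mail", "smarthome")          // step 303: apps whose messages are target messages
val privacyKeywords = listOf("password", "account")  // steps 307-308: privacy keywords

fun isMoving(heartRateBpm: Int, speedKmh: Double): Boolean =  // steps 304-305
    heartRateBpm > 110 || speedKmh > 6.0                      // illustrative thresholds

fun outputVolume(ambientDb: Double): Double = ambientDb + 15.0 // steps 301-302

fun presentMessage(msg: IncomingMessage, heartRateBpm: Int, speedKmh: Double, ambientDb: Double) {
    if (msg.app !in presetApps) return                 // not a target message
    if (!isMoving(heartRateBpm, speedKmh)) return      // only announce while in motion
    val volume = outputVolume(ambientDb)
    val speech =                                       // steps 306-308
        if (privacyKeywords.any { msg.text.contains(it, ignoreCase = true) })
            "New message from ${msg.source}"
        else
            "New message from ${msg.source}: ${msg.text}"
    println("[volume $volume dB] $speech")             // stand-in for voice playback
}

fun main() {
    presentMessage(
        IncomingMessage("mail", "Manager Wang", "Title XXXX, content YYYYY"),
        heartRateBpm = 128, speedKmh = 9.5, ambientDb = 62.0
    )
}
```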
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Fig. 4 is a block diagram illustrating a message presentation apparatus, which may be implemented as part or all of an electronic device through software, hardware, or a combination of both, according to an example embodiment. As shown in fig. 4, the message presentation apparatus includes:
an obtaining module 401, configured to obtain a current state of a user when a target message is received;
a playing module 402, configured to play the target message in voice when the current state of the user is a motion state.
As a possible embodiment, fig. 5 is a block diagram illustrating a message presentation apparatus according to an exemplary embodiment. As shown in fig. 5, the message presentation apparatus disclosed above may further include an acquisition module 403 and a first determining module 404, and the playing module 402 may be configured to include a first playing sub-module 4021, where:
the acquisition module 403 is configured to acquire sound volume of an environment where the user is located;
a first determining module 404, configured to determine an output volume according to a sound volume of an environment where the user is located;
the first playing sub-module 4021 is configured to play the target message according to the output volume voice.
As a possible embodiment, fig. 6 is a block diagram of a message presentation apparatus according to an exemplary embodiment, and as shown in fig. 6, the above disclosed message presentation apparatus may further configure the playing module 402 to include an identification sub-module 4022 and a second playing sub-module 4023, where:
the identification sub-module 4022 is configured to identify text content of the target message;
the second playing sub-module 4023 is configured to play a message source of the target message in a voice manner when the text content includes a preset keyword;
as a possible embodiment, fig. 7 is a block diagram of a message presentation apparatus according to an exemplary embodiment, and as shown in fig. 7, the above disclosed message presentation apparatus may further configure the playing module 402 to include a third playing sub-module 4024, where:
the third playing sub-module 4024 is configured to play the text content in a voice manner when the text content does not include a preset keyword.
As a possible embodiment, fig. 8 is a block diagram illustrating a message presentation apparatus according to an exemplary embodiment, and as shown in fig. 8, the message presentation apparatus disclosed above may be further configured to include a second determining module 405, wherein:
a second determining module 405, configured to determine, when a message of a preset application is received, that the received message is a target message.
As a possible embodiment, fig. 9 is a block diagram of a message presentation apparatus according to an exemplary embodiment, and as shown in fig. 9, the above-disclosed message presentation apparatus may further configure the obtaining module 401 to include an obtaining sub-module 4011 and a determining sub-module 4012, where:
the obtaining sub-module 4011 is configured to obtain a physiological parameter of a user;
a determining sub-module 4012, configured to determine a current state of the user according to the physiological parameter.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 10 is a block diagram illustrating a message presentation apparatus adapted for use with a terminal device according to an exemplary embodiment. For example, the apparatus 1000 may be a mobile phone, a game console, a computer, a tablet device, a personal digital assistant, and the like.
The apparatus 1000 may include one or more of the following components: processing component 1001, memory 1002, power component 1003, multimedia component 1004, audio component 1005, input/output (I/O) interface 1006, sensor component 1007, and communications component 1008.
The processing component 1001 generally controls the overall operation of the device 1000, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1001 may include one or more processors 1020 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 1001 may include one or more modules that facilitate interaction between the processing component 1001 and other components. For example, the processing component 1001 may include a multimedia module to facilitate interaction between the multimedia component 1004 and the processing component 1001.
The memory 1002 is configured to store various types of data to support operations at the device 1000. Examples of such data include instructions for any application or method operating on device 1000, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1002 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power supply components 1003 provide power to the various components of device 1000. The power components 1003 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 1000.
The multimedia component 1004 includes a screen that provides an output interface between the device 1000 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1004 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 1000 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1005 is configured to output and/or input audio signals. For example, audio component 1005 includes a Microphone (MIC) configured to receive external audio signals when apparatus 1000 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1002 or transmitted via the communication component 1008. In some embodiments, audio component 1005 also includes a speaker for outputting audio signals.
The I/O interface 1006 provides an interface between the processing component 1001 and peripheral interface modules, such as keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1007 includes one or more sensors for providing various aspects of status assessment for the device 1000. For example, the sensor assembly 1007 can detect the open/closed status of the device 1000 and the relative positioning of components, such as the display and keypad of the device 1000; the sensor assembly 1007 can also detect a change in the position of the device 1000 or of a component of the device 1000, the presence or absence of user contact with the device 1000, the orientation or acceleration/deceleration of the device 1000, and a change in the temperature of the device 1000. The sensor assembly 1007 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1007 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1007 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1008 is configured to facilitate communications between the apparatus 1000 and other devices in a wired or wireless manner. The device 1000 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1008 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1008 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1000 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 1002 comprising instructions, executable by the processor 1020 of the device 1000 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium, wherein instructions of the storage medium, when executed by a processor of a device 1000, enable the device 1000 to perform the above message presentation method, the method comprising:
when receiving a target message, acquiring the current state of a user;
and when the current state of the user is a motion state, the target message is played in a voice mode.
In one embodiment, the method further comprises:
collecting the sound volume of the environment where the user is located;
determining output volume according to the sound volume of the environment where the user is located;
the voice playing the target message comprises:
and playing the target message according to the output volume voice.
In one embodiment, the voice playing the target message comprises:
identifying text content of the target message;
when the text content comprises preset keywords, the message source of the target message is played in a voice mode;
and when the text content does not comprise preset keywords, the text content is played in a voice mode.
In one embodiment, the method further comprises:
and when the message of the preset application is received, determining that the received message is a target message.
In one embodiment, the obtaining the current state of the user includes:
acquiring physiological parameters of a user;
determining the current state of the user according to the physiological parameter.
The present embodiment further provides a message display apparatus, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
when receiving a target message, acquiring the current state of a user;
and when the current state of the user is a motion state, the target message is played in a voice mode.
In one embodiment, the processor may be further configured to:
the method further comprises the following steps:
collecting the sound volume of the environment where the user is located;
determining output volume according to the sound volume of the environment where the user is located;
the voice playing the target message comprises:
and playing the target message according to the output volume voice.
In one embodiment, the processor may be further configured to:
the voice playing the target message comprises:
identifying text content of the target message;
when the text content comprises preset keywords, the message source of the target message is played in a voice mode;
and when the text content does not comprise preset keywords, the text content is played in a voice mode.
In one embodiment, the processor may be further configured to:
the method further comprises the following steps:
and when the message of the preset application is received, determining that the received message is a target message.
In one embodiment, the processor may be further configured to:
the acquiring the current state of the user comprises:
acquiring physiological parameters of a user;
determining the current state of the user according to the physiological parameter.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. A method for message presentation, the method comprising:
when receiving a target message, acquiring the current state of a user;
and when the current state of the user is a motion state, the target message is played in a voice mode.
2. The method of claim 1, further comprising:
collecting the sound volume of the environment where the user is located;
determining output volume according to the sound volume of the environment where the user is located;
the voice playing the target message comprises:
and playing the target message according to the output volume voice.
3. The method of claim 1, wherein the voice playing the target message comprises:
identifying text content of the target message;
and when the text content comprises preset keywords, the message source of the target message is played in a voice mode.
4. The method of claim 3, further comprising:
and when the text content does not comprise preset keywords, the text content is played in a voice mode.
5. The method of claim 1, further comprising:
and when the message of the preset application is received, determining that the received message is a target message.
6. The method of claim 1, wherein the obtaining the current status of the user comprises:
acquiring physiological parameters of a user;
determining the current state of the user according to the physiological parameter.
7. A message presentation device, the device comprising:
the acquisition module is used for acquiring the current state of the user when receiving the target message;
and the playing module is used for playing the target message in a voice mode when the current state of the user is a motion state.
8. The apparatus of claim 7, further comprising:
the acquisition module is used for acquiring the sound volume of the environment where the user is located;
the first determining module is used for determining the output volume according to the sound volume of the environment where the user is located;
the playing module comprises:
and the first playing submodule is used for playing the target message according to the output volume voice.
9. The apparatus of claim 7, wherein the playback module comprises:
the recognition submodule is used for recognizing the text content of the target message;
and the second playing submodule is used for playing the message source of the target message in a voice mode when the text content comprises preset keywords.
10. The apparatus of claim 9, wherein the play module further comprises:
and the third playing sub-module is used for playing the text content in a voice mode when the text content does not include preset keywords.
11. The apparatus of claim 7, further comprising:
and the second determining module is used for determining the received message as a target message when the message of the preset application is received.
12. The apparatus of claim 7, wherein the obtaining module comprises:
the acquisition submodule is used for acquiring the physiological parameters of the user;
and the determining submodule is used for determining the current state of the user according to the physiological parameters.
13. A message presentation device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
when receiving a target message, acquiring the current state of a user;
and when the current state of the user is a motion state, the target message is played in a voice mode.
14. A computer readable storage medium storing computer instructions, wherein the computer instructions, when executed by a processor, implement the steps of the method of any one of claims 1 to 6.
CN201810983623.3A 2018-08-27 2018-08-27 Message display method and device Pending CN110868495A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810983623.3A CN110868495A (en) 2018-08-27 2018-08-27 Message display method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810983623.3A CN110868495A (en) 2018-08-27 2018-08-27 Message display method and device

Publications (1)

Publication Number Publication Date
CN110868495A (en) 2020-03-06

Family

ID=69651166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810983623.3A Pending CN110868495A (en) 2018-08-27 2018-08-27 Message display method and device

Country Status (1)

Country Link
CN (1) CN110868495A (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104660795A (en) * 2013-11-25 2015-05-27 联想(北京)有限公司 Information processing method and electronic equipment
US20170064084A1 (en) * 2014-05-15 2017-03-02 Huawei Technologies Co., Ltd. Method and Apparatus for Implementing Voice Mailbox
CN104811533A (en) * 2015-03-24 2015-07-29 广东欧珀移动通信有限公司 Automatic voice message play method and system and intelligent sound box
CN105306671A (en) * 2015-07-02 2016-02-03 太仓埃特奥数据科技有限公司 Method and device for processing terminal message based on user status
CN105100472A (en) * 2015-07-23 2015-11-25 小米科技有限责任公司 Terminal processing method and apparatus
CN107026929A (en) * 2016-02-01 2017-08-08 广州市动景计算机科技有限公司 Reminding method, device and the electronic equipment of applicative notifications
CN105791545A (en) * 2016-02-24 2016-07-20 宇龙计算机通信科技(深圳)有限公司 Anti-disturbing method and device for terminal equipment
CN106027801A (en) * 2016-07-06 2016-10-12 广东小天才科技有限公司 Method and device for processing communication message and mobile device
CN106506804A (en) * 2016-09-29 2017-03-15 维沃移动通信有限公司 A kind of based reminding method of notification message and mobile terminal
CN107094203A (en) * 2017-04-26 2017-08-25 北京小米移动软件有限公司 Message treatment method, device and computer-readable recording medium
CN107896278A (en) * 2017-11-10 2018-04-10 珠海市魅族科技有限公司 Phonetic prompt method, device and the storage medium of text notification information

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111385423A (en) * 2020-03-12 2020-07-07 北京小米移动软件有限公司 Voice broadcasting method, voice broadcasting device and computer storage medium
CN111756930A (en) * 2020-06-28 2020-10-09 维沃移动通信有限公司 Communication control method, communication control device, electronic apparatus, and readable storage medium

Similar Documents

Publication Publication Date Title
EP3113466B1 (en) Method and device for warning
US20170034430A1 (en) Video recording method and device
US10334282B2 (en) Methods and devices for live broadcasting based on live broadcasting application
CN104902059A (en) Call reminding method and device
CN105898032B (en) method and device for adjusting prompt tone
CN107743244B (en) Video live broadcasting method and device
CN109087650B (en) Voice wake-up method and device
CN110691268B (en) Message sending method, device, server, mobile terminal and storage medium
CN105898573B (en) Multimedia file playing method and device
CN107454204B (en) User information labeling method and device
CN106406175B (en) Door opening reminding method and device
CN110475134A (en) A kind of comment content display method, device, electronic equipment and storage medium
CN112291631A (en) Information acquisition method, device, terminal and storage medium
CN109040651B (en) Video communication method and device
CN111181844A (en) Message processing method, device and medium
CN105721705B (en) Call quality control method and device and mobile terminal
JP6279815B2 (en) Method and apparatus for reporting status
CN111009239A (en) Echo cancellation method, echo cancellation device and electronic equipment
CN110868495A (en) Message display method and device
US11561278B2 (en) Method and device for processing information based on radar waves, terminal, and storage medium
CN106101441B (en) Terminal control method and device
CN109194808B (en) Volume adjusting method and device
CN104486489A (en) Method and device for outputting call background voice
CN107026941B (en) Method and device for processing reply of unread message
CN107677363B (en) Noise prompting method and intelligent terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2020-03-06