WO2018166241A1 - Method and device for generating presentation content - Google Patents

Method and device for generating presentation content (一种生成展示内容的方法与设备)

Info

Publication number
WO2018166241A1
WO2018166241A1 (PCT/CN2017/113456)
Authority
WO
WIPO (PCT)
Prior art keywords
information
target content
target
feature information
text
Prior art date
Application number
PCT/CN2017/113456
Other languages
English (en)
French (fr)
Inventor
钱超
Original Assignee
上海掌门科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海掌门科技有限公司
Priority to SG11201908577WA
Publication of WO2018166241A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser

Definitions

  • The present application relates to the field of communications technologies, and in particular to a technology for generating presentation content.
  • In the prior art, user content is typically produced either by the user's manual input on a physical or virtual device, or by analyzing certain keywords to label the user's current emotional state and thereby produce specific displayable content.
  • Manual input increases the burden on the user, while keyword analysis applies to the user's input in only a limited way and expresses the user's emotional state with low accuracy.
  • A method of generating presentation content comprises: acquiring body feature information of a target user when the target user inputs target content; determining a display effect of the target content based on the body feature information; and generating presentation content corresponding to the target content based on the display effect.
  • Determining the display effect of the target content based on the body feature information comprises: determining the mental state of the target user when inputting the target content based on the body feature information, and determining the display effect of the target content based on that mental state.
  • Determining the mental state comprises comparing the body feature information with sample body feature information and determining the mental state of the target user from the mental state corresponding to the sample body feature information.
  • Where the sample body feature information includes self-sample body feature information, the body feature information is compared with the self-sample body feature information, and the mental state of the target user when inputting the target content is determined from the mental state corresponding to the self-sample body feature information.
  • When the target content includes text information, the display effect of the target content includes performing text display processing on the text information, such as adding a text color, deforming the text font, and adding a text background color.
  • When the target content includes voice information, the display effect includes performing voice display processing on the voice information, such as adding a corresponding emoji, a background image, or background music.
  • When the target content includes video information, the display effect includes performing video display processing on the video information, such as adding a corresponding emoji, a corresponding image, or corresponding text.
  • When the target content includes picture information, the display effect includes performing picture display processing on the picture information, such as cropping, beautifying, or deforming the picture.
  • The body feature information comprises at least one of the following: physiological data information, which reflects the physiological characteristics of the target user in different mental states; and behavior data information, which reflects the behavior characteristics of the target user in different mental states.
  • The physiological data information comprises at least one of the following: pulse information; blood pressure information; heartbeat information.
  • The behavior data information comprises at least one of the following: facial expression information; input speed information; grip pressure information.
  • A device for generating presentation content includes: a first means configured to acquire body feature information of the target user when the target content is input; a second means configured to determine a display effect of the target content based on the body feature information; and a third means configured to generate presentation content corresponding to the target content based on the display effect.
  • The second means comprises: a first unit configured to determine, based on the body feature information, the mental state of the target user when inputting the target content; and a second unit configured to determine the display effect of the target content based on that mental state.
  • The first unit compares the body feature information with sample body feature information, which comprises at least one of the following: self-sample body feature information; other-sample body feature information; combined-sample body feature information.
  • The present application acquires the body feature information of a target user when the target content is input, determines a display effect of the target content based on the body feature information, and then generates the presentation content corresponding to the target content based on the display effect.
  • In this way, the corresponding presentation content is generated automatically when the user inputs the target content, and the presentation content expresses the target user's state better and more accurately, thereby improving the user experience.
  • The present application may further determine the mental state of the target user when inputting the target content based on the body feature information, and determine the display effect of the target content based on that mental state. Determining the display effect from the target user's mental state expresses well the psychological feelings and emotional state of the target user when inputting the target content, bringing users closer together and making long-distance interaction more lifelike.
  • The target content in the present application includes at least one of the following: text information, voice information, video information, picture information, and the like, so that whether the target user publishes text, voice, video, or pictures, the corresponding presentation content can be generated according to the target user's body feature information, enriching the user experience.
  • FIG. 1 shows a flow chart of a method of generating presentation content in accordance with an aspect of the present application;
  • FIG. 2 shows the corresponding display effect when the target content is text information, according to a preferred embodiment of the present application;
  • FIG. 3 shows the corresponding display effect when the target content is voice information, according to another preferred embodiment of the present application;
  • FIG. 4 shows the corresponding display effect when the target content is video information, according to still another preferred embodiment of the present application;
  • FIG. 5 shows a schematic diagram of a device for generating presentation content in accordance with another aspect of the present application.
  • In a typical configuration of the present application, the terminal, the devices of the service network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • The memory may include non-persistent memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
  • Computer-readable media include persistent and non-persistent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
  • FIG. 1 shows a flow chart of a method of generating presentation content according to an aspect of the present application, the method comprising: S1, acquiring body feature information of the target user when the target content is input; S2, determining a display effect of the target content based on the body feature information; and S3, generating presentation content corresponding to the target content based on the display effect.
  • In step S1, the body feature information of the target user at the time the target content is input is acquired. Preferably, the target content includes at least one of the following: text information, voice information, video information, and picture information. It should be understood that because the target user's emotional state varies when inputting the target content, the body feature information will differ accordingly; preferably, the body feature information includes, but is not limited to, physiological data information and behavior data information.
  • The physiological data information reflects the physiological characteristics of the target user in different mental states and includes, but is not limited to, pulse, blood pressure, and heartbeat information; the behavior data information reflects the behavior characteristics of the target user in different mental states and includes, but is not limited to, facial expression, input speed, and grip pressure information.
  • The body feature information listed here is merely an example; other existing or future body feature information, where applicable to the present application, also falls within its scope of protection and is incorporated herein by reference.
  • The body feature information of the target user when inputting the target content may be collected by corresponding hardware devices, including but not limited to a gyroscope, a pressure sensor, a pulse sensor, a blood pressure sensor, a temperature sensor, a blood glucose sensor, and a camera; or it may be obtained through the device on which the target content is input. These acquisition manners are merely examples; other existing or future manners of acquiring such information, where applicable to the present application, also fall within its scope of protection and are incorporated herein by reference.
  • In step S2, the display effect of the target content is determined based on the body feature information; here, different target content may correspond to different display effects.
  • When the target content includes text information, the display effect includes performing text display processing on the text information, wherein the text display processing includes adding a text color, deforming the text font, adding a text background color, adding a background image, adding background music, and adding text display special effects.
  • When the target content includes voice information, the display effect includes performing voice display processing on the voice information, wherein the voice display processing includes adding a corresponding emoji, a background image, or background music; the voice information can also be rendered as text, in which case the text display effects described above apply.
  • When the target content includes video information, the display effect includes performing video display processing on the video information, which includes, but is not limited to, adding a corresponding emoji, a corresponding image, or corresponding text.
  • When the target content includes picture information, the display effect includes performing picture display processing on the picture information, which includes cropping, beautifying, and deforming the picture; for example, a filter may be applied when the picture is displayed to improve its appearance.
  • Preferably, step S2 comprises: S21 (not shown), determining the mental state of the target user when inputting the target content based on the body feature information; and S22 (not shown), determining the display effect of the target content based on the mental state.
  • In step S21, it should be understood that different body feature information corresponds to different mental states: for example, a faster pulse or heartbeat corresponds to a more agitated state, such as anger; likewise, when voice is input, the loudness or speaking rate corresponds to different states, such as happiness, sadness, or grief.
  • Preferably, step S21 comprises comparing the body feature information with sample body feature information, and determining the mental state of the target user when inputting the target content based on the mental state corresponding to the sample body feature information.
  • The sample body feature information is derived from the historical body feature data of other users, of the target user, or of a combination of the target user and other users, from which the ranges of physiological and behavioral characteristics corresponding to different mental states are determined through a combination of automatic machine learning and manual training, such as blood pressure ranges, heartbeat ranges, pulse ranges, grip strength, and typing speed ranges. Once the target user's body feature information is acquired, it is compared with the sample body feature information to determine the mental state at the time of input.
  • When self-sample body feature information exists, step S21 comprises comparing the body feature information with the self-sample body feature information, and determining the mental state of the target user based on the mental state corresponding to the self-sample body feature information.
  • Because of individual differences, each person's body feature information may differ: in a calm state, the average adult's pulse is about 75 beats per minute, while an athlete's may be below 60. Self-sample data therefore better reflects the ranges corresponding to the user's own mental states, so when it exists, the acquired body feature information is preferably compared with it.
  • The self-sample body feature information comprises the ranges of physiological and behavioral characteristics corresponding to different mental states, determined from the target user's own historical data through a combination of automatic machine learning and manual training; different mental states may be determined from the ranges of one or more such characteristics. For example, if the self-sample data indicates that when the target user is happy the heartbeat range is A1-B1, the blood pressure range is C1-D1, and the grip strength range is E1-F1, then whether the user's currently acquired body feature information falls within these ranges determines whether the mental state is happy. This is merely an example and is not limiting.
  • In step S22, the display effect of the target content is determined based on the mental state. Different mental states correspond to different display effects, which helps other users communicating with the target user better perceive the mental state being expressed, facilitating communication and bringing users closer together.
  • For example, when the mental state is happy, the font color of the target content may be set to a bright palette, or a corresponding happy emoji may be added when the target content is presented.
  • FIG. 2 shows the corresponding display effect when the target content is text information and the target user's mental state is happy: a text display special effect is added to the text information, better expressing the user's mental state. FIG. 3 shows the corresponding display effect when the target content is voice information and the mental state is happy: a corresponding emoji is added to the voice information. FIG. 4 shows the corresponding display effect when the target content is video information and the mental state is happy: emojis corresponding to the facial expressions of the people in the video are added.
  • All or part of the above steps may be carried out by a program instructing the relevant hardware; when executed, the program performs the steps of acquiring body feature information of the target user when the target content is input, determining a display effect of the target content based on the body feature information, and generating presentation content corresponding to the target content based on the display effect. The program may be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disc.
  • FIG. 5 shows a schematic diagram of a device for generating presentation content according to another aspect of the present application; the device 1 comprises: a first means configured to acquire body feature information of the target user when the target content is input; a second means configured to determine a display effect of the target content based on the body feature information; and a third means configured to generate presentation content corresponding to the target content based on the display effect.
  • The first means of the device 1 acquires the body feature information of the target user when the target content is input; the target content and the body feature information are as described above for step S1.
  • The body feature information may be collected by corresponding hardware devices, including but not limited to a gyroscope, a pressure sensor, a pulse sensor, a blood pressure sensor, a temperature sensor, a blood glucose sensor, and a camera, where the hardware devices can exchange data with the device 1; or it may be obtained through the device 1 itself, on which the target content is input.
  • The second means of the device 1 determines the display effect of the target content based on the body feature information; the display effects for text, voice, video, and picture content are as described above for step S2.
  • Preferably, the second means comprises: a first unit (not shown) configured to determine the mental state of the target user when inputting the target content based on the body feature information; and a second unit (not shown) configured to determine the display effect of the target content based on that mental state. The comparison against sample and self-sample body feature information, and the determination of the display effect from the mental state, proceed as described above for steps S21 and S22, with the display effects illustrated in FIG. 2 through FIG. 4.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The purpose of the present application is to provide a method and a device for generating presentation content. The present application acquires the body feature information of a target user when the target user inputs target content, determines a display effect of the target content based on the body feature information, and then generates the presentation content corresponding to the target content based on the display effect. In this way, the corresponding presentation content is generated automatically when the user inputs the target content, without any user operation, and the presentation content expresses the target user's state better and more accurately, thereby greatly improving the user experience.

Description

Method and Device for Generating Presentation Content
Technical Field
The present application relates to the field of communications technologies, and in particular to a technology for generating presentation content.
Background Art
With the development of network technologies, users increasingly communicate through networked interactive software. Unlike real face-to-face communication, interaction through such software does not express the user's mental state well, and the user experience suffers. In the prior art, user content is typically produced either by the user's manual input on a physical or virtual device, or by analyzing certain keywords to label the user's current emotional state and thereby produce specific displayable content. Requiring active input increases the burden on the user, while keyword analysis applies to the user's input in only a limited way and expresses the user's emotional state with low accuracy.
Summary of the Invention
The purpose of the present application is to provide a method and a device for generating presentation content.
According to one aspect of the present application, a method of generating presentation content is provided, wherein the method comprises:
acquiring body feature information of a target user when the target user inputs target content;
determining a display effect of the target content based on the body feature information;
generating presentation content corresponding to the target content based on the display effect.
Further, determining the display effect of the target content based on the body feature information comprises:
determining the mental state of the target user when inputting the target content based on the body feature information;
determining the display effect of the target content based on the mental state.
Further, determining the mental state of the target user when inputting the target content based on the body feature information comprises:
comparing the body feature information with sample body feature information, and determining the mental state of the target user when inputting the target content based on the mental state corresponding to the sample body feature information.
Further, the sample body feature information includes self-sample body feature information, and determining the mental state of the target user when inputting the target content based on the body feature information comprises:
when self-sample body feature information exists, comparing the body feature information with the self-sample body feature information, and determining the mental state of the target user when inputting the target content based on the mental state corresponding to the self-sample body feature information.
Further, the target content includes text information, and the display effect of the target content includes performing text display processing on the text information, wherein the text display processing includes adding a text color, deforming the text font, adding a text background color, adding a background image, adding background music, and adding text display special effects.
Further, the target content includes voice information, and the display effect of the target content includes performing voice display processing on the voice information, wherein the voice display processing includes adding a corresponding emoji, adding a background image, and adding background music.
Further, the target content includes video information, and the display effect of the target content includes performing video display processing on the video information, wherein the video display processing includes adding a corresponding emoji, adding a corresponding image, and adding corresponding text.
Further, the target content includes picture information, and the display effect of the target content includes performing picture display processing on the picture information, wherein the picture display processing includes: cropping the picture, beautifying the picture, and deforming the picture.
Further, the body feature information includes at least one of the following:
physiological data information, which reflects the physiological characteristics of the target user in different mental states;
behavior data information, which reflects the behavior characteristics of the target user in different mental states.
Further, the physiological data information includes at least one of the following: pulse information; blood pressure information; heartbeat information.
Further, the behavior data information includes at least one of the following: facial expression information; input speed information; grip pressure information.
According to another aspect of the present application, a device for generating presentation content is also provided, wherein the device comprises:
a first means for acquiring body feature information of a target user when the target user inputs target content;
a second means for determining a display effect of the target content based on the body feature information;
a third means for generating presentation content corresponding to the target content based on the display effect.
Further, the second means comprises:
a first unit for determining the mental state of the target user when inputting the target content based on the body feature information;
a second unit for determining the display effect of the target content based on the mental state.
Further, the first unit is configured to:
compare the body feature information with sample body feature information, and determine the mental state of the target user when inputting the target content based on the mental state corresponding to the sample body feature information.
Further, the sample body feature information includes at least one of the following:
self-sample body feature information;
other-sample body feature information;
combined-sample body feature information.
Further, when the sample body feature information includes self-sample body feature information, the first unit is configured to:
compare the body feature information with the self-sample body feature information, and determine the mental state of the target user when inputting the target content based on the mental state corresponding to the self-sample body feature information.
Further, the target content includes text information, and the display effect of the target content includes performing text display processing on the text information, wherein the text display processing includes adding a text color, deforming the text font, adding a text background color, adding a background image, adding background music, and adding text display special effects.
Further, the target content includes voice information, and the display effect of the target content includes performing voice display processing on the voice information, wherein the voice display processing includes adding a corresponding emoji, adding a background image, and adding background music.
Further, the target content includes video information, and the display effect of the target content includes performing video display processing on the video information, wherein the video display processing includes adding a corresponding emoji, adding a corresponding image, and adding corresponding text.
Further, the target content includes picture information, and the display effect of the target content includes performing picture display processing on the picture information, wherein the picture display processing includes: cropping the picture, beautifying the picture, and deforming the picture.
Further, the body feature information includes at least one of the following:
physiological data information, which reflects the physiological characteristics of the target user in different mental states;
behavior data information, which reflects the behavior characteristics of the target user in different mental states.
Further, the physiological data information includes at least one of the following: pulse information;
blood pressure information; heartbeat information.
Further, the behavior data information includes at least one of the following: facial expression information; input speed information; grip pressure information.
Compared with the prior art, the present application acquires the body feature information of a target user when the target content is input, determines a display effect of the target content based on the body feature information, and then generates the presentation content corresponding to the target content based on the display effect. In this way, the corresponding presentation content is generated automatically when the user inputs the target content, without any user operation, and the presentation content expresses the target user's state better and more accurately, thereby greatly improving the user experience.
Moreover, the present application may further determine the mental state of the target user when inputting the target content based on the body feature information, and determine the display effect of the target content based on the mental state. In this way, determining the display effect of the target content from the target user's mental state expresses well the psychological feelings and emotional state of the target user when inputting the target content, thereby bringing users closer together and making long-distance interaction more lifelike.
In addition, the target content in the present application includes at least one of the following: text information, voice information, video information, picture information, and so on; in this way, whether the target user publishes text, voice, video, or pictures, the corresponding presentation content can be generated according to the target user's body feature information, thereby enriching the user experience.
Brief Description of the Drawings
Other features, objects, and advantages of the present invention will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the following drawings:
FIG. 1 shows a flow chart of a method of generating presentation content according to one aspect of the present application;
FIG. 2 shows the corresponding display effect when the target content is text information, according to a preferred embodiment of the present application;
FIG. 3 shows the corresponding display effect when the target content is voice information, according to another preferred embodiment of the present application;
FIG. 4 shows the corresponding display effect when the target content is video information, according to still another preferred embodiment of the present application;
FIG. 5 shows a schematic diagram of a device for generating presentation content according to another aspect of the present application.
The same or similar reference numerals in the drawings denote the same or similar components.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings.
In a typical configuration of the present application, the terminal, the devices of the service network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent memory in a computer-readable medium, in forms such as random access memory (RAM) and/or non-volatile memory, e.g., read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include persistent and non-persistent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
To further explain the technical means adopted by the present application and the effects achieved, the technical solution of the present application is described clearly and completely below with reference to the accompanying drawings and preferred embodiments.
FIG. 1 shows a flow chart of a method of generating presentation content according to one aspect of the present application, the method comprising:
S1: acquiring body feature information of a target user when the target user inputs target content;
S2: determining a display effect of the target content based on the body feature information;
S3: generating presentation content corresponding to the target content based on the display effect.
In this embodiment, in step S1, the body feature information of the target user at the time the target content is input is acquired. Preferably, the target content includes at least one of the following: text information, voice information, video information, picture information, and so on. It should be understood that when the user inputs the target content, the body feature information will differ because the target user's emotional state differs; preferably, the body feature information includes, but is not limited to, physiological data information and behavior data information. The physiological data information reflects the physiological characteristics of the target user in different mental states and includes, but is not limited to, pulse information, blood pressure information, heartbeat information, and so on; the behavior data information reflects the behavior characteristics of the target user in different mental states and includes, but is not limited to, facial expression information, input speed information, and grip pressure information. The body feature information listed here is merely an example; other existing or future body feature information, where applicable to the present application, also falls within its scope of protection and is incorporated herein by reference.
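[Editor's note] To make the data described above concrete, here is a purely illustrative Python sketch, not part of the original disclosure; all class and field names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PhysiologicalData:
    pulse_bpm: Optional[float] = None                    # pulse information
    blood_pressure_mmhg: Optional[Tuple[int, int]] = None  # (systolic, diastolic)
    heartbeat_bpm: Optional[float] = None                # heartbeat information

@dataclass
class BehaviorData:
    facial_expression: Optional[str] = None  # e.g. "smile", "frown"
    input_speed_cpm: Optional[float] = None  # typing speed, characters per minute
    grip_pressure_n: Optional[float] = None  # grip force in newtons

@dataclass
class BodyFeatureInfo:
    physiological: PhysiologicalData
    behavior: BehaviorData
```

Any field left as None simply means the corresponding sensor or signal was unavailable.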
Specifically, the body feature information of the target user when inputting the target content may be collected by corresponding hardware devices, including but not limited to a gyroscope, a pressure sensor, a pulse sensor, a blood pressure sensor, a temperature sensor, a blood glucose sensor, a camera, and so on; or it may be obtained through the device on which the target content is input. The acquisition manners described here are merely examples; other existing or future manners of acquiring the body feature information of the target user when inputting the target content, where applicable to the present application, also fall within its scope of protection and are incorporated herein by reference.
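[Editor's note] Building on the data model sketched above, acquisition might look like the following; the `sensors` wrapper and its `has`/`read_*` methods are hypothetical placeholders for whatever driver APIs the actual hardware exposes, not a real library:

```python
def acquire_body_features(sensors) -> BodyFeatureInfo:
    """Poll the available sensors once while the user is inputting content.

    Any sensor that is absent simply leaves its field as None.
    """
    physio = PhysiologicalData(
        pulse_bpm=sensors.read_pulse() if sensors.has("pulse") else None,
        heartbeat_bpm=sensors.read_heartbeat() if sensors.has("heartbeat") else None,
    )
    behavior = BehaviorData(
        input_speed_cpm=sensors.read_input_speed() if sensors.has("keyboard") else None,
        grip_pressure_n=sensors.read_grip() if sensors.has("grip") else None,
    )
    return BodyFeatureInfo(physiological=physio, behavior=behavior)
```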
Continuing with this embodiment, in step S2, the display effect of the target content is determined based on the body feature information. Here, different target content may correspond to different display effects. When the target content includes text information, the display effect of the text information includes performing text display processing on the text information, wherein the text display processing includes adding a text color, deforming the text font, adding a text background color, adding a background image, adding background music, and adding text display special effects.
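[Editor's note] As one possible illustration of such text display processing, assuming the presentation layer renders HTML and using an invented state-to-style table (real code would also HTML-escape the text):

```python
# Hypothetical mapping from a mental state to text styling choices.
TEXT_STYLES = {
    "happy": {"color": "#ff9800", "background-color": "#fff8e1", "font-family": "cursive"},
    "sad":   {"color": "#607d8b", "background-color": "#eceff1", "font-family": "serif"},
    "angry": {"color": "#d32f2f", "background-color": "#ffebee", "font-family": "sans-serif"},
}

def render_text(text: str, mental_state: str) -> str:
    """Wrap the input text in an HTML span carrying the chosen display effect."""
    style = TEXT_STYLES.get(mental_state, {})
    css = "; ".join(f"{k}: {v}" for k, v in style.items())
    return f'<span style="{css}">{text}</span>'
```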
Preferably, the target content includes voice information, and the display effect of the target content includes performing voice display processing on the voice information, wherein the voice display processing includes adding a corresponding emoji, adding a background image, and adding background music. Here, the voice information can also be displayed in the form of text information; accordingly, the display effect can be presented according to the display effects described above for text information.
Preferably, the target content includes video information, and the display effect of the target content includes performing video display processing on the video information, wherein the video display processing includes, but is not limited to, adding a corresponding emoji, adding a corresponding image, and adding corresponding text.
Preferably, the target content includes picture information, and the display effect of the target content includes performing picture display processing on the picture information, wherein the picture display processing includes: cropping the picture, beautifying the picture, and deforming the picture. For example, when displaying a picture, a filter may be applied to improve its visual effect.
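[Editor's note] A minimal sketch of such picture display processing using the Pillow imaging library; the crop margins and enhancement factors are arbitrary example values, not taken from the patent:

```python
from PIL import Image, ImageEnhance, ImageFilter

def beautify_picture(path: str, out_path: str) -> None:
    """Crop, smooth, and brighten a picture as a simple 'beautify' filter."""
    img = Image.open(path)
    w, h = img.size
    img = img.crop((w // 10, h // 10, w * 9 // 10, h * 9 // 10))  # trim a 10% border
    img = img.filter(ImageFilter.SMOOTH)             # soften the image
    img = ImageEnhance.Brightness(img).enhance(1.1)  # brighten slightly
    img = ImageEnhance.Color(img).enhance(1.2)       # boost saturation
    img.save(out_path)
```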
The display effects described above are merely examples; other existing or future display effects, where applicable to the present application, also fall within its scope of protection and are incorporated herein by reference. In practice, users can select different display effects according to their own needs.
Preferably, step S2 comprises: S21 (not shown), determining the mental state of the target user when inputting the target content based on the body feature information; and S22 (not shown), determining the display effect of the target content based on the mental state.
Specifically, in step S21, the mental state of the target user when inputting the target content is determined based on the body feature information. It should be understood that different body feature information corresponds to different mental states: for example, when the pulse or heartbeat is fast, the corresponding mental state is more agitated, such as anger; similarly, when voice information is input, the loudness of the voice or the speaking rate corresponds to different mental states, such as happiness, sadness, or grief.
Preferably, step S21 comprises: comparing the body feature information with sample body feature information, and determining the mental state of the target user when inputting the target content based on the mental state corresponding to the sample body feature information.
Here, the sample body feature information is derived from the historical body feature data of other users, of the target user, or of a combination of the target user and other users, from which the ranges of physiological and behavioral characteristics corresponding to different mental states are determined through a combination of automatic machine learning and manual training, for example blood pressure ranges, heartbeat ranges, pulse ranges, grip strength, and typing speed ranges. Therefore, after the target user's body feature information is acquired, it is compared with the sample body feature information to determine the mental state of the target user when inputting the target content.
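[Editor's note] The range comparison can be sketched as follows; the states and numeric ranges are invented for illustration, since the patent leaves the learned ranges unspecified:

```python
# Hypothetical per-state ranges learned from historical sample data:
# each state maps a feature name to an inclusive (low, high) range.
SAMPLE_RANGES = {
    "happy": {"pulse_bpm": (70, 90),  "input_speed_cpm": (200, 400)},
    "angry": {"pulse_bpm": (95, 130), "grip_pressure_n": (15, 40)},
    "calm":  {"pulse_bpm": (55, 75),  "input_speed_cpm": (80, 200)},
}

def classify_mental_state(features: dict) -> str:
    """Return the state whose learned ranges match the most observed features."""
    def match_count(ranges):
        return sum(lo <= features[name] <= hi
                   for name, (lo, hi) in ranges.items() if name in features)
    return max(SAMPLE_RANGES, key=lambda state: match_count(SAMPLE_RANGES[state]))
```

For example, `classify_mental_state({"pulse_bpm": 105, "grip_pressure_n": 20})` would return "angry" under the invented ranges above.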
In a preferred case, when the sample body feature information is self-sample body feature information determined from the target user's historical body feature data, step S21 comprises: when self-sample body feature information exists, comparing the body feature information with the self-sample body feature information, and determining the mental state of the target user when inputting the target content based on the mental state corresponding to the self-sample body feature information.
Because of individual differences, each person's body feature information may differ: for example, in a calm state the average adult's pulse is about 75 beats per minute, while an athlete's pulse in a calm state may be below 60 beats per minute. Self-sample body feature information therefore better reflects the ranges of physiological and behavioral characteristics corresponding to the user's own mental states, so when self-sample body feature information exists, the acquired body feature information is preferably compared with it to determine the mental state of the target user when inputting the target content.
The self-sample body feature information comprises the ranges of physiological and behavioral characteristics corresponding to different mental states, determined from the historical data of the target user's own body feature information through a combination of automatic machine learning and manual training, for example blood pressure ranges, heartbeat ranges, pulse ranges, grip strength, and typing speed ranges. Here, the user's different mental states may be determined from the ranges of one or more physiological and behavioral characteristics; for example, if the target user's self-sample body feature information indicates that when the target user is happy the corresponding heartbeat range is A1-B1, the corresponding blood pressure range is C1-D1, and the corresponding grip strength range is E1-F1, then whether the target user's currently acquired body feature information falls within these ranges determines whether the target user's mental state is happy, and so on. The above is merely an example and is not limiting in any way. In another case, when no self-sample body feature information determined from the target user's historical body feature data exists in the sample body feature information, but there is other-sample body feature information determined from other users' historical body feature data, or combined-sample body feature information determined jointly from the historical data of the target user and other users, the mental state of the target user is determined from that other-sample or combined-sample body feature information.
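[Editor's note] The preference order just described (self-sample first, then other-sample or combined-sample data) might be sketched as below, assuming each profile holds per-state ranges in the same format as `SAMPLE_RANGES` above:

```python
from typing import Optional

def select_sample_profile(self_profile: Optional[dict],
                          other_profile: Optional[dict],
                          combined_profile: Optional[dict]) -> dict:
    """Prefer the user's own learned ranges; otherwise fall back to ranges
    learned from other users or from the combined population."""
    for profile in (self_profile, other_profile, combined_profile):
        if profile:
            return profile
    raise ValueError("no sample body feature information available")
```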
Further, in step S22, the display effect of the target content is determined based on the mental state. Different mental states correspond to different display effects, which helps other users communicating with the target user better perceive the mental state the target user is expressing, better facilitating communication and bringing them closer together.
Here, which display effect each mental state should correspond to can also be determined through a combination of automatic machine learning and manual training. For example, when the mental state is happy, the font color of the target content may be set to a bright palette, or a corresponding happy emoji may be added when the target content is presented. FIG. 2 shows the corresponding display effect when the target content is text information and the target user's mental state is happy: a text display special effect is added to the text information, better expressing the user's mental state. FIG. 3 shows the corresponding display effect when the target content is voice information and the target user's mental state is happy: a corresponding emoji is added to the voice information. FIG. 4 shows the corresponding display effect when the target content is video information and the target user's mental state is happy: emojis corresponding to the facial expressions of the people in the video are added. A person of ordinary skill in the art will understand that all or part of the steps of the above embodiment of generating presentation content can be carried out by a program instructing the relevant hardware, the program being storable in a computer-readable storage medium; when executed, the program performs the following steps: acquiring body feature information of the target user when the target content is input; determining a display effect of the target content based on the body feature information; and generating presentation content corresponding to the target content based on the display effect. The storage medium is, for example, ROM/RAM, a magnetic disk, or an optical disc.
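[Editor's note] Tying the sketches above together, steps S1 through S3 could be composed as follows; the state-to-emoji table is again invented for illustration:

```python
# Hypothetical mapping from a mental state to a presentation effect.
STATE_EFFECTS = {
    "happy": "😊",
    "angry": "😠",
    "calm":  "🙂",
}

def generate_presentation(text: str, sensors) -> str:
    features = acquire_body_features(sensors)            # step S1
    observed = {
        "pulse_bpm": features.physiological.pulse_bpm,
        "input_speed_cpm": features.behavior.input_speed_cpm,
    }
    observed = {k: v for k, v in observed.items() if v is not None}
    state = classify_mental_state(observed)              # step S21
    emoji = STATE_EFFECTS.get(state, "")                 # step S22
    return render_text(text, state) + emoji              # step S3
```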
Compared with the prior art, the present application acquires the body feature information of a target user when the target content is input, determines a display effect of the target content based on the body feature information, and then generates the presentation content corresponding to the target content based on the display effect. In this way, the corresponding presentation content is generated automatically when the user inputs the target content, without any user operation, and the presentation content expresses the target user's state better and more accurately, thereby greatly improving the user experience.
Moreover, the present application may further determine the mental state of the target user when inputting the target content based on the body feature information, and determine the display effect of the target content based on the mental state. In this way, determining the display effect of the target content from the target user's mental state expresses well the psychological feelings and emotional state of the target user when inputting the target content, thereby bringing users closer together and making long-distance interaction more lifelike.
In addition, the target content in the present application includes at least one of the following: text information, voice information, video information, picture information, and so on; in this way, whether the target user publishes text, voice, video, or pictures, the corresponding presentation content can be generated according to the target user's body feature information, thereby enriching the user experience.
FIG. 5 shows a schematic diagram of a device for generating presentation content according to another aspect of the present application; the device 1 comprises:
a first means for acquiring body feature information of a target user when the target user inputs target content;
a second means for determining a display effect of the target content based on the body feature information;
a third means for generating presentation content corresponding to the target content based on the display effect.
In this embodiment, the first means of the device 1 acquires the body feature information of the target user when the target content is input. Preferably, the target content includes at least one of the following: text information, voice information, video information, picture information, and so on. It should be understood that when the user inputs the target content, the body feature information will differ because the target user's emotional state differs; preferably, the body feature information includes, but is not limited to, physiological data information and behavior data information. The physiological data information reflects the physiological characteristics of the target user in different mental states and includes, but is not limited to, pulse information, blood pressure information, heartbeat information, and so on; the behavior data information reflects the behavior characteristics of the target user in different mental states and includes, but is not limited to, facial expression information, input speed information, and grip pressure information. The body feature information listed here is merely an example; other existing or future body feature information, where applicable to the present application, also falls within its scope of protection and is incorporated herein by reference.
Specifically, the body feature information of the target user when inputting the target content may be collected by corresponding hardware devices, including but not limited to a gyroscope, a pressure sensor, a pulse sensor, a blood pressure sensor, a temperature sensor, a blood glucose sensor, a camera, and so on, where the hardware devices can exchange data with the device 1; or it may be obtained through the device 1 on which the target content is input. The acquisition manners described here are merely examples; other existing or future manners of acquiring the body feature information of the target user when inputting the target content, where applicable to the present application, also fall within its scope of protection and are incorporated herein by reference.
Continuing with this embodiment, the second means of the device 1 determines the display effect of the target content based on the body feature information. Here, different target content may correspond to different display effects. When the target content includes text information, the display effect of the text information includes performing text display processing on the text information, wherein the text display processing includes adding a text color, deforming the text font, adding a text background color, adding a background image, adding background music, and adding text display special effects.
Preferably, the target content includes voice information, and the display effect of the target content includes performing voice display processing on the voice information, wherein the voice display processing includes adding a corresponding emoji, adding a background image, and adding background music. Here, the voice information can also be displayed in the form of text information; accordingly, the display effect can be presented according to the display effects described above for text information.
Preferably, the target content includes video information, and the display effect of the target content includes performing video display processing on the video information, wherein the video display processing includes, but is not limited to, adding a corresponding emoji, adding a corresponding image, and adding corresponding text.
Preferably, the target content includes picture information, and the display effect of the target content includes performing picture display processing on the picture information, wherein the picture display processing includes: cropping the picture, beautifying the picture, and deforming the picture; for example, when displaying a picture, a filter may be applied to improve its visual effect.
The display effects described above are merely examples; other existing or future display effects, where applicable to the present application, also fall within its scope of protection and are incorporated herein by reference. In practice, users can select different display effects according to their own needs.
Preferably, the second means comprises: a first unit (not shown) for determining the mental state of the target user when inputting the target content based on the body feature information; and a second unit (not shown) for determining the display effect of the target content based on the mental state.
Specifically, the first unit determines the mental state of the target user when inputting the target content based on the body feature information. It should be understood that different body feature information corresponds to different mental states: for example, when the pulse or heartbeat is fast, the corresponding mental state is more agitated, such as anger; similarly, when voice information is input, the loudness of the voice or the speaking rate corresponds to different mental states, such as happiness, sadness, or grief.
Preferably, the first unit is configured to: compare the body feature information with sample body feature information, and determine the mental state of the target user when inputting the target content based on the mental state corresponding to the sample body feature information.
Here, the sample body feature information is derived from the historical body feature data of other users, of the target user, or of a combination of the target user and other users, from which the ranges of physiological and behavioral characteristics corresponding to different mental states are determined through a combination of automatic machine learning and manual training, for example blood pressure ranges, heartbeat ranges, pulse ranges, grip strength, and typing speed ranges. Therefore, after the target user's body feature information is acquired, it is compared with the sample body feature information to determine the mental state of the target user when inputting the target content.
In a preferred case, when the sample body feature information is self-sample body feature information determined from the target user's historical body feature data, the first unit is configured to: when self-sample body feature information exists, compare the body feature information with the self-sample body feature information, and determine the mental state of the target user when inputting the target content based on the mental state corresponding to the self-sample body feature information.
Because of individual differences, each person's body feature information may differ: for example, in a calm state the average adult's pulse is about 75 beats per minute, while an athlete's pulse in a calm state may be below 60 beats per minute. Self-sample body feature information therefore better reflects the ranges of physiological and behavioral characteristics corresponding to the user's own mental states, so when self-sample body feature information exists, the acquired body feature information is preferably compared with it to determine the mental state of the target user when inputting the target content.
The self-sample body feature information comprises the ranges of physiological and behavioral characteristics corresponding to different mental states, determined from the historical data of the target user's own body feature information through a combination of automatic machine learning and manual training, for example blood pressure ranges, heartbeat ranges, pulse ranges, grip strength, and typing speed ranges. Here, the user's different mental states may be determined from the ranges of one or more physiological and behavioral characteristics; for example, if the target user's self-sample body feature information indicates that when the target user is happy the corresponding heartbeat range is A1-B1, the corresponding blood pressure range is C1-D1, and the corresponding grip strength range is E1-F1, then whether the target user's currently acquired body feature information falls within these ranges determines whether the target user's mental state is happy, and so on; the above is merely an example and is not limiting in any way. In another case, when no self-sample body feature information determined from the target user's historical body feature data exists in the sample body feature information, but there is other-sample body feature information determined from other users' historical body feature data, or combined-sample body feature information determined jointly from the historical data of the target user and other users, the mental state of the target user is determined from that other-sample or combined-sample body feature information.
Further, the second unit determines the display effect of the target content based on the mental state. Different mental states correspond to different display effects, which helps other users communicating with the target user better perceive the mental state the target user is expressing, better facilitating communication and bringing them closer together.
Here, which display effect each mental state should correspond to can also be determined through a combination of automatic machine learning and manual training. For example, when the mental state is happy, the font color of the target content may be set to a bright palette, or a corresponding happy emoji may be added when the target content is presented. FIG. 2 shows the corresponding display effect when the target content is text information and the target user's mental state is happy: a text display special effect is added to the text information, better expressing the user's mental state. FIG. 3 shows the corresponding display effect when the target content is voice information and the target user's mental state is happy: a corresponding emoji is added to the voice information. FIG. 4 shows the corresponding display effect when the target content is video information and the target user's mental state is happy: emojis corresponding to the facial expressions of the people in the video are added.
Compared with the prior art, the present application acquires the body feature information of a target user when the target content is input, determines a display effect of the target content based on the body feature information, and then generates the presentation content corresponding to the target content based on the display effect. In this way, the corresponding presentation content is generated automatically when the user inputs the target content, without any user operation, and the presentation content expresses the target user's state better and more accurately, thereby greatly improving the user experience.
Moreover, the present application may further determine the mental state of the target user when inputting the target content based on the body feature information, and determine the display effect of the target content based on the mental state. In this way, determining the display effect of the target content from the target user's mental state expresses well the psychological feelings and emotional state of the target user when inputting the target content, thereby bringing users closer together and making long-distance interaction more lifelike.
In addition, the target content in the present application includes at least one of the following: text information, voice information, video information, picture information, and so on; in this way, whether the target user publishes text, voice, video, or pictures, the corresponding presentation content can be generated according to the target user's body feature information, thereby enriching the user experience.
It is obvious to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments, and that the present invention can be implemented in other specific forms without departing from the spirit or essential characteristics of the present invention. The embodiments should therefore be regarded in all respects as exemplary and not restrictive, and the scope of the present invention is defined by the appended claims rather than by the above description; it is therefore intended that all changes falling within the meaning and range of equivalency of the claims be embraced within the present invention. No reference sign in the claims should be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. A plurality of units or means recited in a device claim may also be implemented by a single unit or means through software or hardware. Words such as "first" and "second" are used to denote names and do not denote any particular order.

Claims (24)

  1. A method of generating presentation content, wherein the method comprises:
    acquiring body feature information of a target user when the target user inputs target content;
    determining a display effect of the target content based on the body feature information;
    generating presentation content corresponding to the target content based on the display effect.
  2. The method according to claim 1, wherein determining the display effect of the target content based on the body feature information comprises:
    determining the mental state of the target user when inputting the target content based on the body feature information;
    determining the display effect of the target content based on the mental state.
  3. The method according to claim 2, wherein determining the mental state of the target user when inputting the target content based on the body feature information comprises:
    comparing the body feature information with sample body feature information, and determining the mental state of the target user when inputting the target content based on the mental state corresponding to the sample body feature information.
  4. The method according to claim 3, wherein the sample body feature information includes at least one of the following:
    self-sample body feature information;
    other-sample body feature information;
    combined-sample body feature information.
  5. The method according to claim 4, wherein, when the sample body feature information includes self-sample body feature information, determining the mental state of the target user when inputting the target content based on the body feature information comprises:
    comparing the body feature information with the self-sample body feature information, and determining the mental state of the target user when inputting the target content based on the mental state corresponding to the self-sample body feature information.
  6. The method according to any one of claims 1 to 5, wherein the target content includes text information, and the display effect of the target content includes performing text display processing on the text information, wherein the text display processing includes adding a text color, deforming the text font, adding a text background color, adding a background image, adding background music, and adding text display special effects.
  7. The method according to any one of claims 1 to 5, wherein the target content includes voice information, and the display effect of the target content includes performing voice display processing on the voice information, wherein the voice display processing includes adding a corresponding emoji, adding a background image, and adding background music.
  8. The method according to any one of claims 1 to 5, wherein the target content includes video information, and the display effect of the target content includes performing video display processing on the video information, wherein the video display processing includes adding a corresponding emoji, adding a corresponding image, and adding corresponding text.
  9. The method according to any one of claims 1 to 5, wherein the target content includes picture information, and the display effect of the target content includes performing picture display processing on the picture information, wherein the picture display processing includes: cropping the picture, beautifying the picture, and deforming the picture.
  10. The method according to any one of claims 1 to 5, wherein the body feature information includes at least one of the following:
    physiological data information, which reflects the physiological characteristics of the target user in different mental states;
    behavior data information, which reflects the behavior characteristics of the target user in different mental states.
  11. The method according to claim 10, wherein the physiological data information includes at least one of the following:
    pulse information;
    blood pressure information;
    heartbeat information.
  12. The method according to claim 10, wherein the behavior data information includes at least one of the following:
    facial expression information;
    input speed information;
    grip pressure information.
  13. A device for generating presentation content, wherein the device comprises:
    a first means for acquiring body feature information of a target user when the target user inputs target content;
    a second means for determining a display effect of the target content based on the body feature information;
    a third means for generating presentation content corresponding to the target content based on the display effect.
  14. The device according to claim 13, wherein the second means comprises:
    a first unit for determining the mental state of the target user when inputting the target content based on the body feature information;
    a second unit for determining the display effect of the target content based on the mental state.
  15. The device according to claim 14, wherein the first unit is configured to:
    compare the body feature information with sample body feature information, and determine the mental state of the target user when inputting the target content based on the mental state corresponding to the sample body feature information.
  16. The device according to claim 15, wherein the sample body feature information includes at least one of the following:
    self-sample body feature information;
    other-sample body feature information;
    combined-sample body feature information.
  17. The device according to claim 16, wherein, when the sample body feature information includes self-sample body feature information, the first unit is configured to:
    compare the body feature information with the self-sample body feature information, and determine the mental state of the target user when inputting the target content based on the mental state corresponding to the self-sample body feature information.
  18. The device according to any one of claims 13 to 17, wherein the target content includes text information, and the display effect of the target content includes performing text display processing on the text information, wherein the text display processing includes adding a text color, deforming the text font, adding a text background color, adding a background image, adding background music, and adding text display special effects.
  19. The device according to any one of claims 13 to 17, wherein the target content includes voice information, and the display effect of the target content includes performing voice display processing on the voice information, wherein the voice display processing includes adding a corresponding emoji, adding a background image, and adding background music.
  20. The device according to any one of claims 13 to 17, wherein the target content includes video information, and the display effect of the target content includes performing video display processing on the video information, wherein the video display processing includes adding a corresponding emoji, adding a corresponding image, and adding corresponding text.
  21. The device according to any one of claims 13 to 17, wherein the target content includes picture information, and the display effect of the target content includes performing picture display processing on the picture information, wherein the picture display processing includes: cropping the picture, beautifying the picture, and deforming the picture.
  22. The device according to any one of claims 13 to 17, wherein the body feature information includes at least one of the following:
    physiological data information, which reflects the physiological characteristics of the target user in different mental states;
    behavior data information, which reflects the behavior characteristics of the target user in different mental states.
  23. The device according to claim 22, wherein the physiological data information includes at least one of the following:
    pulse information;
    blood pressure information;
    heartbeat information.
  24. The device according to claim 22, wherein the behavior data information includes at least one of the following:
    facial expression information;
    input speed information;
    grip pressure information.
PCT/CN2017/113456 2017-03-17 2017-11-29 Method and device for generating presentation content WO2018166241A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
SG11201908577WA (en) 2017-03-17 2017-11-29 A method and a device for generating a presentation content

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710161738.XA CN108628504A (zh) 2017-03-17 2017-03-17 一种生成展示内容的方法与设备
CN201710161738.X 2017-03-17

Publications (1)

Publication Number Publication Date
WO2018166241A1 (zh)

Family

ID=63521963

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/113456 WO2018166241A1 (zh) 2017-03-17 2017-11-29 Method and device for generating presentation content

Country Status (3)

Country Link
CN (1) CN108628504A (zh)
SG (1) SG11201908577WA (zh)
WO (1) WO2018166241A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109688264B (zh) * 2018-12-17 2021-02-12 咪咕数字传媒有限公司 Method and apparatus for adjusting the display state of an electronic device, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102323919A (zh) * 2011-08-12 2012-01-18 百度在线网络技术(北京)有限公司 Method and device for displaying input information based on user emotion indication information
CN103926997A (zh) * 2013-01-11 2014-07-16 北京三星通信技术研究有限公司 Method and terminal for determining emotion information based on a user's input
CN105955490A (zh) * 2016-06-28 2016-09-21 广东欧珀移动通信有限公司 Augmented-reality-based information processing method, apparatus, and mobile terminal


Also Published As

Publication number Publication date
SG11201908577WA (en) 2019-10-30
CN108628504A (zh) 2018-10-09

Similar Documents

Publication Publication Date Title
US10992619B2 (en) Messaging system with avatar generation
US11748579B2 (en) Augmented reality speech balloon system
US10514876B2 (en) Gallery of messages from individuals with a shared interest
US9674485B1 (en) System and method for image processing
US20180077095A1 (en) Augmentation of Communications with Emotional Data
KR20220070565A (ko) 애니메이션된 채팅 프레즌스
CN112041891A (zh) 增强表情系统
CN113892096A (zh) 动态媒体选择菜单
WO2022072328A1 (en) Music reactive animation of human characters
EP4165607A1 (en) Machine learning in augmented reality content items
US20170185365A1 (en) System and method for screen sharing
US11853399B2 (en) Multimodal sentiment classification
US11443554B2 (en) Determining and presenting user emotion
US20220319078A1 (en) Customizable avatar generation system
EP4164760A1 (en) Game result overlay system
US20220100351A1 (en) Media content transmission and management
WO2020053172A1 (en) Invoking chatbot in online communication session
US11798675B2 (en) Generating and searching data structures that facilitate measurement-informed treatment recommendation
US11477397B2 (en) Media content discard notification system
WO2018166241A1 (zh) 一种生成展示内容的方法与设备
US11301615B2 (en) Information processing device using recognition difficulty score and information processing method
JP2020086559A (ja) 感情分析システム
US20220301347A1 (en) Information processing apparatus, nonverbal information conversion system, and information processing method
KR20240077627A User emotion interaction method and system for extended reality based on nonverbal elements
WO2023279028A1 (en) Hybrid search system for customizable media

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17901179

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 10/12/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17901179

Country of ref document: EP

Kind code of ref document: A1