CN112948622A - Display content control method and device

Info

Publication number
CN112948622A
Authority
CN
China
Prior art keywords
weather
information
emotion
current
audio
Prior art date
Legal status
Pending
Application number
CN202110282092.7A
Other languages
Chinese (zh)
Inventor
曹琦
李禹�
何维
张聪
王骁逸
胡震宇
Current Assignee
Shenzhen Huole Science and Technology Development Co Ltd
Original Assignee
Shenzhen Huole Science and Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Huole Science and Technology Development Co Ltd filed Critical Shenzhen Huole Science and Technology Development Co Ltd
Priority to CN202110282092.7A priority Critical patent/CN112948622A/en
Publication of CN112948622A publication Critical patent/CN112948622A/en
Priority to PCT/CN2021/102085 priority patent/WO2022193465A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63Querying
    • G06F16/635Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/64Browsing; Visualisation therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/735Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/74Browsing; Visualisation therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The embodiments of the disclosure relate to the technical field of display control and provide a method and a device for controlling display content. The method includes: obtaining current weather influence information and emotion information of the current user; obtaining display data matched with the current weather influence information and the emotion information; determining a weather visualization page and weather audio/video data according to the current weather information and the current time; and determining the display content according to the weather visualization page, the weather audio/video data and the display data. Because the emotion of the current user is fully considered when determining how the weather information is displayed, the display effect of the displayed content better matches the user's emotional state.

Description

Display content control method and device
Technical Field
The embodiment of the disclosure relates to the technical field of display control, in particular to a method and a device for controlling display content.
Background
Weather information affects people's daily life and work and has become important information that people pay attention to. With the rapid development of projector display technology, it is not difficult to display weather information by projection. However, current projected weather displays simply present content such as temperature and weather conditions, and the displayed content is monotonous.
Disclosure of Invention
The embodiments of the present disclosure provide a display content control method and device to alleviate the technical problem that current weather information displays offer only monotonous content.
In a first aspect, an embodiment of the present disclosure provides a method for controlling display content, including:
acquiring current weather influence information and acquiring emotion information of a current user, wherein the current weather influence information is used for representing the influence of the current weather information on the emotion of the user;
acquiring display data matched with the current weather influence information and the emotion information, wherein the display data is preset display data for adjusting emotion;
determining a weather visualization page and weather audio/video data according to the current weather information and the current time;
and determining display content according to the weather visualization page, the weather audio and video data and the display data.
Optionally, the step of obtaining display data matched with the current weather influence information and the emotion information includes:
determining a display data type matched with the current weather influence information and the emotion information from a display data type database;
and acquiring display data matched with the display data type from a local display database.
Optionally, the step of obtaining display data matched with the current weather influence information and the emotion information includes:
determining a display data type matched with the current weather influence information and the emotion information;
sending the display data type to a cloud platform;
receiving display data sent by the cloud platform, wherein the display data are sent by the cloud platform after receiving the display data types.
Optionally, the display data includes audio data, and the step of obtaining the display data matched with the current weather influence information and the emotion information includes at least one of the following ways:
when the current weather influence information represents positive emotion influence or neutral emotion influence, if the emotion information represents pleasure or neutrality, randomly selecting first audio data from a preset audio library, if the emotion information represents anger, randomly selecting second audio data of a relieving type from the preset audio library, and if the emotion information represents sadness, randomly selecting third audio data of a cheerful type, a relieving type or a joke type from the preset audio library;
when the current weather influence information represents negative emotional influence, if the emotion information represents pleasure or neutrality, randomly selecting fourth audio data of cheerful, exciting or joke types from the preset audio library, if the emotion information represents angry, randomly selecting fifth audio data of cheerful or relaxing types from the preset audio library, and if the emotion information represents sadness, randomly selecting sixth audio data of cheerful, relaxing, exciting or joke types from the preset audio library.
Optionally, the step of obtaining the current weather influence information includes:
acquiring the current weather information;
acquiring current weather influence information corresponding to the current weather information according to the mapping relation between the weather information and the weather influence information; wherein the weather influence information characterizes a positive emotional influence, a neutral emotional influence, or a negative emotional influence.
Optionally, the step of obtaining the emotion information of the current user includes:
collecting human body characteristic information of a current user;
and determining the emotion information of the current user according to the human body characteristic information.
Optionally, the step of determining emotion information of the current user according to the human body feature information includes:
inputting the human body characteristic information into an emotion classification model obtained through pre-training to obtain an emotion type output by the emotion classification model;
and determining the emotion information of the current user according to the emotion type.
Optionally, the step of determining a weather visualization page and weather audio/video data according to the current weather information and the current time includes:
determining a time period to which the current time belongs;
and determining a corresponding weather visualization page from a weather visualization effect library and determining corresponding weather audio and video data from a weather audio and video library according to the current weather information and the time period.
Optionally, the step of determining the display content according to the weather visualization page, the weather audio/video data, and the display data includes:
when the weather visualization page and the display data meet the display conditions, determining to display the weather visualization page and the display data in a projection manner;
and when the weather audio and video data are detected to meet the playing condition, determining to play the weather audio and video data.
In a second aspect, an embodiment of the present disclosure provides a control apparatus for displaying content, including:
the first obtaining unit is used for obtaining current weather influence information and obtaining emotion information of a current user, wherein the current weather influence information is used for representing influence of the current weather information on emotion of the user;
the second acquisition unit is used for acquiring display data matched with the current weather influence information and the emotion information, wherein the display data is preset display data used for adjusting emotion;
the first determining unit is used for determining a weather visualization page and weather audio/video data according to the current weather information and the current time;
and the second determining unit is used for determining display content according to the weather visualization page, the weather audio and video data and the display data.
A further aspect of embodiments of the present disclosure provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor is configured to call the computer program in the memory to execute the method provided in the first aspect and the various optional implementations of the first aspect.
A further aspect of embodiments of the present disclosure provides a storage medium including instructions that, when executed on a computer, cause the computer to perform the method provided in the first aspect and the various alternative implementations of the first aspect.
Compared with the prior art, in the scheme provided by the embodiments of the present disclosure, the current weather influence information and the current user's emotion information are first obtained; then display data matching the current weather influence information and the emotion information is obtained; a weather visualization page and weather audio/video data are determined according to the current weather information and the current time; and finally the display content is determined according to the weather visualization page, the weather audio/video data and the display data. Because the emotion of the current user is fully considered when determining how the weather information is displayed, the display effect of the displayed content better matches the user's emotional state.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present disclosure; those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a system architecture diagram illustrating the operation of a control device for displaying content according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a method for controlling display content according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a weather visualization page provided by an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a display content of a projector controlled by a terminal according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a control device for displaying content according to an embodiment of the present disclosure;
fig. 6 is a schematic physical structure diagram of a computer device according to an embodiment of the present disclosure.
Description of reference numerals:
100-a camera; 101-a microphone; 102-other sensors; 103-a weather information acquisition module; 104-emotion perception module; 105-an emotion matching module; 106-weather display module; 107-an audio playing module; 108-network storage; 109-local memory; 110-cloud interface.
Detailed Description
The terms "first," "second," and the like in the description and in the claims, and the above-described drawings of embodiments of the present disclosure, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules expressly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus, such that the division of modules presented in the disclosed embodiments is merely a logical division and may be implemented in a practical application in a different manner, such that multiple modules may be combined or integrated into another system or some features may be omitted or not implemented, and such that couplings or direct couplings or communicative connections between modules shown or discussed may be through interfaces, indirect couplings or communicative connections between modules may be electrical or the like, the embodiments of the present disclosure are not limited. Moreover, the modules or sub-modules described as separate components may or may not be physically separated, may or may not be physical modules, or may be distributed in a plurality of circuit modules, and some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiments of the present disclosure.
The embodiment of the present disclosure provides a control method for display content, which is mainly applied to scenes such as weather forecast, projection display, and the like, and is executed by a control device for display content, the control device for display content may be specifically installed in a projector, and for convenience of description, the control method for display content will be described below with the projector as an execution subject.
Fig. 1 is a system architecture diagram illustrating the operation of a control device for display content according to an embodiment of the present disclosure. Referring to fig. 1, the system may include a camera 100, a microphone 101, other sensors 102, a weather information acquisition module 103, an emotion perception module 104, an emotion matching module 105, a weather display module 106, an audio playing module 107, a network storage 108, a local storage 109, and a cloud interface 110. Wherein:
the camera 100 is used to capture a facial image of the user, the microphone 101 is used to capture the user's voice, and the other sensors 102 are used to collect other human body characteristic information of the user. The weather information acquisition module 103 acquires weather information through the cloud interface 110. The emotion perception module 104 analyzes the facial images and voice to determine the user's emotion information. The emotion matching module 105 retrieves, from the network storage 108 or the local storage 109, audio data that matches the weather information and the emotion information, and transmits the audio data to the audio playing module 107. The weather display module 106 displays a weather visualization page associated with the weather information and the current time; while the weather display module 106 displays the weather visualization page, the audio playing module 107 plays the audio data. The audio playing module 107 can also play the weather audio/video data associated with the weather information and the current time.
Therefore, the system can intelligently push corresponding audio according to current weather information and current user emotion while displaying the weather information, so that the blending sense of people and the environment is improved, the improvement of the mood of people is facilitated, and the user experience is improved.
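To make the data flow of fig. 1 concrete, the following is a minimal, self-contained Python sketch of the pipeline; the module stand-ins, function names and return values are illustrative assumptions and not part of the patent.

    # Minimal sketch of the Fig. 1 data flow (all stand-ins and values are illustrative assumptions).
    import random

    def acquire_weather():                        # stands in for module 103 reading cloud interface 110
        return "light_rain"

    def perceive_emotion(face_image, voice):      # stands in for module 104
        return "sad"                              # a real system would analyze the captured inputs

    def match_audio(weather_influence, emotion):  # stands in for module 105; see Table 2 for the full mapping
        audio_library = {"cheerful": ["song_a"], "relaxed": ["song_b"], "joke": ["clip_c"], "neutral": ["song_d"]}
        allowed = ["cheerful", "relaxed", "joke"] if emotion == "sad" else ["neutral"]
        return random.choice([a for t in allowed for a in audio_library[t]])

    def run_once(camera_frame=None, mic_sample=None):
        weather = acquire_weather()
        emotion = perceive_emotion(camera_frame, mic_sample)
        audio = match_audio("neutral", emotion)
        print(f"module 106: show weather page for {weather}; module 107: play {audio}")

    run_once()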
It should be noted that the system architecture shown in fig. 1 is only an example; it is described in order to illustrate the technical solutions of the embodiments of the present disclosure more clearly and does not limit them. As those of ordinary skill in the art will appreciate, with the evolution of the system and the emergence of new business scenarios, the technical solutions provided in the embodiments of the present disclosure are equally applicable to similar technical problems. These are described in detail below. It should also be noted that the following order of description of the embodiments does not imply a preferred order of the embodiments.
Fig. 2 is a flowchart of a method for controlling display content according to an embodiment of the present disclosure. Referring to fig. 2, the embodiment at least includes the following steps:
201. Acquire current weather influence information and acquire emotion information of the current user.
In this embodiment, the current weather influence information is used to represent the influence of the current weather information on the emotion of the user.
In this embodiment, the projector may obtain the current weather information from the network and then determine the current weather influence information corresponding to it. It should be noted that different weather information affects the user's emotion differently: for example, a sunny day has a positive influence on the user's emotion; cloudy skies, light rain, light snow and the like have a neutral influence; and an overcast sky, rain showers, a rainstorm, haze and the like have a negative influence. For convenience of description, the influence of the weather information on the user's emotion is represented by the weather influence information.
In one embodiment, the step of obtaining the current weather influence information includes:
acquiring the current weather information;
acquiring current weather influence information corresponding to the current weather information according to the mapping relation between the weather information and the weather influence information; wherein the weather influence information characterizes a positive emotional influence, a neutral emotional influence, or a negative emotional influence.
Specifically, the mapping relationship between the weather information and the weather influence information may be stored in the form of a data table, as shown in Table 1 below:

Weather influence information    Weather information
Positive                         Sunny
Neutral                          Cloudy, light rain, light snow
Negative                         Overcast, rain shower, rainstorm, haze, heavy snow, blizzard, sandstorm, etc.

Table 1
As can be seen from Table 1, when the weather information indicates a sunny day, the weather influence information is positive, meaning that sunny weather has a positive influence on the user's emotion. When the weather information indicates cloudy skies, light rain or light snow, the weather influence information is neutral. When the weather information indicates an overcast sky, rain showers, a rainstorm, haze, heavy snow, a blizzard or a sandstorm, the weather influence information is negative, meaning that such weather has a negative influence on the user's emotion. The specific weather conditions listed in Table 1 are merely examples and should not be construed as limiting the weather information of the present disclosure. Based on the correspondence (mapping relationship) between weather influence information and weather information in Table 1, after the projector acquires the weather information, it can determine the corresponding weather influence information.
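As an illustration of how such a mapping can be queried, here is a minimal Python sketch; the dictionary contents mirror Table 1, while the function name, key spellings and the default for unknown conditions are assumptions made for illustration only.

    # Sketch of the Table 1 lookup (names and key spellings are illustrative assumptions).
    WEATHER_TO_INFLUENCE = {
        "sunny": "positive",
        "cloudy": "neutral", "light_rain": "neutral", "light_snow": "neutral",
        "overcast": "negative", "rain_shower": "negative", "rainstorm": "negative",
        "haze": "negative", "heavy_snow": "negative", "blizzard": "negative", "sandstorm": "negative",
    }

    def weather_influence(current_weather: str) -> str:
        # Unknown conditions default to neutral here; the patent does not specify this case.
        return WEATHER_TO_INFLUENCE.get(current_weather, "neutral")

    print(weather_influence("rain_shower"))  # -> negative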
In one embodiment, the step of obtaining the emotion information of the current user includes:
collecting human body characteristic information of a current user;
and determining the emotion information of the current user according to the human body characteristic information.
Specifically, the projector may acquire at least one of human body feature information such as a face image, voice, blood pressure, heartbeat, and body movement of the current user through various sensors, and then determine emotion information of the current user according to the at least one of the human body feature information.
It should be noted that, when determining the current user's emotion information, the projector comprehensively considers at least one kind of human body characteristic information such as a facial image, voice, blood pressure, heartbeat and body movement. On the one hand, compared with relying on a single piece of information, this makes the resulting emotion information more accurate and comprehensive; on the other hand, it avoids the situation where the emotion information of the current user cannot be determined because the projector has not collected one particular kind of human body characteristic information.
For example, when the projector has collected the facial image and the voice information but not the blood pressure, heartbeat and body movement, it can still determine the emotion information from at least one of the facial image and the voice information without waiting for the remaining measurements. The projector may also receive human body characteristic information collected by third-party sensors and determine the emotion information from it; for example, the projector may receive a facial image captured by a third-party camera and voice information captured by a third-party microphone, and then determine the emotion information from that facial image and voice information.
Taking the determination of the current user's emotion information from voice as an example: a plurality of voice samples may be entered into the projector in advance, and an emotion information base may be built from feature information such as characteristic words, speech rate and average volume in those samples. When the emotion information of the current user needs to be determined from new voice, the projector searches the emotion information base, based on the feature information of the new voice, for matching emotion information and thereby determines the current user's emotion information. In practical applications, the pre-entered voice samples may include a plurality of positive-emotion, negative-emotion and neutral-emotion voice samples.
Taking the determination of the current user's emotion information from a facial image as an example: a plurality of facial images may be entered into the projector in advance, and an emotion information base may be built from feature information such as expressions, facial feature points and micro-expressions in those images. When the emotion information of the current user needs to be determined from a new facial image, the projector searches the emotion information base, based on the feature information of the new image, for matching emotion information and thereby determines the current user's emotion information. In practical applications, the pre-entered facial images may include a plurality of positive-emotion, negative-emotion and neutral-emotion images.
In addition, if there are multiple users, the projector may acquire emotion information for each of them. For example, the projector captures the facial expression of a first user and of a second user through the camera, determines the first user's emotion information from the first user's facial expression, and determines the second user's emotion information from the second user's facial expression. In an actual scene, the two users' emotion information may be the same or different; for example, the first user's emotion information may indicate happiness while the second user's indicates anger, or both may indicate happiness. How to acquire display data matching the current weather influence information and the emotion information of multiple users is described later.
In one embodiment, the step of determining emotion information of the current user according to the human body feature information includes:
inputting the human body characteristic information into an emotion classification model obtained through pre-training to obtain an emotion type output by the emotion classification model;
and determining the emotion information of the current user according to the emotion type.
Specifically, an emotion classification model may be trained in advance. After the projector obtains at least one kind of human body characteristic information such as a facial image, voice, blood pressure, heartbeat or body movement, it inputs the obtained information into the emotion classification model for recognition. The output of the emotion classification model is an emotion type, such as happy, neutral, angry or sad, which is not limited here. The projector may then determine the current user's emotion information based on the emotion type.
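A minimal sketch of the inference step is given below, assuming that feature extraction has already turned the captured signals into a numeric vector; the feature layout, the centroid values and the classifier class are illustrative assumptions, not the patent's actual model.

    # Sketch of emotion-classification inference (feature layout and values are illustrative assumptions).
    from dataclasses import dataclass

    @dataclass
    class EmotionClassifier:
        centroids: dict  # one pre-trained centroid per emotion type (values here are made up)

        def predict(self, features: list[float]) -> str:
            def dist(a, b):
                return sum((x - y) ** 2 for x, y in zip(a, b))
            return min(self.centroids, key=lambda label: dist(self.centroids[label], features))

    model = EmotionClassifier(centroids={
        "happy":   [0.9, 0.2, 0.1],
        "neutral": [0.5, 0.5, 0.5],
        "angry":   [0.2, 0.9, 0.8],
        "sad":     [0.1, 0.3, 0.9],
    })

    # The vector could encode, e.g., smile intensity, speech rate and heart rate (normalized).
    print(model.predict([0.15, 0.35, 0.85]))  # -> sad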
202. Acquire display data matched with the current weather influence information and the emotion information.
In this embodiment, the display data is preset display data for adjusting emotion.
In this embodiment, after the projector obtains the current weather influence information and the current user's emotion information, it may obtain the display data matched with them, since different combinations of weather influence information and emotion information correspond to different display data. It should be noted that the display data is preset display data for adjusting the user's emotion; it may be image data or audio data, where audio data includes music, voice and the like, which is not limited here.
In one embodiment, the step of obtaining presentation data matching the current weather influence information and the emotion information includes:
determining a display data type matched with the current weather influence information and the emotion information;
sending the display data type to a cloud platform;
receiving display data sent by the cloud platform, wherein the display data are sent by the cloud platform after receiving the display data types.
In one embodiment, the step of obtaining presentation data matching the current weather influence information and the emotion information includes:
determining a display data type matched with the current weather influence information and the emotion information from a display data type database;
and acquiring display data matched with the display data type from a local display database.
Specifically, the projector may obtain the display data from the cloud platform or from the local display database. In addition, there is a mapping relationship between the pair of weather influence information and emotion information on the one hand and the display data type on the other, and this mapping relationship may be stored in the form of a data table, as shown in Table 2 below:

Weather influence information    Emotion information    Display data type
Positive/neutral                 Happy/neutral          Cheerful/relaxed/excited/neutral
Positive/neutral                 Angry                  Relaxed
Positive/neutral                 Sad                    Cheerful/relaxed/joke
Negative                         Happy/neutral          Cheerful/excited/joke
Negative                         Angry                  Cheerful/relaxed
Negative                         Sad                    Cheerful/relaxed/excited/joke

Table 2
Taking the display data as audio data as an example, it can be seen from Table 2 that when the current weather influence information represents a positive or neutral emotional influence: if the emotion information represents happiness or neutrality, first audio data is randomly selected from a preset audio library (the audio type is not limited and may be cheerful, relaxed, excited or neutral); if the emotion information represents anger, second audio data of the relaxed type is randomly selected from the preset audio library; and if the emotion information represents sadness, third audio data of the cheerful, relaxed or joke type is randomly selected from the preset audio library.
When the current weather influence information represents a negative emotional influence: if the emotion information represents happiness or neutrality, fourth audio data of the cheerful, excited or joke type is randomly selected from the preset audio library; if the emotion information represents anger, fifth audio data of the cheerful or relaxed type is randomly selected from the preset audio library; and if the emotion information represents sadness, sixth audio data of the cheerful, relaxed, excited or joke type is randomly selected from the preset audio library.
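A minimal Python sketch of this selection logic follows, assuming a preset audio library keyed by audio type; the key spellings, library contents and function name are illustrative assumptions.

    # Sketch of the Table 2 audio selection (keys and library contents are illustrative assumptions).
    import random

    ALLOWED_TYPES = {
        # (weather influence, emotion) -> audio types that may be selected
        ("positive_or_neutral", "happy_or_neutral"): ["cheerful", "relaxed", "excited", "neutral"],
        ("positive_or_neutral", "angry"):            ["relaxed"],
        ("positive_or_neutral", "sad"):              ["cheerful", "relaxed", "joke"],
        ("negative",            "happy_or_neutral"): ["cheerful", "excited", "joke"],
        ("negative",            "angry"):            ["cheerful", "relaxed"],
        ("negative",            "sad"):              ["cheerful", "relaxed", "excited", "joke"],
    }

    AUDIO_LIBRARY = {  # preset audio library keyed by type (contents made up)
        "cheerful": ["track_01", "track_02"], "relaxed": ["track_03"],
        "excited": ["track_04"], "neutral": ["track_05"], "joke": ["clip_06"],
    }

    def select_audio(influence: str, emotion: str) -> str:
        types = ALLOWED_TYPES[(influence, emotion)]
        candidates = [audio for t in types for audio in AUDIO_LIBRARY[t]]
        return random.choice(candidates)

    print(select_audio("negative", "sad"))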
As noted above, if there are multiple users, the projector may obtain emotion information for each of them. How to obtain display data matching the current weather influence information and the emotion information of multiple users is described below, taking a first user and a second user as an example:
Assuming that the first user's emotion information is the same as the second user's, the projector may directly acquire display data matched with the current weather influence information and that emotion information.
Assuming that the first user's emotion information differs from the second user's, the projector determines the emotion information of the user with the lower emotion value as the target emotion information and obtains display data matching the current weather influence information and the target emotion information. It should be noted that the lower the emotion value, the stronger the negative emotion; conversely, the higher the emotion value, the weaker the negative emotion. For example, the emotion value corresponding to "happy" is higher than the emotion value corresponding to "angry". Taking the emotions listed in Table 2 as an example, the emotion values from low to high are ordered as follows: sad, angry, neutral, happy. As a further illustration: if the first user's emotion information is happy and the second user's is angry, the projector obtains display data matching the current weather influence information and "angry".
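A small sketch of picking the target emotion among several users, using the low-to-high ordering above; the numeric values and names are illustrative assumptions.

    # Sketch of choosing the target emotion for multiple users (numeric values are illustrative assumptions).
    EMOTION_VALUE = {"sad": 0, "angry": 1, "neutral": 2, "happy": 3}  # ordered from low to high

    def target_emotion(user_emotions: list[str]) -> str:
        # The emotion with the lowest value (strongest negative emotion) is used for matching.
        return min(user_emotions, key=EMOTION_VALUE.get)

    print(target_emotion(["happy", "angry"]))  # -> angry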
203. Determine a weather visualization page and weather audio/video data according to the current weather information and the current time.
In this embodiment, after the projector obtains the current weather information, it generates a weather visualization page that is differentiated according to the current time and determines the corresponding weather audio/video data.
For example, for the same cloudy weather, if the current time falls in the morning, the weather visualization page shows a bright white tone; if the current time falls in the evening, the page shows sunset colors. As another example, for thunderstorm weather, the weather visualization page can play a thunder sound effect in addition to a thunderstorm animation, reminding the user of the weather more intuitively. The weather visualization page can be projected in real time onto the user's wall or projection screen, and together with the accompanying audio provides an effective information display and reminder.
For the same weather condition, a plurality of different weather visualization pages can be set for different times: for example, when the weather is cloudy, the clouds in the corresponding weather visualization page appear bright white in the morning and take on sunset colors in the evening.
In an embodiment, the step of determining a weather visualization page and weather audio/video data according to the current weather information and the current time includes: determining the time period to which the current time belongs; and determining the corresponding weather visualization page from a weather visualization effect library and the corresponding weather audio/video data from a weather audio/video library according to the current weather information and the time period. In this embodiment, because the weather visualization page and weather audio/video data are keyed to the time period to which the current time belongs, they do not need to be generated in real time for every moment; this improves operating efficiency and reduces the power consumption of the electronic device (the projector).
The time periods are divided according to the time of day, for example into early morning, morning, afternoon, dusk and night. The early-morning period may be set to 5:00-7:00, the morning period to 7:00-12:30, the afternoon period to 12:30-16:30, the dusk period to 16:30-19:00, the night period to 19:00-5:00, and so on.
The time periods described above are merely illustrative, to make the solutions in the embodiments of the present disclosure easier to understand. More or fewer time periods may be defined, and they may differ by season; likewise, the time range corresponding to each period may be set differently, including differently for different seasons, and so on.
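A minimal sketch of mapping the current time to one of these periods, including the night period that wraps past midnight, is shown below; the period names and boundaries follow the example values above and are otherwise assumptions.

    # Sketch of determining the time period to which the current time belongs
    # (period names and boundaries follow the illustrative values in the text).
    from datetime import datetime, time

    PERIODS = [  # (name, start, end), end exclusive; "night" (19:00-5:00) wraps past midnight
        ("early_morning", time(5, 0),   time(7, 0)),
        ("morning",       time(7, 0),   time(12, 30)),
        ("afternoon",     time(12, 30), time(16, 30)),
        ("dusk",          time(16, 30), time(19, 0)),
    ]

    def time_period(now: time) -> str:
        for name, start, end in PERIODS:
            if start <= now < end:
                return name
        return "night"

    print(time_period(datetime.now().time()))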
After the time period to which the current time belongs has been determined, the corresponding weather visualization page is determined from a weather visualization effect library according to the current weather information and the time period. In addition, in an embodiment, the weather visualization pages corresponding to the current weather information and to future weather information may be determined from the weather visualization effect library according to the current weather information, the future weather information and their respective time periods. The weather visualization effect library stores, for each kind of weather information, the different weather visualization pages corresponding to different time periods.
The weather visualization page corresponding to the future weather information is determined from the weather visualization effect library as follows: the future time period is determined according to the time to which the future weather information corresponds; and the weather visualization page matching the future time period and the future weather information is then determined from the weather visualization effect library.
Referring to fig. 3, fig. 3 is a schematic view of a weather visualization page according to an embodiment of the present disclosure. As shown in fig. 3, the content displayed on the weather visualization page includes: weather information for 12:00 (snowing, -4 ℃), 13:00 (snowing, -4 ℃), 14:00 (snowing, -1 ℃), 15:00 (snowing, -3 ℃) and 16:00 (snowing, 0 ℃), as well as weather information for tomorrow (snowing, -2 ℃) and for three days later (snowing, -9 ℃), where "snowing" is displayed in the form of a snowfall image. In addition, the rendering effect of the weather visualization page can be differentiated across time periods, and related sound effects can accompany the displayed weather information to achieve a good display and reminder effect. It should be noted that the weather visualization page of fig. 3 may further provide an audio playing control through which related audio data is played; this audio data may be the display data described above, so that the audio data is played via the control while the weather visualization page is displayed.
In an embodiment, the step of generating a weather visualization page according to the current weather information and the current time includes:
determining a weather visualization effect according to the current weather information and the current time;
and generating a weather visualization page according to the weather visualization effect.
Further, the step of determining the weather visualization effect according to the current weather information and the current time includes:
if the current time is determined to be within a preset time period, determining a weather visualization effect matched with the current weather information and the preset time period from a weather visualization effect library; wherein the weather visualization effect comprises a display effect and an audio effect.
Specifically, different weather information and times correspond to different weather visualization effects, and the projector can generate the corresponding weather visualization page from these effects. Further, the weather visualization effects correspond to different combinations of weather information and time periods: after obtaining the weather information and the current time, the projector can determine the time period in which the current time falls and then determine the weather visualization effect from the weather information and that time period. With reference to fig. 3, the projector matches a weather visualization effect for the current weather information and the current time, matches effects for the following hours according to those hours and their corresponding weather information, and matches effects for the following days likewise. It then generates a weather visualization page from the effect corresponding to the current time, the effects for the following hours and the effects for the following days, and finally presents the page to the user. It should be noted that the projector can invoke a three-dimensional visualization effect model and project the weather visualization page through it; the model uses perspective projection, so the weather visualization page seen by the user has a realistic three-dimensional (3D) effect, which helps improve the user experience.
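The following is a minimal sketch of assembling such a page from an effect library keyed by weather and time period; the library keys, the page structure and all names are illustrative assumptions rather than the patent's data model.

    # Sketch of assembling a weather visualization page from an effect library
    # (keys, names and page structure are illustrative assumptions).
    from datetime import datetime, timedelta

    EFFECT_LIBRARY = {  # (weather, period) -> visualization effect identifier
        ("light_snow", "afternoon"): "snow_bright",
        ("light_snow", "dusk"):      "snow_sunset",
        ("light_snow", "night"):     "snow_dark",
    }

    def period_of(dt: datetime) -> str:
        h = dt.hour + dt.minute / 60
        if 5 <= h < 7:       return "early_morning"
        if 7 <= h < 12.5:    return "morning"
        if 12.5 <= h < 16.5: return "afternoon"
        if 16.5 <= h < 19:   return "dusk"
        return "night"

    def build_page(forecast: list) -> dict:
        # forecast entries: (time, weather, temperature in degrees Celsius)
        return {"entries": [
            {"time": dt.strftime("%H:%M"), "weather": w, "temp_c": t,
             "effect": EFFECT_LIBRARY.get((w, period_of(dt)), "default")}
            for dt, w, t in forecast
        ]}

    now = datetime(2021, 3, 16, 12, 0)
    print(build_page([(now + timedelta(hours=i), "light_snow", -4 + i) for i in range(3)]))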
204. Determine display content according to the weather visualization page, the weather audio/video data and the display data.
In this embodiment, after the projector obtains the weather visualization page, the weather audio/video data and the display data, it may determine the display content from them. For example, the display content may consist of the weather visualization page with an image, voice, music or animation corresponding to the display data embedded in it. The projector can prepare the display content in the background in advance and display it immediately when needed, which improves display efficiency. In an actual scene, taking the display data as audio data as an example, when the projector needs to present the content it can play the audio data while projecting the weather visualization page.
In an embodiment, the step of determining the display content according to the weather visualization page, the weather audio/video data, and the display data includes:
when the weather visualization page and the display data meet the display conditions, determining to display the weather visualization page and the display data in a projection manner;
and when the weather audio and video data are detected to meet the playing condition, determining to play the weather audio and video data.
Specifically, the projector detects whether the current projection shows an interface corresponding to weather information. If so, the weather visualization page and the display data are determined to meet the display condition and are displayed by projection; if not, they are determined not to meet the display condition and no projection display is performed.
The interface corresponding to the weather information may be, for example, the interface of a weather forecast APP. If what is currently projected is not an interface corresponding to weather information, such as a movie playing interface, no projection display is performed, so as not to disturb the user's movie. If the current projection shows an interface corresponding to the weather information, the content to be displayed is projected in real time, and the weather visualization page is updated whenever the weather information is updated.
The projector also detects whether the current projection shows an interface corresponding to the weather information and whether the preset playing time has been reached. If both are true, the weather audio/video data is determined to meet the playing condition and is played; otherwise, it is determined not to meet the playing condition and is not played.
The preset playing time may correspond to a time period; for example, the starting time of the time period may be used as the preset playing time, or the preset playing time may be determined in other ways. If the current projection is not an interface corresponding to weather information, the weather audio/video data is not played, so as not to interfere with the user; if the preset playing time has not been reached, the weather audio/video data is not played, which prevents it from being played too frequently and degrading the user experience; and if the current projection is not a weather-related interface and the preset playing time has not been reached, the weather audio/video data is likewise not played.
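A small sketch of these two condition checks follows; the interface names, flags and helper names are illustrative assumptions.

    # Sketch of the display and play condition checks (interface names and helpers are illustrative assumptions).
    from datetime import datetime, time

    def should_project(current_interface: str) -> bool:
        # Display condition: only project the weather page and display data over a weather-related interface.
        return current_interface == "weather_forecast_app"

    def should_play_av(current_interface: str, now: time, preset_play_time: time) -> bool:
        # Play condition: a weather interface is showing AND the preset playing time has been reached.
        return current_interface == "weather_forecast_app" and now >= preset_play_time

    now = datetime.now().time()
    print(should_project("movie_player"))                            # False: do not disturb playback
    print(should_play_av("weather_forecast_app", now, time(12, 30)))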
It should be noted that while the display content is being shown, the projector still obtains the user's emotion information in real time. If it detects that the user's emotion information has changed, it obtains the corresponding display data in real time, determines new display content from the weather visualization page and the newly obtained display data, and switches to displaying the new content, so that the display content adapts to changes in the user's emotion. Similarly, the projector also obtains the current weather influence information in real time while displaying. If it detects that the weather influence information has changed during the day, it obtains the real-time weather influence information, determines display data from that information and the emotion information, determines the weather visualization page and the weather audio/video data from the real-time weather information and the current time, determines new display content from these, and switches to displaying it, so that the display content adapts to changes in the weather information.
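A schematic polling loop for this adaptive behaviour is sketched below; the polling interval, the sensor stubs and the function names are illustrative assumptions rather than the patent's implementation.

    # Sketch of adapting the display content when emotion or weather changes
    # (polling interval and all stubs are illustrative assumptions).
    import random, time as clock

    def read_emotion():  return random.choice(["happy", "neutral", "angry", "sad"])
    def read_weather():  return random.choice(["sunny", "overcast"])
    def build_content(weather, emotion):  return f"page({weather}) + audio_for({emotion})"

    last_emotion, last_weather = None, None
    for _ in range(5):                      # a real device would loop continuously
        emotion, weather = read_emotion(), read_weather()
        if emotion != last_emotion or weather != last_weather:
            print("switching to:", build_content(weather, emotion))
            last_emotion, last_weather = emotion, weather
        clock.sleep(0.1)                    # re-check periodically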
It should be noted that the display content is related both to the current weather and to the user's current mood, and thus helps the user adjust his or her emotional state. For example, when the user is in a bad mood (such as sad or angry), the projected display content may be cheerful, relaxed or joke-type content; when the user is in a good mood (such as happy), the projected display content may be cheerful, relaxed, exciting or neutral content.
In one scenario, assume that the content currently projected by the projector includes the following information: place: Beijing; time: five in the afternoon; weather: cloudy; music: song X. If the projector receives a song switching instruction sent by the user through a mobile terminal, it switches songs according to the instruction. The song switched to may be a preset default song, or the projector may determine the song to switch to from a song identifier carried in the instruction. In other words, in this scenario the user terminal and the projector may communicate wirelessly or via Bluetooth, and a projection control interface is provided on the user terminal through which the content displayed by the projector, such as the songs it plays and the weather visualization page, can be customized. For example, if the user is not satisfied with the weather visualization page generated by the projector, the user can edit the page through the projection control interface; after editing is completed, the projector displays the edited weather visualization page. The projector can also store the edited page together with the corresponding weather information and time information so that it can be called directly next time, which on the one hand improves the efficiency of generating the weather visualization page and on the other hand better meets the user's needs. Specifically, referring to fig. 4, fig. 4 is a schematic diagram of controlling the content displayed by a projector through a terminal according to an embodiment of the present disclosure. As shown in fig. 4, a "weather visualization page" control, a "display data" control and a "projection parameter" control are provided on the projection control interface. The user may click the "weather visualization page" control to edit the weather visualization page, and the information of the edited page is sent to the projector over the network so that the projector can project the relevant content. Likewise, the user may click the "display data" control or the "projection parameter" control to edit the display data or the projection parameters.
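A minimal sketch of handling such a song-switch instruction on the projector side is given below; the message format, the default song and the handler name are illustrative assumptions.

    # Sketch of handling a song-switch instruction from the user terminal
    # (message format and names are illustrative assumptions).
    DEFAULT_SONG = "default_track"
    SONG_LIBRARY = {"song_42": "Song X", "song_43": "Song Y"}

    def handle_switch_song(instruction: dict) -> str:
        # The instruction may carry a song identifier; otherwise fall back to the preset default song.
        song_id = instruction.get("song_id")
        return SONG_LIBRARY.get(song_id, DEFAULT_SONG)

    print(handle_switch_song({"type": "switch_song", "song_id": "song_43"}))  # -> Song Y
    print(handle_switch_song({"type": "switch_song"}))                        # -> default_track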
In summary, in the scheme provided by the embodiments of the present disclosure, the current weather influence information and the current user's emotion information are first obtained; display data matching them is then obtained; a weather visualization page and weather audio/video data are determined according to the current weather information and the current time; and finally the display content is determined according to the weather visualization page, the weather audio/video data and the display data. Because the emotion of the current user is fully considered when determining how the weather information is displayed, the display effect of the displayed content better matches the user's emotional state.
To better implement the above solution of the embodiments of the present disclosure, a related apparatus for implementing the solution is provided below. Fig. 5 is a schematic structural diagram of a control device for display content according to an embodiment of the present disclosure. Referring to fig. 5, the control device for display content includes:
a first obtaining unit 501, configured to obtain current weather influence information and obtain emotion information of a current user, where the current weather influence information is used to represent an influence of the current weather information on an emotion of the user;
a second obtaining unit 502, configured to obtain display data matched with the current weather influence information and the emotion information, where the display data is preset display data for adjusting emotion;
a first determining unit 503, configured to determine a weather visualization page and weather audio/video data according to the current weather information and the current time;
a second determining unit 504, configured to determine display content according to the weather visualization page, the weather audio/video data, and the display data.
According to the scheme provided by the embodiments of the present disclosure, the current weather influence information and the current user's emotion information are first obtained; display data matching them is then obtained; a weather visualization page and weather audio/video data are determined according to the current weather information and the current time; and finally the display content is determined according to the weather visualization page, the weather audio/video data and the display data. Because the emotion of the current user is fully considered when determining how the weather information is displayed, the display effect of the displayed content better matches the user's emotional state.
Optionally, in some possible embodiments of the present disclosure, the second obtaining unit 502 is specifically configured to determine, from a display data type database, a display data type matched with the current weather influence information and the emotion information; and acquire display data matched with the display data type from a local display database.
Optionally, in some possible embodiments of the present disclosure, the second obtaining unit 502 is specifically configured to determine a display data type matching the current weather influence information and the emotion information; sending the display data type to a cloud platform; receiving display data sent by the cloud platform, wherein the display data are sent by the cloud platform after receiving the display data types.
Optionally, in some possible embodiments of the present disclosure, the second obtaining unit 502 is specifically configured to: when the current weather influence information represents positive emotion influence or neutral emotion influence, if the emotion information represents pleasure or neutrality, randomly selecting first audio data from a preset audio library, if the emotion information represents anger, randomly selecting second audio data of a relieving type from the preset audio library, and if the emotion information represents sadness, randomly selecting third audio data of a cheerful type, a relieving type or a joke type from the preset audio library; or when the current weather influence information represents negative emotional influence, if the emotion information represents pleasure or neutrality, randomly selecting fourth audio data of cheerful, exciting or joke types from the preset audio library, if the emotion information represents angry, randomly selecting fifth audio data of cheerful or relaxing types from the preset audio library, and if the emotion information represents sadness, randomly selecting sixth audio data of cheerful, relaxing, exciting or joke types from the preset audio library.
Optionally, in some possible embodiments of the present disclosure, the first obtaining unit 501 is specifically configured to acquire the current weather information, and acquire current weather influence information corresponding to the current weather information according to a mapping relation between weather information and weather influence information, wherein the weather influence information characterizes a positive emotional influence, a neutral emotional influence or a negative emotional influence.
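A possible shape for the mapping relation between weather information and weather influence information is a simple dictionary; the concrete weather conditions and their assigned influences below are examples chosen for illustration, not the mapping actually used by the embodiment.

```python
# Example mapping only; the real mapping relation is defined by the embodiment.
WEATHER_INFLUENCE = {
    "sunny":    "positive",
    "cloudy":   "neutral",
    "overcast": "neutral",
    "rain":     "negative",
    "storm":    "negative",
    "snow":     "positive",
}

def influence_of(weather_condition):
    # Fall back to a neutral emotional influence for unmapped conditions.
    return WEATHER_INFLUENCE.get(weather_condition, "neutral")
```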
Optionally, in some possible embodiments of the present disclosure, the first obtaining unit 501 is specifically configured to collect human body characteristic information of the current user, and determine the emotion information of the current user according to the human body characteristic information.
Further, the first obtaining unit 501 is specifically configured to input the human body characteristic information into a pre-trained emotion classification model to obtain an emotion type output by the emotion classification model, and determine the emotion information of the current user according to the emotion type.
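As a sketch of this step, assuming the pre-trained emotion classification model exposes a scikit-learn-style predict() method and returns an index into a fixed label set (both assumptions made for illustration):

```python
# Illustrative wrapper around a pre-trained emotion classification model.
EMOTION_LABELS = ("pleasure", "neutral", "anger", "sadness")

def classify_emotion(model, features):
    """features: a vector of human body characteristic information (e.g. facial or
    posture features); model: any pre-trained classifier exposing predict()."""
    label_index = int(model.predict([features])[0])
    return EMOTION_LABELS[label_index]
```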
Optionally, in some possible embodiments of the present disclosure, the first determining unit 503 is specifically configured to determine a time period to which the current time belongs, and, according to the current weather information and the time period, determine a corresponding weather visualization page from a weather visualization effect library and corresponding weather audio/video data from a weather audio/video library.
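A minimal sketch of this lookup, assuming the day is partitioned into morning, afternoon and night and that both libraries are keyed by (weather, time period); the period boundaries and library layout are assumptions for illustration.

```python
from datetime import datetime

def time_period(now: datetime) -> str:
    # Illustrative partition of the day; the embodiment may divide time differently.
    hour = now.hour
    if 6 <= hour < 12:
        return "morning"
    if 12 <= hour < 18:
        return "afternoon"
    return "night"

def lookup_assets(weather, now, visualization_library, av_library):
    """Return the weather visualization page and the weather audio/video data."""
    period = time_period(now)
    return visualization_library[(weather, period)], av_library[(weather, period)]
```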
Optionally, in some possible embodiments of the present disclosure, the second determining unit 504 is specifically configured to determine to display the weather visualization page and the display data in a projection manner when it is detected that the weather visualization page and the display data satisfy a display condition, and to determine to play the weather audio/video data when it is detected that the weather audio/video data satisfies a playing condition.
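A sketch of this final gating step, where the display and playing conditions are passed in as predicates and the output devices are injected; all of these names are assumptions made only to keep the sketch self-contained.

```python
# Illustrative gating of the determined display content.
def output_display_content(projector, player, page, display_data, av_data,
                           display_condition, playing_condition):
    if display_condition(page, display_data):
        projector.project(page, display_data)   # display in a projection manner
    if playing_condition(av_data):
        player.play(av_data)                    # play the weather audio/video data
```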
Fig. 6 illustrates a schematic diagram of the physical structure of a computer device. As shown in Fig. 6, the computer device may include: a processor 601, a communications interface 602, a memory 603 and a communication bus 604, wherein the processor 601, the communications interface 602 and the memory 603 communicate with one another via the communication bus 604. The processor 601 may call logic instructions in the memory 603 to perform the following method: acquiring current weather influence information and acquiring emotion information of a current user, wherein the current weather influence information is used for representing the influence of the current weather information on the emotion of the user; acquiring display data matched with the current weather influence information and the emotion information, wherein the display data is preset display data for adjusting emotion; determining a weather visualization page and weather audio/video data according to the current weather information and the current time; and determining display content according to the weather visualization page, the weather audio/video data and the display data.
In addition, when sold or used as an independent product, the logic instructions in the memory 603 may be implemented in the form of a software functional unit and stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In another aspect, embodiments of the present disclosure further provide a storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, performs the method provided by the foregoing embodiments, the method including, for example: acquiring current weather influence information and acquiring emotion information of a current user, wherein the current weather influence information is used for representing the influence of the current weather information on the emotion of the user; acquiring display data matched with the current weather influence information and the emotion information, wherein the display data is preset display data for adjusting emotion; determining a weather visualization page and weather audio/video data according to the current weather information and the current time; and determining display content according to the weather visualization page, the weather audio/video data and the display data.
The above-described embodiments of the apparatus are merely illustrative. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement this without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for controlling display content, comprising:
acquiring current weather influence information and acquiring emotion information of a current user, wherein the current weather influence information is used for representing the influence of the current weather information on the emotion of the user;
acquiring display data matched with the current weather influence information and the emotion information, wherein the display data is preset display data for adjusting emotion;
determining a weather visualization page and weather audio/video data according to the current weather information and the current time;
and determining display content according to the weather visualization page, the weather audio and video data and the display data.
2. The method for controlling the display content according to claim 1, wherein the step of acquiring display data matched with the current weather influence information and the emotion information includes:
determining a display data type matched with the current weather influence information and the emotion information from a display data type database;
and acquiring display data matched with the display data type from a local display database.
3. The method for controlling the display content according to claim 1, wherein the step of acquiring display data matched with the current weather influence information and the emotion information includes:
determining a display data type matched with the current weather influence information and the emotion information;
sending the display data type to a cloud platform;
receiving display data sent by the cloud platform, wherein the display data are sent by the cloud platform after receiving the display data types.
4. The method for controlling the display content according to claim 1, wherein the display data includes audio data, and the step of obtaining the display data matching the current weather influence information and the emotion information includes at least one of:
when the current weather influence information represents a positive emotional influence or a neutral emotional influence: if the emotion information represents pleasure or neutrality, randomly selecting first audio data from a preset audio library; if the emotion information represents anger, randomly selecting second audio data of a soothing type from the preset audio library; and if the emotion information represents sadness, randomly selecting third audio data of a cheerful type, a soothing type or a joke type from the preset audio library;
when the current weather influence information represents a negative emotional influence: if the emotion information represents pleasure or neutrality, randomly selecting fourth audio data of a cheerful, exciting or joke type from the preset audio library; if the emotion information represents anger, randomly selecting fifth audio data of a cheerful or soothing type from the preset audio library; and if the emotion information represents sadness, randomly selecting sixth audio data of a cheerful, soothing, exciting or joke type from the preset audio library.
5. The method for controlling the display content according to claim 1, wherein the step of obtaining the current weather influence information includes:
acquiring the current weather information;
acquiring current weather influence information corresponding to the current weather information according to the mapping relation between the weather information and the weather influence information; wherein the weather influence information characterizes a positive emotional influence, a neutral emotional influence or a negative emotional influence.
6. The method for controlling the display content according to claim 1, wherein the step of acquiring the emotion information of the current user comprises:
collecting human body characteristic information of a current user;
and determining the emotion information of the current user according to the human body characteristic information.
7. The method for controlling display content according to claim 6, wherein the step of determining the emotion information of the current user based on the human body characteristic information includes:
inputting the human body characteristic information into an emotion classification model obtained through pre-training to obtain an emotion type output by the emotion classification model;
and determining the emotion information of the current user according to the emotion type.
8. The method for controlling the display content according to any one of claims 1 to 7, wherein the step of determining the weather visualization page and the weather audio/video data according to the current weather information and the current time comprises:
determining a time period to which the current time belongs;
and determining a corresponding weather visualization page from a weather visualization effect library and determining corresponding weather audio and video data from a weather audio and video library according to the current weather information and the time period.
9. The method for controlling the display content according to any one of claims 1 to 7, wherein the step of determining the display content according to the weather visualization page, the weather audio-video data and the display data comprises:
when it is detected that the weather visualization page and the display data satisfy a display condition, determining to display the weather visualization page and the display data in a projection manner;
and when the weather audio and video data are detected to meet the playing condition, determining to play the weather audio and video data.
10. A device for controlling display content, comprising:
the first obtaining unit is used for obtaining current weather influence information and obtaining emotion information of a current user, wherein the current weather influence information is used for representing influence of the current weather information on emotion of the user;
the second acquisition unit is used for acquiring display data matched with the current weather influence information and the emotion information, wherein the display data is preset display data used for adjusting emotion;
the first determining unit is used for determining a weather visualization page and weather audio/video data according to the current weather information and the current time;
and the second determining unit is used for determining display content according to the weather visualization page, the weather audio and video data and the display data.
CN202110282092.7A 2021-03-16 2021-03-16 Display content control method and device Pending CN112948622A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110282092.7A CN112948622A (en) 2021-03-16 2021-03-16 Display content control method and device
PCT/CN2021/102085 WO2022193465A1 (en) 2021-03-16 2021-06-24 Method and device for controlling display content

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110282092.7A CN112948622A (en) 2021-03-16 2021-03-16 Display content control method and device

Publications (1)

Publication Number Publication Date
CN112948622A true CN112948622A (en) 2021-06-11

Family

ID=76230149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110282092.7A Pending CN112948622A (en) 2021-03-16 2021-03-16 Display content control method and device

Country Status (2)

Country Link
CN (1) CN112948622A (en)
WO (1) WO2022193465A1 (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106302989B (en) * 2016-07-28 2019-10-15 维沃移动通信有限公司 The display methods and its mobile terminal of Weather information
CN109190459A (en) * 2018-07-20 2019-01-11 上海博泰悦臻电子设备制造有限公司 A kind of car owner's Emotion identification and adjusting method, storage medium and onboard system
US20200064986A1 (en) * 2018-08-22 2020-02-27 Caressa Corporation Voice-enabled mood improvement system for seniors
CN112948622A (en) * 2021-03-16 2021-06-11 深圳市火乐科技发展有限公司 Display content control method and device

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102970364A (en) * 2012-11-21 2013-03-13 北京奇虎科技有限公司 Method for automatically changing subject and client
US20170262256A1 (en) * 2016-03-10 2017-09-14 Panasonic Automotive Systems Company of America, Division of Panasonic Corporation of North Americ Environment based entertainment
CN107340947A (en) * 2017-06-23 2017-11-10 珠海市魅族科技有限公司 A kind of interface adjusting method and device, computer installation and storage medium
CN109446375A (en) * 2018-10-29 2019-03-08 广州小鹏汽车科技有限公司 Adaptive method for playing music and system applied to vehicle
CN109916014A (en) * 2019-02-19 2019-06-21 奥克斯空调股份有限公司 A kind of information cuing method, device, electronic equipment and computer readable storage medium
CN110147930A (en) * 2019-04-16 2019-08-20 平安科技(深圳)有限公司 Data statistical approach, device and storage medium based on big data analysis
WO2020253372A1 (en) * 2019-06-19 2020-12-24 深圳壹账通智能科技有限公司 Big data analytics-based information pushing method, apparatus and device, and storage medium
CN111414506A (en) * 2020-03-13 2020-07-14 腾讯科技(深圳)有限公司 Emotion processing method and device based on artificial intelligence, electronic equipment and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022193465A1 (en) * 2021-03-16 2022-09-22 深圳市火乐科技发展有限公司 Method and device for controlling display content

Also Published As

Publication number Publication date
WO2022193465A1 (en) 2022-09-22

Similar Documents

Publication Publication Date Title
US10559323B2 (en) Audio and video synchronizing perceptual model
CN107832434A (en) Method and apparatus based on interactive voice generation multimedia play list
CN108922525B (en) Voice processing method, device, storage medium and electronic equipment
WO2019034026A1 (en) Emotion recognition-based artwork recommendation method and device, medium, and electronic apparatus
US10783884B2 (en) Electronic device-awakening method and apparatus, device and computer-readable storage medium
CN112188266A (en) Video generation method and device and electronic equipment
US9208205B2 (en) Information processing device and program
US20110231194A1 (en) Interactive Speech Preparation
WO2021031733A1 (en) Method for generating video special effect, and terminal
CN113707113B (en) User singing voice repairing method and device and electronic equipment
CN109164713B (en) Intelligent household control method and device
US20240004606A1 (en) Audio playback method and apparatus, computer readable storage medium, and electronic device
CN112667346A (en) Weather data display method and device, electronic equipment and storage medium
CN112948622A (en) Display content control method and device
CN110660375A (en) Method, device and equipment for generating music
CN110019919A (en) A kind of generation method and device of the rhymed lyrics
CN112492400B (en) Interaction method, device, equipment, communication method and shooting method
CN112287129A (en) Audio data processing method and device and electronic equipment
JP6515899B2 (en) Voice interactive apparatus and control method thereof
US20220189500A1 (en) System and methodology for modulation of dynamic gaps in speech
CN113923477A (en) Video processing method, video processing device, electronic equipment and storage medium
Karpouzis et al. Induction, recording and recognition of natural emotions from facial expressions and speech prosody
CN113032616A (en) Audio recommendation method and device, computer equipment and storage medium
CN112750456A (en) Voice data processing method and device in instant messaging application and electronic equipment
JP6838739B2 (en) Recent memory support device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination